Amazon WorkSpaces Personal Supports Rocky Linux 9, Red Hat Enterprise Linux 9, and Ubuntu 24.04

AWS has announced the availability of new Linux bundles for Amazon WorkSpaces Personal: Rocky Linux 9, Red Hat Enterprise Linux 9, and Ubuntu 24.04. With these bundles, customers can launch WorkSpaces powered by the latest enterprise-grade Linux operating systems and take advantage of modern versions of Linux packages only available in these updated releases. While bundles powered by Rocky Linux 8, Red Hat Enterprise Linux 8, and Ubuntu 22.04 remain available, the new OS options bring access to the latest software ecosystems, improved security postures, and the extended long-term support lifecycles offered by each respective distribution. The new bundles also provide a migration path for Amazon Linux 2 customers ahead of its end of life in June 2026.

You can get started with managed Rocky Linux 9, Red Hat Enterprise Linux 9, or Ubuntu 24.04 WorkSpaces bundles by selecting one when creating a new Linux WorkSpace. The new bundles are available in all AWS Regions where Amazon WorkSpaces is available. For pricing information, visit the Amazon WorkSpaces pricing page.
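As a rough sketch of what "selecting a bundle when creating a new Linux WorkSpace" looks like from the AWS CLI: the `describe-workspace-bundles` and `create-workspaces` commands below are standard WorkSpaces CLI operations, but the directory ID, user name, bundle ID, and the exact bundle name filter are placeholders you would replace with values from your own environment.

```shell
# List Amazon-owned WorkSpaces bundles and filter for Ubuntu-based ones
# (the 'Ubuntu' name filter is illustrative; inspect the output to find
# the Ubuntu 24.04 bundle ID for your Region).
aws workspaces describe-workspace-bundles --owner AMAZON \
  --query "Bundles[?contains(Name, 'Ubuntu')].[BundleId,Name]" \
  --output table

# Launch a WorkSpace from the chosen bundle.
# DirectoryId, UserName, and BundleId below are placeholders.
aws workspaces create-workspaces --workspaces '[{
  "DirectoryId": "d-xxxxxxxxxx",
  "UserName": "jdoe",
  "BundleId": "wsb-xxxxxxxxx",
  "WorkspaceProperties": {"RunningMode": "AUTO_STOP"}
}]'
```

The same choice is available in the WorkSpaces console by picking the new bundle in the launch wizard.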
Source: aws.amazon.com

Amazon SageMaker HyperPod now supports G7e and r5d.16xlarge instances

Amazon SageMaker HyperPod now supports G7e and r5d.16xlarge instances. SageMaker HyperPod is a purpose-built infrastructure for developing, training, and deploying foundation models at scale. It provides a resilient and performant environment with built-in fault tolerance, automated cluster recovery, and optimized distributed training libraries, reducing the undifferentiated heavy lifting of managing large-scale AI/ML infrastructure. 
G7e instances are powered by NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs and deliver up to 2.3x better inference performance than G6e instances, allowing you to process more requests per second while reducing latency. With up to 768 GB of total GPU memory, G7e instances let you deploy larger language models or run multiple models on a single endpoint. You can use these instances for deploying LLMs, agentic AI, multimodal generative AI, and physical AI models. G7e instances are also well suited for cost-efficient single-node fine-tuning or training of NLP, computer vision, and smaller generative AI models, with up to 1.27x the TFLOPS and up to 4x the GPU-to-GPU bandwidth compared to G6e.

In addition, HyperPod now supports r5d.16xlarge. This instance provides 64 vCPUs, 512 GB of memory, and 5 x 600 GB of NVMe SSD instance storage, powered by Intel Xeon Platinum 8000 series processors with a sustained all-core turbo frequency of up to 3.1 GHz. It is well suited for distributed training data preprocessing (especially with frameworks such as Ray), large-scale feature engineering, and running memory-heavy orchestration services alongside GPU compute.

G7e instances are available in US East (N. Virginia), US East (Ohio), US West (Oregon), and Asia Pacific (Tokyo); r5d.16xlarge is available in all Regions where Amazon SageMaker HyperPod is available.
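To illustrate how the two instance families might be combined in one cluster, here is a minimal sketch using the standard `aws sagemaker create-cluster` HyperPod API: a GPU instance group alongside an r5d.16xlarge group for preprocessing and orchestration. The ML instance type names (`ml.g7e.48xlarge`, `ml.r5d.16xlarge`), the S3 lifecycle script location, and the IAM role ARN are assumptions for illustration; check the HyperPod documentation for the exact supported type names in your Region.

```shell
# Sketch: create a HyperPod cluster with a G7e GPU group and an
# r5d.16xlarge CPU group for data preprocessing / orchestration.
# Instance type names, S3 paths, and the role ARN are placeholders.
aws sagemaker create-cluster \
  --cluster-name my-hyperpod-cluster \
  --instance-groups '[
    {
      "InstanceGroupName": "gpu-workers",
      "InstanceType": "ml.g7e.48xlarge",
      "InstanceCount": 2,
      "LifeCycleConfig": {
        "SourceS3Uri": "s3://my-bucket/lifecycle/",
        "OnCreate": "on_create.sh"
      },
      "ExecutionRole": "arn:aws:iam::111122223333:role/HyperPodRole"
    },
    {
      "InstanceGroupName": "cpu-preprocessing",
      "InstanceType": "ml.r5d.16xlarge",
      "InstanceCount": 1,
      "LifeCycleConfig": {
        "SourceS3Uri": "s3://my-bucket/lifecycle/",
        "OnCreate": "on_create.sh"
      },
      "ExecutionRole": "arn:aws:iam::111122223333:role/HyperPodRole"
    }
  ]'
```

Splitting preprocessing onto the memory-heavy CPU group keeps the GPU group free for training or inference work, which matches the division of labor described above.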
Source: aws.amazon.com