Introducing high-performance Confidential Computing with N2D and C2D VMs

We’re excited to announce Confidential Computing on the latest Google Compute Engine N2D and C2D virtual machines. At Google Cloud, we’re constantly striving to deliver performance improvements and feature enhancements. Last November, we announced the general availability of general-purpose N2D machine types running on 3rd Gen AMD EPYC™ processors. Then, in February, we announced the general availability of compute-optimized C2D machine types running on the same 3rd Gen processors. Today, we are excited to announce that both of these new N2D and C2D machine types now offer Confidential Computing.

By default, Google Cloud keeps all data encrypted in transit between customers and our data centers, and at rest. We believe the future of computing will increasingly shift to private, encrypted services where users can be confident that their data is not being exposed to cloud providers or their own insiders. Confidential Computing helps make this future possible by keeping data encrypted in memory, and elsewhere outside the CPU, while it is being processed, all without requiring any code changes to applications.

General Purpose Confidential VMs on N2D

The first product in Google Cloud’s Confidential Computing portfolio was Confidential VM. A Confidential VM is a type of Compute Engine VM that helps ensure that your data and applications stay private and encrypted even while in use.

Today, Confidential VMs are available in Preview on general-purpose N2D machine types powered by 3rd Gen AMD EPYC processors. We worked closely with the AMD Cloud Solution engineering team to help ensure that the VM’s memory encryption doesn’t interfere with workload performance. N2D VMs are a great option for both general-purpose workloads and workloads that require larger VM sizes and memory ratios. General-purpose workloads that require a balance of compute and memory, like web applications and databases, can benefit from N2D’s performance, price, and wide array of features.

Compute-Optimized Confidential VMs on C2D

We’re also optimizing Confidential Computing for more types of workloads. Today, Confidential VMs are also available in Preview on compute-optimized C2D machine types. C2D instances provide the largest VM sizes within the compute-optimized VM family and are optimized for memory-bound workloads such as high-performance databases and high-performance computing (HPC) workloads. Adding the compute-optimized machine family to our Confidential Computing portfolio lets you optimize performance-intensive workloads while maintaining confidentiality, and expands the set of workloads you can easily run confidentially.

Early Findings

YellowDog, a cloud workload management company, is an early user of the new Confidential VMs in the C2D VM family.

“At YellowDog, we believe there should be no barriers to adopting secure cloud computing. YellowDog tested workloads across tens of thousands of cores using the new Google C2D VMs running on 3rd Gen AMD EPYC processors. We were truly impressed to discover that the Confidential VMs’ provisioning times were fantastic and the C2D VMs ran with no discernible difference in performance when enabling and disabling Confidential Computing,” said Simon Ponsford, CTO at YellowDog. “We at YellowDog recommend that anyone running secure workloads in Google Cloud enable the Confidential Computing feature by default.”

Expanding Confidential Computing availability

We are expanding the availability of Confidential Computing: Confidential VMs are now available in more regions and zones than before, anywhere N2D or C2D machines are available. Confidential N2D VMs and Confidential C2D VMs are available today in regions around the globe, including us-central1 (Iowa), asia-southeast1 (Singapore), us-east1 (South Carolina), us-east4 (Northern Virginia), asia-east1 (Taiwan), and europe-west4 (Netherlands).

The underpinnings of Confidential VMs

Confidential N2D and C2D VMs with 3rd Gen AMD EPYC processors use AMD Secure Encrypted Virtualization (SEV). With SEV, Confidential VMs offer high performance for demanding computational tasks while keeping VM memory encrypted with a dedicated per-VM key that is generated and managed by the processor. These keys are generated by the processor during VM creation and reside solely within it, making them unavailable to Google or to other VMs running on the host. We currently support SEV on 3rd Gen AMD EPYC processors and will bring more advanced capabilities in the future.

Pricing

Confidential N2D and C2D VMs with 3rd Gen AMD EPYC processors are offered at the same price as the previous generation of Confidential N2D VMs. You can also take advantage of cost savings with Spot pricing. To learn more, visit Confidential VM pricing.

Ongoing Confidential Computing Investment

Today’s announcement comes on the heels of the joint review of the technology and firmware that powers AMD Confidential Computing, conducted by the Google Cloud Security team, Google Project Zero, and the AMD firmware and product security teams. Google Cloud and AMD are committed to securing sensitive workloads and shaping future Confidential Computing innovations.

Getting Started

Upgrading your existing Confidential N2D VMs to use 3rd Gen AMD EPYC processors is easy. Whether you already use Confidential N2D machines or are just getting started, you can use the latest hardware by simply selecting “AMD Milan or later” as the CPU platform. To create a Confidential C2D VM, choose the C2D option when creating a new VM and check the box under “Confidential VM service” in the Google Cloud Console.

With Confidential Computing, you can protect your data and run your most sensitive applications and services on N2D and C2D VMs.
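For teams that provision VMs programmatically rather than through the Console, those two settings (the “AMD Milan” CPU platform and the Confidential VM service checkbox) map onto fields of the Compute Engine API. The sketch below uses the google-cloud-compute Python client; it is an illustration rather than part of the announcement, and the function name, machine type, image family, and default network are placeholder choices. The boot image must be one that supports SEV.

    from google.cloud import compute_v1

    def create_confidential_n2d(project_id: str, zone: str, name: str) -> None:
        """Create an N2D VM with Confidential Computing (AMD SEV) enabled."""
        instance = compute_v1.Instance(
            name=name,
            machine_type=f"zones/{zone}/machineTypes/n2d-standard-4",
            min_cpu_platform="AMD Milan",  # selects 3rd Gen AMD EPYC hosts
            confidential_instance_config=compute_v1.ConfidentialInstanceConfig(
                enable_confidential_compute=True  # the "Confidential VM service" checkbox
            ),
            # Confidential VMs cannot live-migrate; host maintenance must terminate them.
            scheduling=compute_v1.Scheduling(on_host_maintenance="TERMINATE"),
            disks=[
                compute_v1.AttachedDisk(
                    boot=True,
                    auto_delete=True,
                    initialize_params=compute_v1.AttachedDiskInitializeParams(
                        # Placeholder image family; must be an SEV-capable image.
                        source_image="projects/ubuntu-os-cloud/global/images/family/ubuntu-2004-lts"
                    ),
                )
            ],
            network_interfaces=[
                compute_v1.NetworkInterface(network="global/networks/default")
            ],
        )
        operation = compute_v1.InstancesClient().insert(
            project=project_id, zone=zone, instance_resource=instance
        )
        operation.result()  # block until the create operation finishes

The same sketch would cover a Confidential C2D VM by swapping in a C2D machine type such as c2d-standard-8.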
Source: Google Cloud Platform

How Google Cloud monitors its Quality Management System

As a provider of software and services for global enterprises, Google Cloud understands that the quality and security of its products are instrumental in maintaining customer trust. We are committed to providing products and services that help our customers meet their quality management objectives, and ultimately their regulatory and customer requirements. At the heart of this commitment is our robust quality management system (QMS), a process-based approach that aims to achieve high standards of quality at every stage of the product or service lifecycle and that leverages our ISO 9001:2015 certification.

In our new Quality Management System paper, we share the quality management principles and practices we follow to establish a defined and consistent process for continually monitoring, managing, and improving the quality of our products and services. As with ISO 9001, Google Cloud’s QMS is predicated on seven quality management principles:

Customer focus: Through feedback collected from our customers, we have noted that they value security, speed, reliability, and productivity. At Google, we believe this is achieved by following defined practices for effective software development processes and customer communications. Google therefore focuses on the systems development lifecycle (SDLC) and Cloud Platform Support (CPS) as key components of our QMS.

Leadership: Google’s quality policy is the foundation of its quality management program and is managed by Google’s Vice President of Security Engineering. The policy commits to controlling and maintaining the quality of Google Cloud products and related software development processes, limiting Google’s exposure to the risks arising from product quality issues, promoting continual improvement, and maintaining compliance with customer, legal, and regulatory requirements.

Engaging with people: We believe that an effective and efficient QMS must involve people with diverse perspectives and different backgrounds, including our customers and our employees, and must respect and support them as individuals through recognition, empowerment, and learning opportunities. Google involves them from the first stage of QMS context setting by gathering their requirements and feedback.

Process approach: Google Cloud’s QMS uses the Plan-Do-Check-Act (PDCA) approach to process planning. We have defined four key process groups to achieve our quality management objectives: leadership and planning processes, operational processes for software design and development, evaluation and monitoring processes, and improvement processes. By managing the inputs, activities, controls, outputs, and interfaces of these processes, we can establish and maintain system effectiveness.

Improvement: Our proactive approach to quality management can help improve quality and expand business opportunities, enabling entire organizations to optimize operations and enhance performance.

Evidence-based decision making: To help align our QMS with our business strategy, we collate and analyze pertinent information from internal and external sources to determine its potential impact on our context and subsequent strategy.

Relationship management: Google directly conducts the data processing activities behind providing our services. However, we may engage some third-party suppliers to provide services related to customer and technical support. In such cases, our vendor onboarding processes (which include consideration of the vendor’s requirements of Google) can facilitate streamlined supply chain integration.

In a highly competitive, rapidly changing, and increasingly regulated environment, where quality is an integral part of the top-management agenda, Google holds its products and services to the highest standards of quality, helping customers transform their businesses through quality and become the quality leaders of tomorrow. You can learn more about Google Cloud’s quality management system by downloading the whitepaper.
Source: Google Cloud Platform

Meta selects Azure as strategic cloud provider to advance AI innovation and deepen PyTorch collaboration

Microsoft is committed to the responsible advancement of AI to enable every person and organization to achieve more. Over the last few months, we have talked about advancements in our Azure infrastructure, Azure Cognitive Services, and Azure Machine Learning to make Azure better at supporting the AI needs of all our customers, regardless of their scale. Meanwhile, we also work closely with some of the leading research organizations around the world to empower them to build great AI.

Today, we’re thrilled to announce an expansion of our ongoing collaboration with Meta: Meta has selected Azure as a strategic cloud provider to help accelerate AI research and development. 

As part of this deeper relationship, Meta will expand its use of Azure’s supercomputing power to accelerate AI research and development for its Meta AI group. Meta will utilize a dedicated Azure cluster of 5,400 GPUs using the latest virtual machine (VM) series in Azure (NDm A100 v4 series, featuring NVIDIA A100 Tensor Core 80GB GPUs) for some of its large-scale AI research workloads. In 2021, Meta began using Microsoft Azure Virtual Machines (NVIDIA A100 80GB GPUs) for some of its large-scale AI research after experiencing Azure’s impressive performance and scale. With four times the GPU-to-GPU bandwidth between virtual machines compared to other public cloud offerings, the Azure platform enables faster distributed AI training; Meta used it, for example, to train its recent OPT-175B language model. The NDm A100 v4 VM series on Azure also gives customers the flexibility to configure clusters of any size automatically and dynamically, from a few GPUs to thousands, and the ability to pause and resume during experimentation. Now, the Meta AI team is expanding its usage and bringing more cutting-edge machine learning training workloads to Azure to further advance its leading AI research.

In addition, Meta and Microsoft will collaborate to scale PyTorch adoption on Azure and accelerate developers’ journey from experimentation to production. Azure provides a comprehensive, top-to-bottom stack for PyTorch users, with best-in-class hardware (NDv4-series VMs and InfiniBand). In the coming months, Microsoft will build new PyTorch development accelerators to facilitate rapid implementation of PyTorch-based solutions on Azure. Microsoft will also continue providing enterprise-grade support for PyTorch to enable customers and partners to deploy PyTorch models in production on both cloud and edge.
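To make the distributed-training workflow concrete, here is a minimal PyTorch distributed data-parallel sketch of the kind such GPU clusters run. This is an illustration, not Meta’s actual training code; the model and data are toy placeholders. Each process drives one GPU, and NCCL handles the gradient all-reduce over the node interconnect.

    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for every process it launches.
        dist.init_process_group(backend="nccl")  # NCCL rides the GPU interconnect (NVLink/InfiniBand)
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        # Toy stand-in for a real model.
        model = torch.nn.Sequential(
            torch.nn.Linear(1024, 4096),
            torch.nn.ReLU(),
            torch.nn.Linear(4096, 1024),
        ).cuda(local_rank)
        ddp_model = DDP(model, device_ids=[local_rank])  # replicates the model, syncs gradients

        optimizer = torch.optim.AdamW(ddp_model.parameters(), lr=1e-4)
        for _ in range(100):
            batch = torch.randn(32, 1024, device=local_rank)  # stand-in for a sharded data loader
            loss = ddp_model(batch).pow(2).mean()
            optimizer.zero_grad()
            loss.backward()  # DDP overlaps the cross-GPU all-reduce with backprop
            optimizer.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

Launched with, for example, torchrun --nnodes=64 --nproc_per_node=8 train.py, the same script scales from a single VM to thousands of GPUs; only the launcher arguments change.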

“We are excited to deepen our collaboration with Azure to advance Meta’s AI research, innovation, and open-source efforts in a way that benefits more developers around the world,” said Jerome Pesenti, Vice President of AI at Meta. “With Azure’s compute power and 1.6 TB/s of interconnect bandwidth per VM, we are able to accelerate our ever-growing training demands to better accommodate larger and more innovative AI models. Additionally, we’re happy to work with Microsoft in extending our experience to their customers using PyTorch in their journey from research to production.”

By scaling Azure’s supercomputing power to train large AI models for the world’s leading research organizations, and by expanding tools and resources for open source collaboration and experimentation, we can help unlock new opportunities for developers and the broader tech community, and further our mission to empower every person and organization around the world.
Source: Azure

Offer-acceptance email notifications are now available in AWS Marketplace

Today, AWS Marketplace announced the general availability of offer-acceptance email notifications, which notify users by email when a customer has accepted an offer. With this launch, customers now have real-time visibility into buyers’ offer acceptance and subscription, allowing them to track the overall progress of an AWS Marketplace transaction. Buyers, ISVs, and channel partners can now receive relevant details such as the agreement and offer ID, along with customer information, at the time of subscription. This lets them initiate procurement workflows, internal order creation, revenue recognition, and software provisioning. This feature is available for all AWS Marketplace product types.
Source: aws.amazon.com

Now available: Amazon EC2 C7g instances powered by AWS Graviton3 processors

The latest-generation compute-optimized Amazon EC2 C7g instances are now generally available. C7g instances are the first instances powered by the latest AWS Graviton3 processors, delivering up to 25% better performance than Graviton2-based C6g instances across a broad range of applications such as application servers, microservices, batch processing, electronic design automation (EDA), gaming, video encoding, scientific modeling, distributed analytics, high performance computing (HPC), CPU-based machine learning (ML) inference, and ad serving.
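As a usage illustration (not part of the announcement), launching a C7g instance with boto3 looks like any other instance launch; the one thing to keep in mind is that Graviton3 is an Arm-based processor, so the AMI must be built for arm64. The AMI ID below is a placeholder.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder: must be an arm64 AMI (Graviton is Arm-based)
        InstanceType="c7g.xlarge",        # compute-optimized Graviton3 instance type
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])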
Source: aws.amazon.com