Inside the Feature: Why Real-Time Backups Are a Big Deal

Let’s be honest, new and shiny features get most of the attention around here. It makes sense: New stuff is exciting! But WordPress.com has plenty of baked-in features that are worth talking about too. 

Losing your work is one of the most frustrating things you can experience as a website owner. When you choose WordPress.com, you never have to worry about that again. Today, let’s chat about backups, which are powered by Jetpack (Automattic’s own suite of security, performance, and growth tools). 

Real-time backups and one-click restores

With Jetpack VaultPress Backup, every single change to your site is captured in real time. We also back up your site at a consistent time each day as a failsafe.

Because backups happen in real time, restoring your site to a previous state is as easy as finding a cute dog on the internet. 

Let’s look closer at how this feature can benefit you and your site(s).  

No expertise required 

Manually backing up a website is a time-consuming and resource-intensive task, not to mention a bit daunting on a technical level. 

We’ve removed all that hassle by doing the work for you behind the scenes. 

Even better, we house redundant copies of your backups on multiple servers around the world, so your data is always secure and accessible. 

Version control, but for your website

With the Activity Log, you can quickly see every site change at a glance, letting you know exactly what action (and which user!) broke the site. 

Our one-click restores allow you to quickly recover a site from any point in time: Simply find when the problem occurred, click “Restore,” verify that you want to revert your site to a previous state, and in as little as a few minutes, you’ll be back up and running.  

Never miss a sales order 

If you’re running an online store, you know that orders can come in at any time. It goes without saying that you need a backup system to keep your order and customer data safe. There are times when daily or even hourly backups simply don’t cut it.

If you’re running WooCommerce on your site, you can reinstate your store to any previous iteration, while keeping all orders and products current. 

Losing your work is a thing of the past 

Our automated backups save everything for you: posts, files, databases, themes, plugins . . . all of it. Should your site crash for any reason — an incompatible plugin or theme, for instance — rest assured that it can be easily restored in just minutes. 

Whether you’re running a business or spending hours perfecting your site as a hobbyist, our state-of-the-art technology provides the peace of mind that you’ll never miss a sale or lose content again.  

Learn more about our real-time backups

Real-time backups and one-click restores are available on Business and Commerce sites.
Source: RedHat Stack

At Google I/O, generative AI gets to work

Over the past decade, artificial intelligence has evolved from experimental prototypes and early successes to mainstream enterprise use. And the recent advancements in generative AI have begun to change the way we create, connect, and collaborate. As Google CEO Sundar Pichai said in his keynote, every business and organization is thinking about how to drive transformation. That’s why we’re focused on making it easy and scalable for others to innovate with AI.

In March, we announced exciting new products that infuse generative AI into our Google Cloud offerings, empowering developers to responsibly build with enterprise-level safety, security, and privacy. They include Gen App Builder, which lets developers quickly and easily create generative chat and enterprise search applications, and Generative AI support in Vertex AI, which expands our machine learning development platform with access to foundation models from Google and others to quickly build, customize, and deploy models. We also introduced our vision for Google Workspace and delivered generative AI features to trusted testers in Gmail and Google Docs that help people write.

Last month, we introduced Security AI Workbench, an industry-first extensible platform powered by our new LLM security model Sec-PaLM, which incorporates Google’s unique visibility into the evolving threat landscape and is fine-tuned for cybersecurity operations.

Today at Google I/O, we are excited to share the next steps not only in our own AI journey, but also in those of our customers and partners. We’ve already seen a number of organizations begin to develop with and deploy our generative AI offerings. These organizations have been able to move their ideas from experimentation to enterprise-ready applications with the training models, security, compute infrastructure, and cost controls needed to provide their customers with transformative experiences. Our open ecosystem, which provides opportunities for every kind of partner, continues to grow as well. And we are also pleased to share new services and capabilities across Google Cloud and Workspace, including Duet AI—our AI-powered collaborator—to enable more users and developers to start seeing the impact AI can have on their organization.

Customers bringing ideas to life with generative AI

Leading companies in a variety of industries, including eDreams ODIGEO, GitLab, Oxbotica, and more, are using our generative AI technologies to create engaging content, synthesize and organize information, automate business processes, and build amazing customer experiences. A few examples we showcased today include:

Adore Me, a New York-based intimate apparel brand, is creating production-worthy copy with generative AI features in Docs and Gmail. This is accelerating projects and processes in ways that even surprised the company.

Canva, the visual communication platform, uses Google Cloud’s rich generative AI capabilities in language translation to better support its non-English-speaking users. Users can now easily translate presentations, posters, social media posts, and more into over a hundred languages. The company is also testing ways that Google’s PaLM technology can turn short video clips into longer, more compelling stories. The result will be a more seamless design experience while growing the Canva brand.

Character.AI, a leading conversational AI platform, selected Google Cloud as its preferred cloud infrastructure provider because we offer the speed, security, and flexibility required to meet the needs of its rapidly growing community of creators. We are enabling Character.AI to train and infer LLMs faster and more efficiently, and enhancing the customer experience by inspiring imagination, discovery, and understanding.

Deutsche Bank is testing Google’s generative AI and large language models (LLMs) at scale to provide new insights to financial analysts, driving operational efficiencies and execution velocity. There is an opportunity to significantly reduce the time it takes to perform banking operations and financial analysts’ tasks, empowering employees by increasing their productivity while helping to safeguard customer data privacy, data integrity, and system security.

Instacart is always looking for opportunities to adopt the latest technological innovations, and by joining the Workspace Labs program, its teams have access to the new features and can discover how generative AI will make an impact for them.

Orange is exploring a next-generation contact center with Google Cloud. With customers in 26 countries, the global telecommunications firm is testing generative AI to transcribe calls, summarize the exchange between customers and service representatives, and suggest possible follow-up actions to the agent based on the discussion. This experiment has the potential to dramatically improve both the efficiency and quality of customer interactions. Orange is working closely with Google to help ensure data protection and to make sure that systematic employee review of generative AI output and transparency can be implemented.

Replit is developing a collaborative software development platform powered by AI. Developers using Replit’s Ghostwriter coding AI already have 30 percent of their code written by generative AI today. With real-time debugging of the code output and context awareness of the program’s files, Ghostwriter frees up developers’ time for more challenging and creative aspects of programming.

Uber is creating generative AI for customer-service chatbots and agent-assist capabilities, which handle a range of common service issues with human-like interactions, with the aim of achieving greater customer satisfaction and cost efficiency. Additionally, Uber is working on using our synthetic data systems (a technique for improving the quality of LLMs) in areas like product development, fraud detection, and employee productivity.

Wendy’s is working with Google Cloud on a groundbreaking AI solution, Wendy’s FreshAI, designed to revolutionize the quick-service restaurant industry. The technology is transforming Wendy’s drive-thru food ordering experience with Google Cloud’s generative AI and LLMs—with the ability to discern the billions of possible order combinations on the Wendy’s menu. In June, Wendy’s plans to launch its first pilot of the technology in a Columbus, Ohio-area restaurant, before expanding to more drive-thru locations.

Figure – Leading companies build with generative AI on Google Cloud.

Partnering creates a strong ecosystem of real-world options for customers

At Google Cloud, we are dedicated to being the most open hyperscale cloud provider, and that includes our AI ecosystem. Today, we are excited to expand upon the partnerships announced earlier this year for every layer of the AI stack—chipmakers, companies building foundation models and AI platforms, technology partners enabling companies to develop and deploy machine learning (ML) models, app-builders solving customer use cases with generative AI, and global services and consulting firms that help enterprise customers implement all of this technology at scale. We announced new or expanded partnerships with SaaS companies like Box, Dialpad, Jasper, Salesforce, and UKG, and consultancies including Accenture, BCG, Cognizant, Deloitte, and KPMG. Together with our previous announcements with companies like AI21 Labs, Aible, Anthropic, Anyscale, Bending Spoons, Cohere, Faraday, Glean, Gretel, Labelbox, Midjourney, Osmo, Replit, Snorkel AI, Tabnine, Weights & Biases, and many more, they provide a wide range of options for businesses and governments looking to bring generative AI into their organizations.

Introducing new generative AI capabilities for Google Cloud

To help cloud users of all skill levels solve their everyday work challenges, we’re excited to announce Duet AI for Google Cloud, a new generative AI-powered collaborator. Duet AI serves as your expert pair programmer and assists cloud users with contextual code completion, offering suggestions tuned to your code base, generating entire functions in real time, and assisting you with code reviews and inspections. It can fundamentally transform the way cloud users of all skill sets build new experiences, and it is embedded across Google Cloud interfaces—within the integrated development environment (IDE), the Google Cloud Console, and even chat.

For developers looking to create generative AI applications more simply and efficiently, we are also introducing new foundation models and capabilities across our Google Cloud AI products. And to continue to enable and inspire more customers and partners, we are opening up generative AI support in Vertex AI and expanding access to many of these new innovations to more organizations.

New foundation models are now available in Vertex AI. Codey, our code generation foundation model, helps accelerate software development with code generation, code completion, and code chat. Imagen, our text-to-image foundation model, lets customers generate and customize studio-grade images. And Chirp, our state-of-the-art speech model, allows customers to more deeply engage with their customers and constituents inclusively in their native languages with captioning and voice assistance. They can each be accessed via APIs, tuned through our intuitive Generative AI Studio, and feature enterprise-grade security and reliability, including encryption, access control, content moderation, and recitation capabilities that let organizations see the sources behind model outputs.

The Text Embeddings API is a new API endpoint that lets developers build recommendation engines, classifiers, question-answering systems, similarity matching, and other sophisticated applications based on semantic understanding of text or images. Reinforcement Learning from Human Feedback (RLHF) allows organizations to incorporate human feedback to deeply customize and improve model performance.
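
As an illustration of the kind of similarity matching the Text Embeddings API enables, here is a minimal sketch, assuming the Vertex AI Python SDK (google-cloud-aiplatform) and its preview language-models module; the project ID is a placeholder, and module paths or model names may differ in your environment:

```python
# Hedged sketch: semantic similarity with Vertex AI text embeddings.
# Assumes `pip install google-cloud-aiplatform` and an authenticated
# project; the model name below is an assumption, not a guarantee.
import math

import vertexai
from vertexai.preview.language_models import TextEmbeddingModel

vertexai.init(project="your-project-id", location="us-central1")
model = TextEmbeddingModel.from_pretrained("textembedding-gecko@001")

docs = [
    "How do I reset my password?",
    "Steps to recover account access",
    "Today's lunch specials",
]
vectors = [e.values for e in model.get_embeddings(docs)]

def cosine(a, b):
    # Cosine similarity: closer to 1.0 means more semantically alike.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# The two account-recovery sentences should score much closer to each
# other than either does to the unrelated lunch sentence.
print(cosine(vectors[0], vectors[1]), cosine(vectors[0], vectors[2]))
```
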
Underpinning all of these innovations is our AI-optimized infrastructure. We provide the widest choice of compute options among leading cloud providers and are excited to continue to build them out with the introduction of new A3 virtual machines based on NVIDIA’s H100 GPU. These VMs, alongside the recently announced G2 VMs, offer a comprehensive range of GPU power for training and serving AI models.

Extending generative AI across Google Workspace

Earlier this year, we shared our vision for bringing generative AI to Workspace and gave many users early access to features that helped them write in Gmail and Google Docs. Today, we are excited to announce Duet AI for Google Workspace, which brings together our powerful generative AI features and lets users collaborate with AI so they can get more done every day. We’re delivering the following features to trusted testers via Workspace Labs:

In Gmail, we’re adding the ability to draft responses that consider the context of your existing email thread—and making the experience available on mobile.

In Google Slides and Meet, we’re enabling you to easily generate images from text descriptions. Custom images in slides can help bring your story to life, and in Meet they can be used to create custom backgrounds.

In Google Sheets, we’re automating data classification and the creation of custom plans—helping you analyze and organize data faster than ever.

Moving the industry forward, responsibly

Customers continue to amaze us with their ideas and creativity, and we look forward to continuing to help them discover their own paths forward with generative AI. While the potential for impact on business is great, we remain committed to taking a responsible approach, guided by our AI Principles. As we gather more feedback from our customers and users, we will continue to bring new innovations to market, with the goal of enabling organizations of every size and industry to increase efficiency, connect with customers in new ways, and unlock entirely new revenue streams.
Source: Google Cloud Platform

Preparing for future health emergencies with Azure HPC

A once-in-a-century global health emergency accelerates worldwide healthcare innovation and novel medical breakthroughs, all supported by powerful high-performance computing (HPC) capabilities.

COVID-19 has forever changed how nations function in the globally interconnected economy. To this day, it continues to affect and shape how countries respond to health emergencies. COVID-19 has demonstrated just how interconnected our society is and how risks, threats, and contagions can have global implications for many aspects of our daily lives.

COVID-19 was the largest global health emergency in over a century, with nearly 762 million cases reported as of the end of March 2023, according to the World Health Organization. The National Center for Biotechnology Information points out the frequency and breadth of new variants that continue to emerge at regular intervals. In response to this intricate health crisis, the global healthcare community quickly mobilized to better understand the virus, learn its behavior, and work toward preventative treatment measures to minimize the damage to lives across the world. Globally, nations mobilized resources for frontline workers, offered social protection to those most severely affected, and provided vaccine access for the billions who need it.

Recent technological innovations have provided the medical community with access to capabilities, such as HPC, that equip healthcare professionals to better study, understand, and respond to COVID-19. Globally, healthcare innovators could access unprecedented computing power to design, test, and develop new treatments faster, better, and more iteratively than ever before.

Today, Azure HPC enables researchers to unleash the next generation of healthcare breakthroughs. For example, the computational capabilities offered by the Azure HPC HB-series virtual machines, powered by AMD EPYC™ CPU cores, allowed researchers to accelerate insights and advances in genomics, precision medicine, and clinical trials, with near-infinite high-performance bioinformatics infrastructure capabilities.

Since the beginning of COVID-19, companies have been leveraging Azure HPC to develop new treatments, run simulations, and test at scale—all in preparation for the next health emergency. Azure HPC is helping companies unleash new treatments and capabilities that are ushering in the next generation of healthcare across the entire industry.

High-performance computing making a difference

A leading immunotherapy company partnered with Microsoft to leverage Azure HPC to perform detailed computational analyses of the spike protein structure of SARS-CoV-2. Because the spike protein plays a critical role in allowing the invasion of human cells, targeting it for study, analysis, and insight is a crucial step in the development of treatments to combat the virus.

The company’s engineers and scientists collaborated with Microsoft and quickly deployed HPC clusters on Azure containing over 1,250 graphics processing unit (GPU) cores. These GPUs are specifically designed for machine learning and similarly intense computational applications. The Azure HPC clusters augmented the company’s existing GPU clusters—which were already optimized for molecular modelling of proteins, antibodies, and antivirals—bringing a truly high-powered, scaled engagement to fruition.

By collaborating with Microsoft in this way and making use of the massive, networked computing capabilities and advanced algorithms enabled by Azure HPC, the company was able to generate working models in days rather than the months it would have taken by following traditional approaches.

This incredible amount of computing power will help bolster drug discovery and therapeutic development. By bringing together the power of Azure HPC and cutting-edge immunotherapies, the collaboration contributed to models that allowed researchers to better understand the virus, find novel binding sites to fight it, and ultimately guide the development of future treatments and vaccines.

Powering pharmaceutical research and innovation

The healthcare industry is making remarkable strides in the development of cutting-edge treatments and innovations that are geared towards solving some of the world’s greatest healthcare challenges.

For example, researchers are leveraging HPC to transform their research and development efforts and accelerate the development of new life-saving treatments.

Using a technique that produces amorphous solid dispersions (ASD), drug researchers break up active pharmaceutical ingredients and blend them with organic polymers to improve the dissolution rate, bioavailability, and solubility of drug delivery systems. Although a wonder of modern medicine, it is a highly complicated, often lab-based process that can take months.

Swiss-based Molecular Modelling Laboratory (MML), a leader in ASD screening, wanted to pivot its drug research and development to small organic and biomolecular polymers. This approach determines ASD stability prior to formulation, reveals new ASD combinations, enhances drug safety, and helps reduce drug development costs as well as delivery times.

MML chose to leverage Azure HPC resources on more than 18,000 Azure HBv2 virtual machines to optimize high-throughput drug screening and active pharmaceutical ingredient solubility limit detection, with the aim of alleviating common development hurdles.

The adoption of Azure HPC has helped MML shift from a small start-up to an established business working with some of the top pharmaceutical companies in the world—all in a very short time.

For the global healthcare community, the computational power and scalability of Azure HPC presents an unprecedented opportunity to accelerate pharmaceutical, medical, and health innovation. Azure HPC will continue playing a leading role in supporting the healthcare industry to respond optimally to any future global health emergency that may arise.

Next steps

To request a demo, contact HPCdemo@microsoft.com.

Learn more about Azure HPC.

High-performance computing documentation.

View our HPC cloud journey infographic.

Source: Azure

Insights from the 2023 Open Confidential Computing Conference

I had the opportunity to participate in this year’s Open Confidential Computing Conference (OC3), hosted by our software partner, Edgeless Systems. This year’s event was particularly noteworthy due to a panel discussion on the impact and future of confidential computing. The panel featured some of the industry’s most respected technology leaders, including Greg Lavender, Chief Technology Officer at Intel; Ian Buck, Vice President of Hyperscale and HPC at NVIDIA; and Mark Papermaster, Chief Technology Officer at AMD. Felix Schuster, Chief Executive Officer at Edgeless Systems, moderated the panel discussion, which explored topics such as the definition of confidential computing, customer adoption patterns, current challenges, and future developments. The insightful discussion left a lasting impression on me and my colleagues.

What is confidential computing?

When it comes to understanding what exactly confidential computing entails, it all begins with a trusted execution environment (TEE) that is rooted in hardware. This TEE protects any code and data placed inside it, while in use in memory, from threats outside the enclave. These threats include everything from vulnerabilities in the hypervisor and host operating system to other cloud tenants and even cloud operators. In addition to providing protection for the code and data in memory, the TEE also possesses two crucial properties. The first is the ability to measure the code contained within the enclave. The second property is attestation, which allows the enclave to provide a verified signature that confirms the trustworthiness of what is held within it. This feature allows software outside of the enclave to establish trust with the code inside, allowing for the safe exchange of data and keys while protecting the data from the hosting environment. This includes hosting operating systems, hypervisors, management software and services, and even the operators of the environment.
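
To make those two properties concrete, here is a purely conceptual Python sketch (it mirrors no vendor SDK); the HMAC key stands in for the CPU's fused attestation key, and real verification instead walks an X.509 chain to the hardware vendor's root of trust:

```python
# Conceptual sketch of TEE measurement and attestation; not a real SDK.
# The HMAC key below simulates the CPU's fused attestation key; real
# verification checks a signature chain to the hardware vendor's root.
import hashlib
import hmac

HARDWARE_KEY = b"simulated-cpu-fused-key"  # known only to the hardware
TRUSTED_MEASUREMENTS = {hashlib.sha256(b"enclave-v1.0").hexdigest()}

def enclave_report(code: bytes) -> tuple[str, str]:
    # Property 1: measurement -- a hash of the code loaded in the enclave.
    measurement = hashlib.sha256(code).hexdigest()
    # Property 2: attestation -- the hardware signs the measurement.
    signature = hmac.new(HARDWARE_KEY, measurement.encode(),
                         hashlib.sha256).hexdigest()
    return measurement, signature

def relying_party_trusts(measurement: str, signature: str) -> bool:
    # Verify the signature, then compare the measurement to a known-good
    # value; only then is it safe to release keys or data to the enclave.
    expected = hmac.new(HARDWARE_KEY, measurement.encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and measurement in TRUSTED_MEASUREMENTS)

m, s = enclave_report(b"enclave-v1.0")
assert relying_party_trusts(m, s)
```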

As for what confidential computing is not: it is not other privacy-enhancing technologies (PETs) like homomorphic encryption or secure multiparty computation. It is hardware-rooted trusted execution environments with attestation.

In Azure, confidential computing is integrated into our overall defense in depth strategy, which includes trusted launch, customer managed keys, Managed HSM, Microsoft Azure Attestation, and confidential virtual machine guest attestation integration with Microsoft Defender for Cloud.

Customer adoption patterns

With regards to customer adoption scenarios for confidential computing, we see customers across regulated industries such as the public sector, healthcare, and financial services, ranging from private-to-public cloud migrations to cloud native workloads. One scenario that I’m really excited about is multi-party computation and analytics, where multiple parties bring their data together, in what is now being called a data clean room, to perform computation on that data and get back insights that are much richer than what they would have gotten from their own data set alone. Confidential computing addresses the regulatory and privacy concerns around sharing this sensitive data with third parties. One of my favorite examples is in the advertising industry, where the Royal Bank of Canada (RBC) has set up a clean room solution that takes merchant purchasing data and combines it with RBC’s information about consumers’ credit card transactions to get a full picture of what a consumer is doing. Using these insights, RBC’s credit card merchants can then offer their consumers precise offers that are tailored to them, all without RBC seeing or revealing any confidential information from the consumers or the merchants. I believe that this architecture is the future of advertising.

Another exciting multi-party use case is BeeKeeperAI’s application of confidential computing and machine learning to accelerate the development of effective drug therapies. Until recently, drug researchers have been hampered by the inaccessibility of patient data due to strict regulations governing the sharing of personal health information (PHI). Confidential computing removes this bottleneck by ensuring that PHI is protected not just at rest and when transmitted, but also while in use, thus eliminating the need for data providers to anonymize this data before sharing it with researchers. And it is not just the data that confidential computing is protecting, but also the AI models themselves. These models can be expensive to train and therefore are valuable pieces of intellectual property that need to be protected.

To allow these valuable AI models to remain confidential yet scale, Azure is collaborating with NVIDIA to deploy confidential graphics processing units (GPUs) on Azure, based on the NVIDIA H100 Tensor Core GPU.

Current challenges

Regarding the challenges facing confidential computing, they tended to fall into four broad categories:

Availability, across regions and services. Newer technologies are in limited supply or still in development, yet Azure has remained a leader in bringing to market services based on Intel® Software Guard Extensions (Intel® SGX) and AMD Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP). We are the first major cloud provider to offer confidential virtual machines based on Intel® Trust Domain Extensions (Intel® TDX), and we look forward to being one of the first cloud providers to offer confidential NVIDIA H100 Tensor Core GPUs. We see availability rapidly improving over the next 12 to 24 months.

Ease of adoption for developers and end users. The first generation of confidential computing services, based on Intel SGX technology, required rewriting of code and working with various open source tools to make applications confidential computing enabled. Microsoft and our partners have collaborated on these open source tools, and we have an active community of partners running their Intel SGX solutions on Azure. The newer generation of confidential virtual machines on Azure, using AMD SEV-SNP (a hardware security feature enabled by AMD Infinity Guard) and Intel TDX, lets users run off-the-shelf operating systems, lift and shift their sensitive workloads, and run them confidentially. We are also using this technology to offer confidential containers in Azure, which allows users to run their existing container images confidentially.

Performance and interoperability. We need to ensure that confidential computing does not mean slower computing. The issue becomes more important with accelerators like GPUs, where the data must be protected as it moves between the central processing unit (CPU) and the accelerator. Advances in this area will come from continued collaboration with standards committees such as the PCI-SIG, which has issued the TEE Device Interface Security Protocol (TDISP) for secure PCIe bus communication, and the CXL Consortium, which has issued the Compute Express Link™ (CXL™) specification for the secure sharing of memory among processors. They will also come from open source projects like Caliptra, which has created the specification, silicon logic, read-only memory (ROM), and firmware for implementing a Root of Trust for Measurement (RTM) block inside a system on chip (SoC).

Industry awareness. While confidential computing adoption is growing, awareness among IT and security professionals is still low. There is a tremendous opportunity for all confidential computing vendors to collaborate and participate in events aimed at raising awareness of this technology to key decision-makers such as CISOs, CIOs, and policymakers. This is especially relevant in industries such as government and other regulated sectors where the handling of highly sensitive data is critical. By promoting the benefits of confidential computing and increasing adoption rates, we can establish it as a necessary requirement for handling sensitive data. Through these efforts, we can work together to foster greater trust in the cloud and build a more secure and reliable digital ecosystem for all.

The future of confidential computing

When the discussion turned to the future of confidential computing, I had the opportunity to reinforce Azure’s vision for the confidential cloud, where all services will run in trusted execution environments. As this vision becomes a reality, confidential computing will no longer be a specialty feature but rather the standard for all computing tasks. In this way, the concept of confidential computing will simply become synonymous with computing itself.

Finally, all panelists agreed that the biggest advances in confidential computing will be the result of industry collaboration.

Microsoft at OC3

In addition to the panel discussion, Microsoft participated in several other presentations at OC3 that you may find of interest:

Removing our Hyper-V host OS and hypervisor from the Trusted Computing Base (TCB).

Container code and configuration integrity with confidential containers on Azure.

Customer managed and controlled Trusted Computing Base (TCB) with CVMs on Azure.

Enabling faster AI model training in healthcare with Azure confidential computing.

Project Amber—Intel’s attestation service.

Finally, I would like to encourage our readers to learn about Greg Lavender’s thoughts on OC3 2023.

All product names, logos, and brands mentioned above are properties of their respective owners. 
Source: Azure

Microsoft Cost Management updates—April 2023

Whether you’re a new student, a thriving startup, or the largest enterprise, you have financial constraints, and you need to know what you’re spending, where it’s being spent, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Microsoft Cost Management comes in.

We’re always looking for ways to learn more about your challenges and how Microsoft Cost Management can help you better understand where you’re accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

FinOps Foundation announces a new specification project to demystify cloud billing data.

Centrally managed Azure Hybrid Benefit for SQL Server is generally available.

Scheduled alerts in Azure Government.

Register for Securely Migrate and Optimize with Azure.

Register for Optimize your IT costs with Azure Monitor.

Cut costs with AI-powered productivity in Microsoft Teams.

3 ways to reduce costs with Microsoft Teams Phone.

What’s new in Cost Management Labs.

New ways to save money with Microsoft Cloud.

New videos and learning opportunities.

Documentation updates.

Let’s dig into the details.

FinOps Foundation announced a new specification project to demystify cloud billing data

Microsoft partnered with FinOps Foundation and Google to launch FOCUS (FinOps Open Cost and Usage Specification), a technical project to build and maintain an open specification for cloud cost data. As one of the key contributors and principal steering committee members for this project, we’re incredibly excited about the potential value this will bring for organizations of all sizes.

Some of the benefits you’ll see include the ability to:

Better understand how you’re being charged across services and, especially, across cloud providers.

Reduce data ingestion and normalization requirements.

Streamline reporting and monitoring efforts, like cost allocation and showback.

Leverage shared guidance across the industry for how to monitor and manage costs.

FOCUS will play a major role in the evolution of the FinOps Framework and its guidance as it drives more consistency in how to analyze and communicate changes in cost, including anything from measuring key performance indicators (KPIs) to managing anomalies and commitment-based discounts to tracking resource utilization and more.
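
To make the normalization benefit concrete, here is a hypothetical sketch of the kind of unification FOCUS aims for; the output field names below are invented for illustration and are not the specification's actual columns, and the input keys assume the shape of each provider's existing billing export:

```python
# Hypothetical sketch of billing-data normalization in the spirit of
# FOCUS. Output field names are invented, not the spec's columns; the
# input keys assume each provider's existing billing-export shape.
def normalize_azure(row: dict) -> dict:
    return {
        "provider": "azure",
        "service": row["meterCategory"],
        "billed_cost": float(row["costInBillingCurrency"]),
        "currency": row["billingCurrency"],
    }

def normalize_gcp(row: dict) -> dict:
    return {
        "provider": "gcp",
        "service": row["service"]["description"],
        "billed_cost": float(row["cost"]),
        "currency": row["currency"],
    }

# With one shared record shape, a single showback or anomaly report can
# run over data from both providers instead of one pipeline per format.
```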

To learn more, read the FinOps Foundation announcement and join us at FinOps X, where we’ll announce an initial draft release. All FOCUS steering committee members will be on-site for deeper discussions about its roadmap and implementation.

Centrally managed Azure Hybrid Benefit for SQL Server is generally available

If you’re migrating from on-premises to the cloud, Azure Hybrid Benefit should be part of your cost optimization plan. Azure Hybrid Benefit is a licensing benefit that helps customers significantly reduce the costs of running their workloads in the cloud. It works by letting customers use their on-premises licenses with active Software Assurance or subscription-enabled Windows Server and SQL Server licenses on Azure. You can also leverage active Linux subscriptions, including Red Hat Enterprise Linux or SUSE Linux Enterprise Server running in Azure. Traditionally, you would internally track the licenses you’re using with Azure Hybrid Benefit and compare that with Cost Management Power BI reports, which can be tedious. With centralized management, you can assign SQL Server licenses to individual subscriptions or share them across an entire billing account to let the cloud manage the licenses for you, maximizing your benefit and sustaining compliance with less effort.

Centralized management of Azure Hybrid Benefit for SQL Server is now generally available.

To learn more, see Azure Hybrid Benefit documentation.

Scheduled alerts in Azure Government

Last month, you saw the addition of scheduled alerts for built-in views in Cost analysis. This month, we’re happy to announce that scheduled alerts are now available for Azure Government. Scheduled alerts allow you to get notified on a daily, weekly, or monthly basis about changes in cost by sending a picture of a chart view in Cost analysis to a list of recipients. You can even send it to stakeholders who don’t have direct access to costs in the Azure portal. To learn more, see subscribe to scheduled alerts.

Register for Securely Migrate and Optimize with Azure

Did you know you can lower operating costs by up to 40 percent when you migrate Windows Server and SQL Server to Azure versus on-premises?1 Furthermore, you can improve IT efficiency and operating costs by up to 53 percent by automating management of your virtual machines in cloud and hybrid environments. To maximize the value of your existing cloud investments, you can utilize tools like Microsoft Cost Management and Azure Advisor. A recent study showed that our customers achieve up to 34 percent reduction in Azure spend in the first year by using Microsoft Cost Management. To learn more about how to achieve efficiency and maximize cloud value with Azure, join us and register for Securely Migrate and Optimize with Azure, a free digital event on Wednesday, April 26, 2023, 9:00 AM to 11:00 AM Pacific Time.

To learn more, see 5 reasons to join us at Securely Migrate and Optimize with Azure.

Register for Optimize your IT costs with Azure Monitor

Join the Azure Monitor engineering team on May 17, 2023 from 10:00 AM to 11:00 AM Pacific Time, as they continue to listen and respond to feedback to ensure your corporate priorities are kept at the forefront!

The Azure Monitor team introduced some new pricing plans that can drive costs down without compromising performance. The team has distilled some of the key points, along with valuable guidance and best practices, and will share them during this webinar.

In this webinar, you will learn:

New Azure Monitor pricing plans and different scenarios in which the new price plan can be applied.

Other levers that you can take advantage of to optimize your monitoring costs.

No-regret moves you can implement today to start realizing cost savings.

Register for Optimize your IT costs with Azure Monitor and join us on May 17, 2023 from 10:00 AM to 11:00 AM Pacific Time.

Cut costs with AI-powered productivity in Microsoft Teams

As we face economic uncertainties and changes to work patterns, organizations are searching for ways to optimize IT investments and re-energize employees to achieve business results. Now—more than ever—organizations need solutions to adapt to change, improve productivity, and reduce costs. Fortunately, modern tools powered by AI hold the promise of boosting individual, team, and organizational productivity and fundamentally changing how we work, including intelligent meeting recap in Microsoft Teams Premium with AI-augmented video recordings, AI-generated notes, AI-generated tasks and action items, reusable meeting templates, and more.

To learn more, see Microsoft Teams Premium: Cut costs and add AI-powered productivity.

3 ways to reduce costs with Microsoft Teams Phone

As the way we work evolves, today’s organizations need cost-effective, reliable telephony solutions that help them support flexible work and truly bridge the gap between the physical and digital worlds. Our customers are searching for products that help them promote an inclusive working environment and streamline communications. And they need solutions that simplify their technological footprint and cut the cost of legacy IT solutions and other non-essential expenses.

After examining the potential ROI that companies may realize by implementing Teams Phone, a recent study found that businesses could:

Reduce licensing and usage costs.

Minimize the burden on IT.

Help people save time and collaborate more effectively.

To learn more, including customer quotes, see 3 ways to improve productivity and reduce costs with Microsoft Teams Phone.

What’s new in Cost Management Labs

With Cost Management Labs, you get a sneak peek at what’s coming in Microsoft Cost Management and can engage directly with us to share feedback and help us better understand how you use the service, so we can deliver more tuned and optimized experiences. Here are a few features you can see in Cost Management Labs:

New: Settings in the cost analysis preview—enabled by default in Labs. Get quick access to cost-impacting settings from the cost analysis preview. You will see this by default in Labs and can enable the option from the Try preview menu.

Update: Customers view for Cloud Solution Provider (CSP) partners—now enabled by default in Labs. View a breakdown of costs by customer and subscription in the cost analysis preview. Note that this view is only available for CSP billing accounts and billing profiles. You will see this by default in Labs and can enable the option from the Try preview menu.

Merge cost analysis menu items. Show only one cost analysis item in the Cost Management menu. All classic and saved views are one click away, making them easier than ever to find and access. You can enable this option from the Try preview menu.

Recommendations view. View a summary of cost recommendations that help you optimize your Azure resources in the cost analysis preview. You can opt in using the Try preview menu.

Forecast in the cost analysis preview. Show your forecast cost for the period at the top of the cost analysis preview. You can opt in using the Try preview menu.

Group related resources in the cost analysis preview. Group related resources, like disks under virtual machines or web apps under App Service plans, by adding a “cm-resource-parent” tag to the child resources with a value of the parent resource ID; see the sketch after this list.

Charts in the cost analysis preview. View your daily or monthly cost over time in the cost analysis preview. You can opt in using the Try preview menu.

View cost for your resources. The cost for your resources is one click away from the resource overview in the preview portal. Just click View cost to quickly jump to the cost of that resource.

Change scope from the menu. Change scope directly from the menu for quicker navigation. You can opt in using the Try preview menu.
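
For the resource-grouping feature above, here is a hedged sketch of setting the “cm-resource-parent” tag with the Azure SDK for Python; the subscription and resource IDs are placeholders, and the exact tag-operation method names vary across azure-mgmt-resource versions:

```python
# Hedged sketch: tag a disk so cost analysis groups it under its VM.
# Requires azure-identity and azure-mgmt-resource; method names may
# differ slightly between SDK versions.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

sub = "00000000-0000-0000-0000-000000000000"  # placeholder subscription
client = ResourceManagementClient(DefaultAzureCredential(), sub)

disk_id = (f"/subscriptions/{sub}/resourceGroups/demo-rg/providers/"
           "Microsoft.Compute/disks/demo-disk")
vm_id = (f"/subscriptions/{sub}/resourceGroups/demo-rg/providers/"
         "Microsoft.Compute/virtualMachines/demo-vm")

# Merge the cm-resource-parent tag onto the child resource; cost
# analysis then rolls the disk's cost up under the parent VM.
client.tags.begin_update_at_scope(
    disk_id,
    {"operation": "Merge",
     "properties": {"tags": {"cm-resource-parent": vm_id}}},
).result()
```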

Of course, that’s not all. Every change in Microsoft Cost Management is available in Cost Management Labs a week before it’s in the full Azure portal or Microsoft 365 admin center. We’re eager to hear your thoughts and understand what you’d like to see next. What are you waiting for? Try Cost Management Labs today.

New ways to save money in the Microsoft Cloud

Lots of cost optimization improvements over the last month! Here are 10 general availability offers you might be interested in:

Azure Kubernetes Service introduces new Free and Standard pricing tiers.

Spot priority mix for Virtual Machine Scale Sets (VMSS).

More transactions at no additional cost for Azure Standard SSD storage.

Arm-based VMs now available in four additional Azure regions.

New General-Purpose VMs—Dlsv5 and Dldsv5.

Azure Cosmos DB for PostgreSQL cluster compute start and stop.

New burstable SKUs for Azure Database for PostgreSQL—Flexible Server.

Azure Database for PostgreSQL—Flexible Server in Australia Central.

App Configuration geo-replication.

And six new preview offers:

New Memory Optimized VM sizes—E96bsv5 and E112ibsv5.

Azure HX series and HBv4 series virtual machines.

Azure Container Apps offers new plan and pricing structure.

Read-write premium caching for Azure HPC Cache.

In-place scaling for enterprise caches in Azure Redis Cache.

Azure Chaos Studio is now available in Brazil South region.

New videos and learning opportunities

Here’s one new video you might be interested in:

Optimize IT investments to maximize efficiency and reduce cloud spend (10 minutes).

Follow the Microsoft Cost Management YouTube channel to stay in the loop with new videos as they’re released and let us know what you’d like to see next.

Want a more guided experience? Start with Control Azure spending and manage bills with Microsoft Cost Management.

Documentation updates

Here are a few documentation updates you might be interested in:

New: Calculate Enterprise Agreement (EA) savings plan cost savings.

Updated: Understand usage details fields.

Updated: Group and allocate costs using tag inheritance.

Updated: Allocate Azure costs.

Updated: EA Billing administration on the Azure portal.

Updated: Create a Microsoft Customer Agreement subscription.

Updated: Change an Azure reservation directory.

Updated: Optimize Azure Synapse Analytics costs with a Pre-Purchase Plan.

22 updates based on your feedback.

Want to keep an eye on all documentation updates? Check out the Cost Management and Billing documentation change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request. You can also submit a GitHub issue. We welcome and appreciate all contributions!

What’s next?

These are just a few of the big updates from last month. Don’t forget to check out the previous Microsoft Cost Management updates. We’re always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow @MSCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. You can also share ideas and vote up others in the Cost Management feedback forum or join the research panel to participate in a future study and help shape the future of Microsoft Cost Management.

We know these are trying times for everyone. Best wishes from the Microsoft Cost Management team. Stay safe and stay healthy.

1 Forrester Consulting, “The Total Economic Impact™ of Azure Cost Management and Billing”, February 2021.
Source: Azure

What’s new with Azure Files

Azure Files provides fully managed file shares in the cloud that you can access from anywhere using standard protocols such as Server Message Block (SMB) or Network File System (NFS). 

Since announcing the general availability of support for the Network File System (NFS) v4.1 protocol back in December 2021, we have seen customers leveraging this capability for a wide variety of important use cases, including enterprise resource planning (ERP) solutions, development and test environments (DevOps), content management systems (CMS), and mission-critical workloads like SAP. We’re thrilled to share that SAP Enterprise Cloud Services (ECS) has adopted Azure Files NFS as the default choice for deploying SAP NetWeaver servers and SAP HANA shared directories on Azure. SAP’s decision to include Azure file shares is a testament to the fact that they’re a cost-effective choice for mission-critical workloads requiring high performance and high availability. We’ve also continued to listen to customer feedback and are very excited to announce several highly anticipated features, including a 99.99 percent uptime SLA, snapshot support, and nconnect.

NFS Azure file shares are now the default option for SAP Enterprise Cloud Services (ECS) deployments

Azure file shares provide the functionality, performance, and reliability required to keep your SAP applications running smoothly. Being a fully managed service brings simplicity and more cost effectiveness than alternatives, such as building NFS cluster (DRBD) file shares, especially when considering redundancy. SAP and Microsoft partnered to rigorously validate the use of Azure Files in high-availability deployments for Azure SAP RISE, where it is now offered by default for deployment of SAP NetWeaver servers and SAP HANA shared directories. We’re excited that SAP themselves have chosen Azure Files to help power many of the world’s largest and most complex workloads.

“Partnering with Microsoft and the Azure Files team was very productive. Our teams worked closely together to enable new highly available solutions around NFS shares and lower cost structures. The zonal replication capabilities that Azure Files provides strengthen and simplify SAP RISE architectures on Azure beyond what we could deploy with any other technology on Azure. We expect to reduce costs both directly and indirectly by using this service. With the lower time-to-market now achieved with this simplified architecture, we can bootstrap more deployments rather quickly and earn new business.”—Lalit Patil, Chief Technology Officer, SAP Enterprise Cloud Services.

To learn more about running SAP workloads on Azure, see the following articles:

High availability for SAP NetWeaver (RHEL)

High availability for SAP NetWeaver (SLES)

High availability for HANA scale-out system with HSR (SLES)

Additionally, you can use Azure Center for SAP Solutions (Preview) to deploy a highly available S/4HANA system with NFS on Azure Files.

One such customer, Germany-based Munich Re, has benefited from using Azure Files NFS in its SAP deployments. Munich Re, one of the world’s leading reinsurance companies, runs one of the largest SAP environments in Europe. The company took a keen interest in Azure Files and has been using it in production since the NFS protocol became generally available. With Azure Files, they can deploy a file share with just a few clicks. It used to take Munich Re four to six months to add resources; with SAP on Azure and their infrastructure automation, they can now do it within an hour.

“We love how easy Azure Files is to use and manage, and we certainly appreciate its interoperability with other Azure services. And having a fully managed service eliminates the burden and costs of managing NFS servers.”—Matthias Spang, Technical Architect for SAP Solutions, Munich Re.
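For teams automating deployments the way Munich Re does, shares can also be provisioned from a script. Below is a hedged Azure CLI sketch with placeholder resource group, account, and share names; note that NFS shares require a premium FileStorage account with secure transfer (HTTPS-only) disabled.

# Create a premium FileStorage account (NFS requires the premium tier)
az storage account create --name mystorageaccount --resource-group myresourcegroup --location westeurope --sku Premium_LRS --kind FileStorage --https-only false

# Create a 1 TiB NFS file share in that account
az storage share-rm create --storage-account mystorageaccount --resource-group myresourcegroup --name myshare --quota 1024 --enabled-protocols NFS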

High-availability (HA) SAP solutions need a highly available file share for hosting sapmnt, transport, and interface directories. You can use Azure Files premium NFS with availability sets and availability zones.

Figure 1 – High-availability (HA) SAP NetWeaver system with Azure Files.

A highly available SAP HANA system in a scale-out configuration with HANA system replication (HSR) and Pacemaker needs shared file systems that are accessible from all hosts in the SAP HANA system. Azure Files premium NFS satisfies this use case.

Figure 2 – Azure Files NFS for SAP HANA scale-out system with Pacemaker cluster. Note: Azure Files is used for /hana/shared and not for storing DBMS data or logs.
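To make the shared file system available on every HANA host, the share is typically mounted persistently via /etc/fstab. A minimal sketch, assuming a placeholder storage account and a hypothetical share named hanashared:

# /etc/fstab entry mounting an NFS Azure file share at /hana/shared (placeholder names)
mystorageaccount.file.core.windows.net:/mystorageaccount/hanashared /hana/shared nfs vers=4,minorversion=1,sec=sys 0 0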

New SLA of 99.99 percent uptime for Azure Files Premium Tier is generally available

In today’s world of digital business, downtime is not an option. Azure Files now offers a 99.99 percent SLA per file share for its Premium Tier. The new 99.99 percent uptime SLA applies to all Azure Files Premium shares, regardless of protocol (SMB, NFS, or REST) or redundancy type (locally redundant storage (LRS) or zone-redundant storage (ZRS)). This means that you can benefit from this SLA immediately, without any configuration changes or extra costs.

With this new SLA, you can be confident that your data is highly available. If the availability drops below the guaranteed 99.99 percent uptime, you’re eligible for service credits.

Furthermore, Azure Files offers a ZRS option with twelve 9s (99.9999999999 percent) of data durability. This means you can trust that your data is safe, even in the face of hardware failures or other unexpected events.

With the new 99.99 percent uptime SLA for the Azure Files Premium Tier, you can be confident that your data is always available. By leveraging the latest cloud technologies, Azure Files delivers a reliable and durable storage solution that meets the needs of even the most demanding workloads.

Snapshot support for NFS file shares (Preview)

While it’s rare, data corruption or accidental deletion can happen to anyone, and you need to be protected. File share snapshots protect your data from these events by ensuring you have a crash consistent dataset to recover from. File share snapshots capture the share state at a point in time, are immutable (read-only), and are differential (delta copies to keep your TCO low).

Snapshots are easy to manage and use in Azure Files. Creating a snapshot is instantaneous. Once created, you can manage snapshots using the Azure portal, REST API, Azure CLI, or PowerShell. From within NFS clients, you can enumerate snapshots, browse files and folders, and copy data through the “.snapshots” folder at the root of the mount path.

Figure 3 – List, browse, and copy from your snapshots from any connected client.
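In practice, recovering a file from a snapshot can be as simple as browsing the “.snapshots” folder from any mounted client. A sketch, assuming a share mounted at /mount/myshare and a hypothetical snapshot name:

# List available snapshots at the root of the mount path
ls /mount/myshare/.snapshots

# Copy a single file out of a point-in-time snapshot
cp /mount/myshare/.snapshots/mysnapshot/data/report.csv /mount/myshare/data/report.csv

# Or replicate from a snapshot to guarantee a crash-consistent, point-in-time copy
rsync -a /mount/myshare/.snapshots/mysnapshot/ /backup/myshare/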

Data protection is a key enterprise promise and a compliance requirement for many organizations. To date, our customers have fulfilled this requirement by doing their own replication or using one of our backup partners to copy the primary data from the share to another location. You can use snapshots to enhance these solutions. By replicating from snapshots instead of the primary share, you can ensure that the data being copied is all from a specific point in time.

Are you as excited as we are? If so, fill out the enrollment form to be notified when this preview feature becomes available in the region of your choice.

NFS nconnect support for boosting per-client performance is generally available

Azure Files recently announced support for nconnect on its NFS shares. By using nconnect, you can improve a multi-core client’s throughput and IOPS by up to four times without making any changes to the application itself. Previously, applications were limited by the bandwidth of a single TCP connection on a single core. Nconnect is ideal for throughput-intensive scenarios like data ingestion, analytics, machine learning, DevOps, ETL pipelines, batch processing, and more. For example, financial services firms performing Value at Risk (VaR) Monte Carlo or machine learning model data simulations on Azure Kubernetes Service (AKS) with Azure Files can now perform more risk calculations with fewer client machines in their AKS node pools. With the parallelism that nconnect enables, you can complete your throughput-intensive simulations in less time, allowing you to scale down the compute resources sooner and reduce overall TCO.

In addition to delivering higher performance, nconnect enhances fault tolerance by allowing the client to switch to an alternative TCP connection in the event of a connection failure. By enabling the client to use multiple connections, nconnect provides greater flexibility and load balancing for machines with multiple network paths.

Using nconnect is simple. For example, to use four channels on a Linux VM, add the “-o nconnect=4” option to the mount command. To use nconnect with an AKS cluster, use the same Azure Files NFS CSI driver for your persistent volume and add “nconnect=4” under “mountOptions”. Nconnect is available in all regions where NFS is supported, at no additional cost. Please visit our product page to learn more.
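Putting that together, here is a hedged sketch of both approaches: a direct Linux mount with nconnect, and a static AKS persistent volume that passes nconnect through the Azure Files NFS CSI driver. All resource names are placeholders, and you should check the CSI driver documentation for the exact volumeAttributes your driver version expects.

# Mount with four TCP channels on a Linux VM (placeholder account and share names)
sudo mount -t nfs -o vers=4,minorversion=1,sec=sys,nconnect=4 mystorageaccount.file.core.windows.net:/mystorageaccount/myshare /mount/myshare

# Sketch of a static AKS PersistentVolume that adds nconnect under mountOptions
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: azurefile-nfs-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - nconnect=4
  csi:
    driver: file.csi.azure.com
    volumeHandle: azurefile-nfs-pv-handle   # any cluster-unique ID
    volumeAttributes:
      resourceGroup: myresourcegroup
      storageAccount: mystorageaccount
      shareName: myshare
      protocol: nfs
EOF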

With nconnect, a single client can achieve:

Up to 1100 MiB/s of write throughput

Up to 1700 MiB/s of read throughput

Learn more

Learn how to create an NFS Azure file share.

Check out the NFS FAQ for more information.

With these new features, Azure Files NFS is poised to power even more of the world’s largest and most complex workloads, delivering superior functionality, performance, and reliability to customers across a range of industries and use cases. We believe that the new 99.99 percent SLA, snapshot capabilities, and higher performance with nconnect will unlock many more use cases and applications. We’re excited to see how you take advantage of these new capabilities!

If you have a feature request or feedback, don’t hesitate to reach out to the Azure Files team by emailing azurefiles@microsoft.com or filling out this form.
Source: Azure