Navigating the SPACE between productivity and developer happiness

Early in my career, I worked as a developer and system administrator. I loved my teams and projects, and I noticed that many of the things engineers talked about when we were really getting work done (“being productive”) just didn’t make it into the weekly or monthly reports our management seemed to care about. The reports captured only a few things, like the tests we had executed, shown in burndown charts, and the number of bugs closed. While those metrics were important, they missed the rest of the work that really contributed to our projects shipping and our systems staying online: being able to focus, working well with teams, and solving hard problems. To reflect our renewed focus on the overall developer experience, I am excited to share that we are rebranding the Developer Velocity Lab to the Developer Experience Lab. And that’s just the start.

The SPACE framework and new joint research with Vista Equity Partners to help developers

Metrics that only look at activities, or that focus purely on speed and volume, don’t capture the important capabilities required to make a project successful. They also miss the ways that tools, culture, and processes intersect to help or hinder the code’s journey to the customer. I realized that by focusing on output instead of outcomes, organizations were getting only a partial view of what it means to make an impact building systems and software; this is truer today than ever before, with increasingly complex systems and changing market and customer demands.

This led me to a line of research that became my first book, Accelerate: The Science of Lean Software and DevOps. Exploring these ideas further with Microsoft and GitHub, we released the SPACE framework, a holistic way to evaluate developer productivity across five dimensions: Satisfaction, Performance, Activity, Communication, and Efficiency. We also investigated ways to help developers have better days more consistently and found that the developer experience is a central factor not only in personal productivity, but also in well-being and satisfaction; the Good Day Project shares our findings and continues to influence teams and projects.
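
To make the framework’s multi-dimensional nature concrete, here is a minimal, illustrative sketch of how a team might record a snapshot across the five dimensions. The field names, types, and example metrics are assumptions for illustration only; SPACE deliberately does not prescribe specific measures.

```rust
// Illustrative sketch only: SPACE does not prescribe these metrics or types.
// The point is that productivity is a profile across five dimensions,
// not a single number such as commits or lines of code.
struct SpaceSnapshot {
    satisfaction: f32,  // e.g., a developer satisfaction survey score
    performance: f32,   // e.g., an outcome measure such as change failure rate
    activity: u32,      // e.g., completed work items in the period
    communication: f32, // e.g., review turnaround or collaboration rating
    efficiency: f32,    // e.g., share of time spent in uninterrupted flow
}

fn main() {
    let this_sprint = SpaceSnapshot {
        satisfaction: 0.8,
        performance: 0.9,
        activity: 42,
        communication: 0.7,
        efficiency: 0.6,
    };
    // A report built from a snapshot like this surfaces trade-offs that any
    // single activity count would hide.
    println!(
        "S={} P={} A={} C={} E={}",
        this_sprint.satisfaction,
        this_sprint.performance,
        this_sprint.activity,
        this_sprint.communication,
        this_sprint.efficiency
    );
}
```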

Today, Microsoft and GitHub are expanding this vision by applying our research to help build tools and environments that help developers do what they do best: create. As part of this effort, we’re announcing new research with Vista Equity Partners, a leading global asset manager with more than two decades of experience investing exclusively in enterprise software, data, and technology-enabled organizations. 

Beyond velocity: A holistic way to understand software developers

Productivity in the software world can’t be boiled down to lines of code written, commits made, or pull requests completed. Often, fewer lines of elegant, easy-to-read code are better than large, complex blocks.

There is much more to developers’ work than just writing code, too. Developers contribute to the success of their teams by doing work that doesn’t show up in traditionally measured activity metrics: the stand-up meetings and collaborations that help a software project stay on course, the contributions to project docs and architectural diagrams, and the times someone grabs a coffee to mentor a colleague or stops by to help debug some code. How do we fold these intangibles into the productivity discussion?

We also know there’s a strong correlation between process efficiency and job satisfaction. Streamlining tasks and processes can help facilitate developers’ abilities to find their flow state and string together those good, productive days.

By shifting the name of Microsoft and GitHub’s joint research lab from the Developer Velocity Lab to the Developer Experience Lab, we’re putting developers and their experience at the center of this discussion and focusing on a holistic approach that considers the individual, organizational, and community outcomes that really matter. The SPACE framework was developed to make sense of this complexity; beyond that, it gives us a multi-dimensional blueprint for creating fulfilling experiences that recognize developer happiness and well-being as key components of work and productivity.

The new Developer Experience Lab

The goals for our work at Microsoft and GitHub through the Developer Experience Lab are to remove friction in the developer experience, advance DevOps practices, and resolve the technical and real-world inefficiencies that keep code from reaching the cloud.

As part of that, this week we’re announcing new research with Vista Equity Partners that provides a deeper look into what developers want and need.

As expected, our research found that the capabilities and user experience of development tools play a huge role in developers’ ability to focus and innovate—and the importance of tools goes beyond just providing a place to code. Over the past few years, remote and hybrid work has become the norm, and developers rely on their tools to facilitate the collaboration, connection, and work processes that are so critical to building software. 

Findings like these are guiding how we think about supporting developers in the field. The Developer Experience Lab is connecting what we’re learning about developer happiness to our policy guidance and to Microsoft’s next generation of developer tools, including some groundbreaking work with AI.

AI as your copilot

Along with the monumental shift to hybrid work, AI is making headlines across industries. We’re already seeing its impact on software development, and we’re imagining ways to pair AI tools with human programmers to amplify developers’ abilities and help spark innovation.

To this end, we’ve developed and released GitHub Copilot, an AI assistant that works across GitHub apps. As the name implies, GitHub Copilot is a tool that works alongside people to augment and assist their work. For developers, that means handling tasks that would typically cause an interruption, such as locating a code library, building repetitive infrastructure, or spotting bugs. Native GitHub Copilot integrations simplify everything from pull requests to code reviews, and they’re delivered through an engaging, streamlined interface.

Looking ahead, we’re also thinking about how we can use AI to help organizations evaluate their level of skill, productivity, and developer happiness within the context of SPACE. By helping organizations find the most useful metrics for their environment and applying advanced analytics, we can make it easier for them to optimize processes and engage with developers.

Developers, too, have long found value in tracking their own productivity, both to assess their own skills and methodologies and to improve collaboration. We’ll continue to innovate here as well, exploring how to deliver high-value insights so developers can get the most out of their days.

Providing the right experience to build better code

As the demand for software innovation continues to boom, there is increasing pressure on developers tasked with building the future. Studying their complex world of code, products, policies, communities, and culture is a passion of mine.

I’m excited to be a researcher here at Microsoft, where we can reimagine and research the future of the developer experience. The Developer Experience Lab team is a group of experts from a variety of backgrounds conducting socio-technical research. This allows us to ask deep, interesting questions about the developer experience and how to best enable it, and then amplify those findings through new tools, technologies, and best practices.

Learn more about the Developer Experience Lab

We are still in the early stages of this journey, and we hope you’ll join us on the ride. You can stay up to date on everything we’re working on at Developer Experience Lab.

Microsoft Azure security evolution: Embrace secure multitenancy, Confidential Compute, and Rust

In the first blog of our series on Azure Security, we delved into our defense-in-depth approach for tackling cloud vulnerabilities. The second blog highlighted our use of variant hunting to detect patterns of vulnerabilities across our services. In this installment, we will introduce the game-changing bets that will enable us to deliver industry-leading security architectures with built-in security for years to come, ensuring a secure cloud experience for our customers. We will discuss our focus on secure multitenancy and share our vision for harnessing the power of Confidential Compute and the Rust programming language to protect our customers’ data from cyber threats. These investments give customers robust, built-in security measures that not only protect their data but also enhance the overall cloud experience, giving them the confidence to innovate and grow their businesses securely.

Secure multitenancy with robust compute, network, and credential isolation

In our first blog, we touched on the benefits we’ve seen from improvements in compute, network, and credential isolation. Now, we want to dive deeper into what this means. For compute isolation, we’re investing heavily in hardware-based virtualization (HBV), the foundation for running untrusted code in Azure. Traditional virtual machines are at the core of many Azure services hosting customer workloads. Our current bounty of up to USD 250,000 for Microsoft Hyper-V vulnerabilities demonstrates our confidence in this defense and highlights the importance of this boundary.

Our innovations with HBV extend beyond traditional virtual machines (VMs). Azure Container Instances (ACI) serve as our platform for running container workloads, utilizing HBV to isolate container groups from each other. ACI container groups take advantage of the same HBV that powers Azure Virtual Machines, but they offer a platform tailored for modern container-based applications. Numerous new and existing services are moving to ACI as a simple, high-performance model for secure multitenancy. Building services atop secure foundations like ACI enables us to address many isolation problems centrally, allowing multiple services to benefit from fixes simultaneously. Furthermore, we’re excited to introduce HBV to Kubernetes workloads via industry-standard Kata Container support in Azure Kubernetes Service. Similar to ACI container groups, Kata Container pods utilize HBV for robust isolation of untrusted workloads. In the coming months, we’ll share more about our efforts to bring this approach to WebAssembly hosting, with single-millisecond overhead compared to hosting WebAssembly without HBV.

For network isolation, we’re shifting services toward dedicated virtual networks per tenant and ensuring support for Private Link, which enables our services to communicate directly with customer-managed virtual networks. Shared networks have proven error-prone, with mistakes in network access control lists or subnets leading to inadequate network isolation between tenants. Dedicated virtual networks make it difficult to accidentally enable connectivity between tenants that should remain separate.

Credential isolation, on the other hand, involves using credentials scoped to the resources of a single tenant whenever possible. Employing credentials with minimal permissions ensures that even if vulnerabilities are discovered, credentials providing access to other tenants’ data aren’t readily available.

Through significant investments in HBV and a focus on compute, network, and credential isolation, Azure is providing customers with enhanced security and isolation for their workloads. By developing innovative solutions such as Azure Container Instances, and bringing HBV to Kubernetes and WebAssembly hosting, we are creating a robust and secure multitenancy environment that protects data and improves the overall cloud experience. As we continue to strengthen Azure’s security foundation, we are also exploring new opportunities to further enhance our defense-in-depth approach. In the next section, we will discuss the role of Confidential Compute in adding an extra layer of protection to our customers’ data and workloads.  

Confidential Compute: A new layer of defense

Since the dawn of cloud computing in Azure, we’ve recognized the crucial role of HBV in running customer workloads on VMs. However, VMs only protect the host machine from malicious activity within the VM. In many cases, a vulnerability in the VM interface could allow a bad actor to escape to the host and, from there, fully access other customers’ VMs. Confidential Compute presents a new layer of defense against these attacks by preventing bad actors with access to the hosting environment from accessing the content running in a VM. Our goal is to leverage confidential VMs and confidential containers broadly across Azure services, adding this extra layer of defense to the VMs and containers our services use. This has the potential to reduce the blast radius of a compromise at any level in Azure. It is an ambitious goal, but one day using Confidential Compute should be as ubiquitous as other best practices, such as encryption in transit and encryption at rest, have become.

Rust as the path forward over C/C++

Decades of vulnerabilities have proven how difficult it is to prevent memory-corrupting bugs when using C/C++. While garbage-collected languages like C# or Java have proven more resilient to these issues, there are scenarios where they cannot be used. For such cases, we’re betting on Rust as the alternative to C/C++. Rust is a modern language designed to compete with the performance of C/C++, but with memory safety and thread safety guarantees built into the language. While we are not able to rewrite everything in Rust overnight, we’ve already adopted Rust in some of the most critical components of Azure’s infrastructure. We expect our adoption of Rust to expand substantially over time.
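
As a minimal illustration (not Azure code) of the guarantees this paragraph refers to, the sketch below shows the kind of aliasing mistake the Rust compiler rejects at build time, the same class of bug that becomes a use-after-free or data race in C/C++.

```rust
fn main() {
    let mut buffer = vec![1u8, 2, 3];

    {
        let view = &buffer; // immutable borrow of the buffer
        // Uncommenting the next line fails to compile (error E0502):
        // the buffer cannot be mutated while `view` still borrows it,
        // so the stale-reference bug never reaches production.
        // buffer.push(4);
        println!("first byte: {}", view[0]);
    } // the immutable borrow ends here

    buffer.push(4); // mutation is fine once no other references exist
    println!("buffer now has {} bytes", buffer.len());

    // `buffer` is freed exactly once, automatically, when it goes out of
    // scope; there is no manual free to forget or to call twice.
}
```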

Our unwavering commitment

Our commitment to secure multitenancy, Confidential Compute, and Rust represents a major investment that we’ll be making in the coming years. Fortunately, Microsoft’s security culture is among the strongest in the industry, empowering us to deliver on these ambitious bets. By prioritizing security as an integral component of our services, we are helping our customers to build and maintain secure, reliable, and scalable applications in the cloud, while ensuring their trust in our platform remains steadfast. 

Learn more

Read the previous two blogs in this series to learn how Azure leverages a defense-in-depth security approach and cloud variant hunting to learn from vulnerabilities and layer protection throughout every phase of design, development, and deployment.

Explore the built-in security features in our cloud platforms and technologies that help you be secure from the start. 

Join Azure Security engineering experts at Microsoft Build to engage in live Q&A around Azure’s robust defense-in-depth strategies, the intriguing world of cloud variant hunting, and maintaining secure multitenancy. Don’t miss this chance to enhance your skills and remain at the forefront of the ever-changing cybersecurity landscape.


Microsoft Build 2023: Innovation through Microsoft commercial marketplace

As we look forward to Microsoft Build 2023, I am inspired by the innovation coming from our ISV partners and SaaS providers building on the Microsoft Cloud.

In the past year, we’ve seen large-scale, generative AI models support the creation of new capabilities that expand our vision of the possible, improve productivity, and ignite creativity. The general availability of Azure OpenAI Service is helping developers apply these models to a variety of use cases such as natural language understanding, writing assistance, code generation, data reasoning, content summarization, and semantic search. With Azure’s enterprise-grade security and built-in responsible AI, the rate of innovation is growing exponentially.

Making new strides in AI

The Microsoft commercial marketplace makes it possible for customers to find, purchase, and deploy innovative applications and services to drive their business outcomes. At Microsoft Build 2023, we’re proud to highlight several partners with AI solutions available in the marketplace:

Orkes empowers developers to easily build reliable and secure AI applications, tools, and integrations on Azure with the Conductor open source microservices orchestration platform. With built-in elastic scaling and reliability, teams can more quickly bring applications to market.

Run:ai helps companies deliver AI faster and bridge the gap between data science and computing infrastructure by providing a high-performance compute virtualization layer for deep learning. This layer accelerates the training of neural network models and enables the development of large AI models, helping organizations in every industry accelerate AI innovation.

Statsig allows any company to experiment like big tech at a fraction of the cost. With advanced feature management tools such as automated A/B testing and integrated product analytics, developers can use data insights to learn faster and build better products.

Explore security solutions with our partners

As AI is experiencing rapid growth, security has never been more important. Companies of all sizes and across every industry are increasing their investments in cybersecurity. Partners specializing in security solutions that run on the Microsoft Cloud help customers reduce costs, close coverage gaps, and prevent even the most sophisticated attacks.

At Microsoft Build 2023, we’re excited to feature select partners with security solutions offered in the marketplace:

Anjuna is a multi-cloud confidential computing platform for complete data security and privacy. It features a unique trusted execution environment that leverages hardware-level isolation to intrinsically secure data and code in the cloud, so enterprises can run applications inside Azure confidential computing instances in minutes, without code changes.

Kovrr transforms cybersecurity data into actionable, financially quantified cyber risk mitigation recommendations that help organizations manage enterprise cyber risk exposure, decide which security controls to invest in, and optimize cyber insurance and capital management strategies.

Noname Security protects APIs from attacks in real time while detecting vulnerabilities and misconfigurations before they are exploited. It offers deeper visibility and security than API gateways, load balancers, and web application firewalls (WAFs), without requiring agents or network modifications.

Manage your cloud portfolio with the Microsoft commercial marketplace

The Microsoft commercial marketplace continues to grow and is becoming customers’ preferred method for managing their entire cloud portfolio.

Through the marketplace, customers can search across thousands of applications and services in a single catalog, creating a one-stop destination for all cloud needs including AI, security, data, infrastructure, and more. Solutions available on the marketplace are validated for compatibility with Microsoft applications, ensuring that customers can buy with confidence and deploy seamlessly on Azure.

For customers with enterprise agreements, purchases can be added directly to an Azure bill, simplifying the purchasing process and reducing the number of vendors to be paid separately. For organizations with a cloud consumption commitment, the entire purchase can count toward the remaining commitment. Thousands of applications in the marketplace are eligible to count toward an Azure commitment, including the solutions highlighted above—Orkes, Run:ai, Statsig, Anjuna, Kovrr, and Noname Security. With the Microsoft commercial marketplace, customers can get the innovative solutions they need to stay ahead in a competitive market while maximizing the value of their cloud investments.

Preparing for future health emergencies with Azure HPC

A once-in-a-century global health emergency accelerates worldwide healthcare innovation and novel medical breakthroughs, all supported by powerful high-performance computing (HPC) capabilities.

COVID-19 has forever changed how nations function in the globally interconnected economy. To this day, it continues to affect and shape how countries respond to health emergencies. COVID-19 has demonstrated just how interconnected our society is and how risks, threats, and contagions can have global implications for many aspects of our daily lives.

COVID-19 was the largest global health emergency in over a century, with nearly 762 million cases reported as of the end of March 2023, according to the World Health Organization. The National Center for Biotechnology Information points to the frequency and breadth of the new variants that continue to emerge at regular intervals. In response to this complex health crisis, the global healthcare community quickly mobilized to better understand the virus, learn its behavior, and work toward preventative treatment measures to minimize the damage to lives across the world. Globally, nations mobilized resources for frontline workers, offered social protection to those most severely affected, and provided vaccine access for the billions who needed it.

Recent technological innovations have provided the medical community with access to capabilities, such as HPC, that equipped healthcare professionals to better study, understand, and respond to COVID-19. Globally, healthcare innovators could access unprecedented computing power to design, test, and develop new treatments faster, better, and more iteratively than ever before.

Today, Azure HPC enables researchers to unleash the next generation of healthcare breakthroughs. For example, the computational capabilities offered by the Azure HPC HB-series virtual machines, powered by AMD EPYC™ CPU cores, allowed researchers to accelerate insights and advances in genomics, precision medicine, and clinical trials, backed by virtually unlimited high-performance bioinformatics infrastructure.

Since the beginning of COVID-19, companies have been leveraging Azure HPC to develop new treatments, run simulations, and test at scale, all in preparation for the next health emergency. Azure HPC is helping companies develop the treatments and capabilities that are ushering in the next generation of healthcare across the entire industry.

High-performance computing making a difference

A leading immunotherapy company partnered with Microsoft to leverage Azure HPC to perform detailed computational analyses of the spike protein structure of SARS-CoV-2. Because of the critical role the spike protein plays in allowing the virus to invade human cells, targeting it for study, analysis, and insight is a crucial step in developing treatments to combat the virus.

The company’s engineers and scientists collaborated with Microsoft and quickly deployed HPC clusters on Azure containing more than 1,250 graphics processing unit (GPU) cores. These GPUs are specifically designed for machine learning and similarly intense computational applications. The Azure HPC clusters augmented the company’s existing GPU clusters, which were already optimized for molecular modeling of proteins, antibodies, and antivirals, bringing a truly high-powered, scaled engagement approach to fruition.

By collaborating with Microsoft in this way and making use of the massive, networked computing capabilities and advanced algorithms enabled by Azure HPC, the company was able to generate working models in days rather than the months it would have taken by following traditional approaches.

This computing power will help bolster drug discovery and therapeutic development. By joining forces and bringing together the power of Azure HPC and cutting-edge immunotherapies, the collaboration contributed to models that allowed researchers to better understand the virus, find novel binding sites to fight it, and ultimately guide the development of future treatments and vaccines.

Powering pharmaceutical research and innovation

The healthcare industry is making remarkable strides in the development of cutting-edge treatments and innovations that are geared towards solving some of the world’s greatest healthcare challenges.

For example, researchers are leveraging HPC to transform their research and development efforts and accelerate the development of new life-saving treatments.

Using a technique that produces amorphous solid dispersions (ASDs), drug researchers break up active pharmaceutical ingredients and blend them with organic polymers to improve the dissolution rate, bioavailability, and solubility of drug delivery systems. Although a wonder of modern medicine, it is a highly complicated, often lab-based process that can take months.

Swiss-based Molecular Modelling Laboratory (MML), a leader in ASD screening, wanted to pivot its drug research and development to small organic and biomolecular polymers. This approach determines ASD stability prior to formulation, reveals new ASD combinations, enhances drug safety, and helps reduce drug development costs as well as delivery times.

MML chose to leverage Azure HPC resources, running on more than 18,000 Azure HBv2 virtual machines, to optimize high-throughput drug screening and active pharmaceutical ingredient solubility limit detection, with the aim of alleviating common development hurdles.

The adoption of Azure HPC has helped MML shift from a small start-up to an established business working with some of the top pharmaceutical companies in the world—all in a very short time.

For the global healthcare community, the computational power and scalability of Azure HPC present an unprecedented opportunity to accelerate pharmaceutical, medical, and health innovation. Azure HPC will continue playing a leading role in supporting the healthcare industry so it can respond optimally to any future global health emergency that may arise.

Next steps

To request a demo, contact HPCdemo@microsoft.com.

Learn more about Azure HPC.

High-performance computing documentation.

View our HPC cloud journey infographic.


Insights from the 2023 Open Confidential Computing Conference

I had the opportunity to participate in this year’s Open Confidential Computing Conference (OC3), hosted by our software partner, Edgeless Systems. This year’s event was particularly noteworthy due to a panel discussion on the impact and future of confidential computing. The panel featured some of the industry’s most respected technology leaders including Greg Lavender, Chief Technology Officer at Intel, Ian Buck, Vice President of Hyperscale and HPC at NVIDIA, and Mark Papermaster, Chief Technology Officer at AMD. Felix Schuster, Chief Executive Officer at Edgeless Systems, moderated the panel discussion, which explored topics such as the definition of confidential computing, customer adoption patterns, current challenges, and future developments. The insightful discussion left a lasting impression on me and my colleagues.

What is confidential computing?

When it comes to understanding what exactly confidential computing entails, it all begins with a trusted execution environment (TEE) that is rooted in hardware. This TEE protects any code and data placed inside it, while in use in memory, from threats outside the enclave. These threats include everything from vulnerabilities in the hypervisor and host operating system to other cloud tenants and even cloud operators. In addition to providing protection for the code and data in memory, the TEE also possesses two crucial properties. The first is the ability to measure the code contained within the enclave. The second property is attestation, which allows the enclave to provide a verified signature that confirms the trustworthiness of what is held within it. This feature allows software outside of the enclave to establish trust with the code inside, allowing for the safe exchange of data and keys while protecting the data from the hosting environment. This includes hosting operating systems, hypervisors, management software and services, and even the operators of the environment.
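
To make the measurement-and-attestation flow above concrete, here is a deliberately simplified sketch. Every type and function in it is hypothetical, standing in for the hardware-specific and service-specific APIs (such as a managed attestation service) that real workloads would use; it only illustrates the shape of the trust decision a relying party makes before releasing keys or data to an enclave.

```rust
// Hypothetical types for illustration; real attestation evidence formats are
// defined by the TEE hardware and by the attestation service that verifies them.
struct AttestationReport {
    code_measurement: Vec<u8>, // hash of the code loaded into the enclave
    signature: Vec<u8>,        // produced by a key rooted in the TEE hardware
}

// A relying party decides whether to release keys or data to the enclave.
fn establish_trust(report: &AttestationReport, expected_measurement: &[u8]) -> bool {
    // 1. Check the hardware-rooted signature over the report.
    // 2. Check that the measured code is exactly what we expect to be running.
    // Only if both hold is it safe to exchange data and keys with the enclave.
    verify_hardware_signature(report) && report.code_measurement == expected_measurement
}

// Hypothetical helper: in practice this chains back to the silicon vendor's
// certificates or to a managed attestation service rather than local logic.
fn verify_hardware_signature(report: &AttestationReport) -> bool {
    // Placeholder check so the sketch runs; a real verifier validates the
    // signature cryptographically against a hardware root of trust.
    !report.signature.is_empty()
}

fn main() {
    let report = AttestationReport {
        code_measurement: vec![0xAB; 32], // pretend hash of the enclave's code
        signature: vec![0x01; 64],        // pretend hardware-backed signature
    };
    let expected = vec![0xAB; 32];
    println!("trusted: {}", establish_trust(&report, &expected));
}
```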

As for what confidential computing is not: it is not other privacy-enhancing technologies (PETs) like homomorphic encryption or secure multiparty computation. It is hardware-rooted trusted execution environments with attestation.

In Azure, confidential computing is integrated into our overall defense in depth strategy, which includes trusted launch, customer managed keys, Managed HSM, Microsoft Azure Attestation, and confidential virtual machine guest attestation integration with Microsoft Defender for Cloud.

Customer adoption patterns

With regard to customer adoption scenarios for confidential computing, we see customers across regulated industries such as the public sector, healthcare, and financial services, ranging from private-to-public cloud migrations to cloud-native workloads. One scenario that I’m really excited about is multi-party computation and analytics, where multiple parties bring their data together, in what are now being called data clean rooms, to perform computation on that data and get back insights that are much richer than what they would have gotten from their own data sets alone. Confidential computing addresses the regulatory and privacy concerns around sharing this sensitive data with third parties. One of my favorite examples of this is in the advertising industry, where the Royal Bank of Canada (RBC) has set up a clean room solution that combines merchant purchasing data with RBC’s information about consumers’ credit card transactions to get a full picture of what the consumer is doing. Using these insights, RBC’s credit card merchants can then present consumers with precise offers tailored to them, all without RBC seeing or revealing any confidential information from the consumers or the merchants. I believe that this architecture is the future of advertising.

Another exciting multi-party use case is BeeKeeperAI’s application of confidential computing and machine learning to accelerate the development of effective drug therapies. Until recently, drug researchers have been hampered by inaccessibility of patient data due to strict regulations applied to the sharing of personal health information (PHI). Confidential computing removes this bottleneck by ensuring that PHI is protected not just at rest and when transmitted, but also while in use, thus eliminating the need for data providers to anonymize this data before sharing it with researchers. And it is not just the data that confidential computing is protecting, but also the AI models themselves. These models can be expensive to train and therefore are valuable pieces of intellectual property that need to be protected.

To allow these valuable AI models to remain confidential and still scale, Azure is collaborating with NVIDIA to deploy confidential graphics processing units (GPUs) on Azure, based on the NVIDIA H100 Tensor Core GPU.

Current challenges

Regarding the challenges facing confidential computing, they tended to fall into four broad categories:

Availability, regional, and across services. Newer technologies are in limited supply or still in development, yet Azure has remained a leader in bringing to market services based on Intel® Software Guard Extensions (Intel® SGX) and AMD Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP). We are the first major cloud provider to offer confidential virtual machines based on Intel® Trust Domain Extensions (Intel® TDX) and we look forward to being one of the first cloud providers to offer confidential NVIDIA H100 Tensor Core GPUs. We see availability rapidly improving over the next 12 to 24 months.

Ease of adoption for developers and end users. The first generation of confidential computing services, based on Intel SGX technology, required rewriting code and working with various open source tools to make applications confidential computing enabled. Microsoft and our partners have collaborated on these open source tools, and we have an active community of partners running their Intel SGX solutions on Azure. The newer generation of confidential virtual machines on Azure, using AMD SEV-SNP, a hardware security feature enabled by AMD Infinity Guard, and Intel TDX, lets users run off-the-shelf operating systems, lift and shift their sensitive workloads, and run them confidentially. We are also using this technology to offer confidential containers in Azure, which allow users to run their existing container images confidentially.

Performance and interoperability. We need to ensure that confidential computing does not mean slower computing. The issue becomes more important with accelerators like GPUs, where the data must be protected as it moves between the central processing unit (CPU) and the accelerator. Advances in this area will come from continued collaboration with standards committees such as the PCI-SIG, which has issued the TEE Device Interface Security Protocol (TDISP) for secure PCIe bus communication, and the CXL Consortium, which has issued the Compute Express Link™ (CXL™) specification for the secure sharing of memory among processors. They will also come from open source projects like Caliptra, which has created the specification, silicon logic, read-only memory (ROM), and firmware for implementing a Root of Trust for Measurement (RTM) block inside a system on chip (SoC).

Industry awareness. While confidential computing adoption is growing, awareness among IT and security professionals is still low. There is a tremendous opportunity for all confidential computing vendors to collaborate and participate in events aimed at raising awareness of this technology to key decision-makers such as CISOs, CIOs, and policymakers. This is especially relevant in industries such as government and other regulated sectors where the handling of highly sensitive data is critical. By promoting the benefits of confidential computing and increasing adoption rates, we can establish it as a necessary requirement for handling sensitive data. Through these efforts, we can work together to foster greater trust in the cloud and build a more secure and reliable digital ecosystem for all.

The future of confidential computing

When the discussion turned to the future of confidential computing, I had the opportunity to reinforce Azure’s vision for the confidential cloud, where all services will run in trusted execution environments. As this vision becomes a reality, confidential computing will no longer be a specialty feature but rather the standard for all computing tasks. In this way, the concept of confidential computing will simply become synonymous with computing itself.

Finally, all panelists agreed that the biggest advances in confidential computing will be the result of industry collaboration.

Microsoft at OC3

In addition to the panel discussion, Microsoft participated in several other presentations at OC3 that you may find of interest:

Removing our Hyper-V host OS and hypervisor from the Trusted Computing Base (TCB).

Container code and configuration integrity with confidential containers on Azure.

Customer managed and controlled Trusted Computing Base (TCB) with CVMs on Azure.

Enabling faster AI model training in healthcare with Azure confidential computing.

Project Amber—Intel’s attestation service.

Finally, I would like to encourage our readers to learn about Greg Lavender’s thoughts on OC3 2023.

All product names, logos, and brands mentioned above are properties of their respective owners. 

Microsoft Cost Management updates—April 2023

Whether you’re a new student, a thriving startup, or the largest enterprise, you have financial constraints, and you need to know what you’re spending, where it’s being spent, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Microsoft Cost Management comes in.

We’re always looking for ways to learn more about your challenges and how Microsoft Cost Management can help you better understand where you’re accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

FinOps Foundation announces a new specification project to demystify cloud billing data.

Centrally managed Azure Hybrid Benefit for SQL Server is generally available.

Scheduled alerts in Azure Government.

Register for Securely Migrate and Optimize with Azure.

Register for Optimize your IT costs with Azure Monitor.

Cut costs with AI-powered productivity in Microsoft Teams.

3 ways to reduce costs with Microsoft Teams Phone.

What’s new in Cost Management Labs.

New ways to save money with Microsoft Cloud.

New videos and learning opportunities.

Documentation updates.

Let’s dig into the details.

FinOps Foundation announced a new specification project to demystify cloud billing data

Microsoft partnered with FinOps Foundation and Google to launch FOCUS (FinOps Open Cost and Usage Specification), a technical project to build and maintain an open specification for cloud cost data. As one of the key contributors and principal steering committee members for this project, we’re incredibly excited about the potential value this will bring for organizations of all sizes.

Some of the benefits you’ll see include the ability to:

Better understand how you’re being charged across services and, especially, across cloud providers.

Reduce data ingestion and normalization requirements.

Streamline reporting and monitoring efforts, like cost allocation and showback.

Leverage shared guidance across the industry for how to monitor and manage costs.

FOCUS will play a major role in the evolution of the FinOps Framework and its guidance as it drives more consistency in how to analyze and communicate changes in cost, including anything from measuring key performance indicators (KPIs) to managing anomalies and commitment-based discounts to tracking resource utilization and more.

To learn more, read the FinOps Foundation announcement and join us at FinOps X, where we’ll announce an initial draft release. All FOCUS steering committee members will be on-site for deeper discussions about its roadmap and implementation.

Centrally managed Azure Hybrid Benefit for SQL Server is generally available

If you’re migrating from on-premises to the cloud, Azure Hybrid Benefit should be part of your cost optimization plan. Azure Hybrid Benefit is a licensing benefit that helps customers significantly reduce the costs of running their workloads in the cloud. It works by letting customers use their on-premises Windows Server and SQL Server licenses with active Software Assurance, or their subscription-enabled licenses, on Azure. You can also leverage active Linux subscriptions, including Red Hat Enterprise Linux or SUSE Linux Enterprise Server, running in Azure. Traditionally, you would track the licenses you’re using with Azure Hybrid Benefit internally and compare that against the Cost Management Power BI reports, which can be tedious. With centralized management, you can assign SQL Server licenses to individual subscriptions or share them across an entire billing account to let the cloud manage the licenses for you, maximizing your benefit and sustaining compliance with less effort.

Centralized management of Azure Hybrid Benefit for SQL Server is now generally available.

To learn more, see Azure Hybrid Benefit documentation.

Scheduled alerts in Azure Government

Last month, you saw the addition of scheduled alerts for built-in views in Cost analysis. This month, we’re happy to announce that scheduled alerts are now available for Azure Government. Scheduled alerts allow you to get notified on a daily, weekly, or monthly basis about changes in cost by sending a picture of a chart view in Cost analysis to a list of recipients. You can even send it to stakeholders who don’t have direct access to costs in the Azure portal. To learn more, see subscribe to scheduled alerts.

Register for Securely Migrate and Optimize with Azure

Did you know you can lower operating costs by up to 40 percent when you migrate Windows Server and SQL Server to Azure versus on-premises?1 Furthermore, you can improve IT efficiency and operating costs by up to 53 percent by automating management of your virtual machines in cloud and hybrid environments. To maximize the value of your existing cloud investments, you can utilize tools like Microsoft Cost Management and Azure Advisor. A recent study showed that our customers achieve up to 34 percent reduction in Azure spend in the first year by using Microsoft Cost Management. To learn more about how to achieve efficiency and maximize cloud value with Azure, join us and register for Securely Migrate and Optimize with Azure, a free digital event on Wednesday, April 26, 2023, 9:00 AM to 11:00 AM Pacific Time.

To learn more, see 5 reasons to join us at Securely Migrate and Optimize with Azure.

Register for Optimize your IT costs with Azure Monitor

Join the Azure Monitor engineering team on May 17, 2023 from 10:00 AM to 11:00 AM Pacific Time, as they continue to listen and respond to feedback to ensure your corporate priorities are kept at the forefront!

The Azure Monitor team introduced some new pricing plans that can drive costs down without compromising performance. The team has taken some of the key points, along with valuable guidance and best practices, and will share them during this webinar.

In this webinar, you will learn:

New Azure Monitor pricing plans and the different scenarios in which they can be applied.

Other levers that you can take advantage of to optimize your monitoring costs.

No-regret moves you can implement today to start realizing cost savings.

Register for Optimize your IT costs with Azure Monitor and join us on May 17, 2023 from 10:00 AM to 11:00 AM Pacific Time.

Cut costs with AI-powered productivity in Microsoft Teams

As we face economic uncertainties and changes to work patterns, organizations are searching for ways to optimize IT investments and re-energize employees to achieve business results. Now—more than ever—organizations need solutions to adapt to change, improve productivity, and reduce costs. Fortunately, modern tools powered by AI hold the promise to boost individual, team, and organizational-level productivity and fundamentally change how we work, including intelligent recap for meetings in Microsoft Teams Premium with AI-augmented video recordings, AI-generated notes, and AI-generated tasks and action items, reusable meeting templates, and more.

To learn more, see Microsoft Teams Premium: Cut costs and add AI-powered productivity.

3 ways to reduce costs with Microsoft Teams Phone

As the way we work evolves, today’s organizations need cost-effective, reliable telephony solutions that help them support flexible work and truly bridge the gap between the physical and digital worlds. Our customers are searching for products that help them promote an inclusive working environment and streamline communications. And they need solutions that simplify their technological footprint and cut the cost of legacy IT solutions and other non-essential expenses.

After examining the potential ROI that companies may realize by implementing Teams Phone, a recent study found that businesses could:

Reduce licensing and usage costs.

Minimize the burden on IT.

Help people save time and collaborate more effectively.

To learn more, including customer quotes, see 3 ways to improve productivity and reduce costs with Microsoft Teams Phone.

What’s new in Cost Management Labs

With Cost Management Labs, you get a sneak peek at what’s coming in Microsoft Cost Management and can engage directly with us to share feedback and help us better understand how you use the service, so we can deliver more tuned and optimized experiences. Here are a few features you can see in Cost Management Labs:

New: Settings in the cost analysis preview—Enabled by default in Labs. Get quick access to cost-impacting settings from the cost analysis preview. You will see this by default in Labs and can enable the option from the Try preview menu.

Update: Customers view for Cloud Solution Provider (CSP) partners—Now enabled by default in Labs. View a breakdown of costs by customer and subscription in the cost analysis preview. Note this view is only available for CSP billing accounts and billing profiles. You will see this by default in Labs and can enable the option from the Try preview menu.

Merge cost analysis menu items. Only show one cost analysis item in the Cost Management menu. All classic and saved views are one click away, making them easier than ever to find and access. You can enable this option from the Try preview menu.

Recommendations view. View a summary of cost recommendations that help you optimize your Azure resources in the cost analysis preview. You can opt in using the Try preview menu.

Forecast in the cost analysis preview. Show your forecast cost for the period at the top of the cost analysis preview. You can opt in using the Try preview menu.

Group related resources in the cost analysis preview. Group related resources, like disks under virtual machines or web apps under App Service plans, by adding a “cm-resource-parent” tag to the child resources with a value of the parent resource ID.

Charts in the cost analysis preview. View your daily or monthly cost over time in the cost analysis preview. You can opt in using the Try preview menu.

View cost for your resources. The cost for your resources is one click away from the resource overview in the preview portal. Just click View cost to quickly jump to the cost of that resource.

Change scope from the menu. Change scope from the menu for quicker navigation. You can opt in using the Try preview menu.

Of course, that’s not all. Every change in Microsoft Cost Management is available in Cost Management Labs a week before it’s in the full Azure portal or Microsoft 365 admin center. We’re eager to hear your thoughts and understand what you’d like to see next. What are you waiting for? Try Cost Management Labs today.

New ways to save money in the Microsoft Cloud

Lots of cost optimization improvements over the last month! Here are 10 general availability offers you might be interested in:

Azure Kubernetes Service introduces new Free and Standard pricing tiers.

Spot priority mix for Virtual Machine Scale Sets (VMSS).

More transactions at no additional cost for Azure Standard SSD storage.

Arm-based VMs now available in four additional Azure regions.

New General-Purpose VMs—Dlsv5 and Dldsv5.

Azure Cosmos DB for PostgreSQL cluster compute start and stop.

New burstable SKUs for Azure Database for PostgreSQL—Flexible Server.

Azure Database for PostgreSQL—Flexible Server in Australia Central.

App Configuration geo-replication.

And six new preview offers:

New Memory Optimized VM sizes—E96bsv5 and E112ibsv5.

Azure HX series and HBv4 series virtual machines.

Azure Container Apps offers new plan and pricing structure.

Read-write premium caching for Azure HPC Cache.

In-place scaling for enterprise caches in Azure Cache for Redis.

Azure Chaos Studio is now available in Brazil South region.

New videos and learning opportunities

Here’s one new video you might be interested in:

Optimize IT investments to maximize efficiency and reduce cloud spend (10 minutes).

Follow the Microsoft Cost Management YouTube channel to stay in the loop with new videos as they’re released and let us know what you’d like to see next.

Want a more guided experience? Start with Control Azure spending and manage bills with Microsoft Cost Management.

Documentation updates

Here are a few documentation updates you might be interested in:

New: Calculate Enterprise Agreement (EA) savings plan cost savings.

Updated: Understand usage details fields.

Updated: Group and allocate costs using tag inheritance.

Updated: Allocate Azure costs.

Updated: EA Billing administration on the Azure portal.

Updated: Create a Microsoft Customer Agreement subscription.

Updated: Change an Azure reservation directory.

Updated: Optimize Azure Synapse Analytics costs with a Pre-Purchase Plan.

22 updates based on your feedback.

Want to keep an eye on all documentation updates? Check out the Cost Management and Billing documentation change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request. You can also submit a GitHub issue. We welcome and appreciate all contributions!

What’s next?

These are just a few of the big updates from last month. Don’t forget to check out the previous Microsoft Cost Management updates. We’re always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow @MSCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. You can also share ideas and vote up others in the Cost Management feedback forum or join the research panel to participate in a future study and help shape the future of Microsoft Cost Management.

We know these are trying times for everyone. Best wishes from the Microsoft Cost Management team. Stay safe and stay healthy.

1 Forrester Consulting, “The Total Economic Impact™ of Azure Cost Management and Billing”, February 2021.

What’s new with Azure Files

Azure Files provides fully managed file shares in the cloud that you can access from anywhere using standard protocols such as Server Message Block (SMB) or Network File System (NFS). 

Since announcing the general availability of support for the NFS v4.1 protocol back in December 2021, we have seen customers leverage this capability for a wide variety of important use cases, including enterprise resource planning (ERP) solutions, development and test environments, content management systems (CMS), and mission-critical workloads like SAP. We’re thrilled to share that SAP Enterprise Cloud Services (ECS) has adopted Azure Files NFS as the default choice for deploying SAP NetWeaver servers and SAP HANA shared directories on Azure. SAP’s decision to include Azure file shares is a testament to the fact that they’re a cost-effective choice for mission-critical workloads requiring high performance and high availability. We’ve also continued to listen to customer feedback and are excited to announce several highly anticipated features, including a 99.99 percent uptime SLA, snapshot support, and nconnect.

NFS Azure file shares are now the default option for SAP Enterprise Cloud Services (ECS) deployments

Azure file shares provide the functionality, performance, and reliability required to keep your SAP applications running smoothly. A fully managed service brings simplicity and better cost effectiveness than alternatives such as building clustered NFS file shares with DRBD, especially when considering redundancy. SAP and Microsoft partnered to rigorously validate the use of Azure Files in high-availability deployments for RISE with SAP on Azure, where it is now offered by default for deployment of SAP NetWeaver servers and SAP HANA shared directories. We’re excited that SAP itself has chosen Azure Files to help power many of the world’s largest and most complex workloads.

“Partnering with Microsoft and the Azure Files team was very productive. Our teams worked closely together to enable new highly available solutions around NFS shares and lower cost structures. The zonal replication capabilities that Azure Files provides strengthen and simplify SAP RISE architectures on Azure beyond what we could deploy with any other technology on Azure. We expect to reduce costs both directly and indirectly by using this service. With the lower time-to-market now achieved with this simplified architecture, we can bootstrap more deployments rather quickly and earn new business.”—Lalit Patil, Chief Technology Officer, SAP Enterprise Cloud Services.

To learn more about running SAP workloads on Azure, see the following articles:

High availability for SAP NetWeaver (RHEL)

High availability for SAP NetWeaver (SLES)

High availability for HANA scale-out system with HSR (SLES)

Additionally, you can use Azure Center for SAP Solutions (Preview) to deploy a highly available S/4HANA system with NFS on Azure Files.

One such customer, Germany-based Munich Re, has benefited from using Azure Files NFS with its SAP deployments. Munich Re, one of the world’s leading insurance companies, runs one of the largest SAP environments in Europe. The company took a keen interest in Azure Files and has been using it in production since the NFS protocol became generally available. With Azure Files, they can deploy a file share with just a few clicks. It used to take Munich Re four to six months to add resources, but with SAP on Azure and their infrastructure automation, they can now do it within an hour.

“We love how easy Azure Files is to use and manage, and we certainly appreciate its interoperability with other Azure services. And having a fully managed service eliminates the burden and costs of managing NFS servers.”—Matthias Spang, Technical Architect for SAP Solutions, Munich Re.

High-availability (HA) SAP solutions need a highly available file share for hosting sapmnt, transport, and interface directories. You can use Azure Files premium NFS with availability sets and availability zones.

Figure 1 – High-availability (HA) SAP NetWeaver system with Azure Files.

A highly available SAP HANA system in a scale-out configuration with HANA system replication (HSR) and Pacemaker needs shared file systems for storing files shared among all hosts in the SAP HANA system. You can use Azure Files premium NFS to satisfy this use case.

Figure 2 – Azure Files NFS for SAP HANA scale-out system with Pacemaker cluster. Note: Azure Files is used for /hana/shared and not used for storing DBMS or logs.

New SLA of 99.99 percent uptime for Azure Files Premium Tier is generally available

In today’s world of digital business, downtime is not an option. Azure Files now offers a 99.99 percent uptime SLA per file share for its Premium Tier. The new SLA applies to all Azure Files Premium shares, regardless of protocol (SMB, NFS, or REST) or redundancy type (locally redundant storage (LRS) or zone-redundant storage (ZRS)). This means you can benefit from the SLA immediately, without any configuration changes or extra costs.

With this new SLA, you can be confident that your data is highly available. If the availability drops below the guaranteed 99.99 percent uptime, you’re eligible for service credits.
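
To put the 99.99 percent figure in perspective, here is a quick back-of-the-envelope calculation of the downtime budget such an SLA implies; the numbers follow directly from the definition of uptime.

```python
# Back-of-the-envelope downtime budget implied by a 99.99 percent uptime SLA.
sla = 0.9999
minutes_per_year = 365 * 24 * 60          # 525,600 minutes
minutes_per_month = minutes_per_year / 12

print(f"Allowed downtime per year:  {minutes_per_year * (1 - sla):.1f} minutes")   # ~52.6
print(f"Allowed downtime per month: {minutes_per_month * (1 - sla):.1f} minutes")  # ~4.4
```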

Furthermore, Azure Files offers a ZRS option with twelve nines (99.9999999999 percent) of durability. This means you can trust that your data is safe, even in the face of hardware failures or other unexpected events.

With the new 99.99 percent uptime SLA for Azure Files Premium Tier, you can have a high level of confidence and assurance that your data is always available. By leveraging the latest in cloud technologies and features, Azure Files delivers a reliable and durable storage solution that can meet the needs of even the most demanding workloads.

Snapshot support for NFS file shares (Preview)

While it’s rare, data corruption or accidental deletion can happen to anyone, and you need to be protected. File share snapshots protect your data from these events by ensuring you have a crash consistent dataset to recover from. File share snapshots capture the share state at a point in time, are immutable (read-only), and are differential (delta copies to keep your TCO low).

Snapshots are easy to manage and use in Azure Files. Creating a snapshot is instantaneous. Once created, you can manage snapshots using the Azure portal, REST API, Azure CLI, or PowerShell. You can also enumerate snapshots, browse files and folders, and copy data directly from NFS clients under the “.snapshots” folder at the root of the mount path.

Figure 3 – List, browse, and copy from your snapshots from any connected client.
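
To illustrate the client-side experience described above, here is a minimal sketch of listing snapshots and restoring a single file from one. The mount point and file path are hypothetical, and the share is assumed to be already mounted on a Linux client.

```python
import os
import shutil

mount_path = "/mnt/myshare"                      # hypothetical mount point of the NFS share
snapshot_root = os.path.join(mount_path, ".snapshots")

# Each entry under ".snapshots" is a read-only, point-in-time view of the share.
snapshots = sorted(os.listdir(snapshot_root))
print("Available snapshots:", snapshots)

# Restore a single file from the most recent snapshot back into the live share.
if snapshots:
    latest = snapshots[-1]
    source = os.path.join(snapshot_root, latest, "config", "app.yaml")   # hypothetical file
    destination = os.path.join(mount_path, "config", "app.yaml")
    shutil.copy2(source, destination)
    print(f"Restored config/app.yaml from snapshot {latest}")
```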

Data protection is a key enterprise promise and a compliance requirement for many organizations. To date, our customers have fulfilled this requirement by doing their own replication or using one of our backup partners to copy the primary data from the share to another location. You can use snapshots to enhance these solutions. By replicating from snapshots instead of the primary share, you can ensure that the data being copied is all from a specific point in time.

Are you as excited as we are? If so, you can fill out the enrollment form to get informed of the availability of this preview feature in the region of your choice.

Boosting per-client performance with NFS nconnect is generally available

Azure Files recently announced support for nconnect on its NFS shares. By using nconnect, you can improve a multi-core client’s throughput and IOPS by up to four times without making any changes to the application itself. Previously, applications were limited by the bandwidth of a single TCP connection handled by a single core. Nconnect is ideal for throughput-intensive scenarios like data ingestion, analytics, machine learning, DevOps, ETL pipelines, and batch processing. For example, financial services firms running Monte Carlo Value at Risk (VaR) or machine learning model simulations on Azure Kubernetes Service (AKS) with Azure Files can now perform more risk calculations with fewer client machines in their AKS node pools. With the parallelism that nconnect enables, you can complete throughput-intensive simulations in less time, allowing you to scale down compute resources sooner and reduce overall TCO.

In addition to delivering higher performance, nconnect enhances fault tolerance by allowing the client to switch to an alternative TCP connection in the event of a connection failure. By enabling the client to use multiple connections, nconnect provides greater flexibility and load balancing for machines with multiple network paths.

Using nconnect is simple. For example, to use four channels on a Linux VM, add the “-o nconnect=4” parameter to the mount command. You can also use nconnect with an AKS cluster persistent volume by using the same Azure Files NFS CSI driver and adding “nconnect=4” under “mountOptions”. Nconnect is available in all regions where NFS is supported, at no additional cost. Please visit our product page to learn more.

With nconnect, a single client can achieve:

Up to 1,100 MiB/s write throughput

Up to 1,700 MiB/s read throughput
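
As a concrete illustration of the mount options mentioned above, here is a minimal sketch that mounts an NFS Azure file share with nconnect=4 from a Linux VM by wrapping the mount command in Python. The storage account and share names are hypothetical; confirm the recommended mount options for your environment in the Azure Files NFS documentation.

```python
import subprocess

account = "mystorageaccount"     # hypothetical storage account with an NFS share
share = "myshare"                # hypothetical share name
mount_point = "/mnt/myshare"

subprocess.run(["mkdir", "-p", mount_point], check=True)

# Run as root (or via sudo). nconnect=4 opens four TCP connections to the share, so a
# multi-core client is no longer limited by the bandwidth of a single connection.
subprocess.run(
    [
        "mount", "-t", "nfs",
        f"{account}.file.core.windows.net:/{account}/{share}",
        mount_point,
        "-o", "vers=4,minorversion=1,sec=sys,nconnect=4",
    ],
    check=True,
)
```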

Learn more

Learn how to create an NFS Azure file share.

Check out the NFS FAQ for more information.

With these new features, Azure Files NFS is poised to power even more of the world’s largest and most complex workloads, delivering superior functionality, performance, and reliability to customers across a range of industries and use cases. We believe that the new 99.99 percent SLA, snapshot capabilities, and higher performance with nconnect will unlock many more use cases and applications. We’re excited to see how you take advantage of these new capabilities!

If you have a feature request or feedback, don’t hesitate to reach out to the Azure Files team by emailing azurefiles@microsoft.com or filling out this form.

Insights from the 2023 Open Confidential Computing Conference

I had the opportunity to participate in this year's Open Confidential Computing Conference (OC3), hosted by our software partner, Edgeless Systems. This year's event was particularly noteworthy due to a panel discussion on the impact and future of confidential computing. The panel featured some of the industry's most respected technology leaders including Greg Lavender, Chief Technology Officer at Intel, Ian Buck, Vice President of Hyperscale and HPC at NVIDIA, and Mark Papermaster, Chief Technology Officer at AMD. Felix Schuster, Chief Executive Officer at Edgeless Systems, moderated the panel discussion, which explored topics such as the definition of confidential computing, customer adoption patterns, current challenges, and future developments. The insightful discussion left a lasting impression on me and my colleagues.

What is confidential computing?

When it comes to understanding what exactly confidential computing entails, it all begins with a trusted execution environment (TEE) that is rooted in hardware. This TEE protects any code and data placed inside it, while in use in memory, from threats outside the enclave. These threats include everything from vulnerabilities in the hypervisor and host operating system to other cloud tenants and even cloud operators. In addition to providing protection for the code and data in memory, the TEE also possesses two crucial properties. The first is the ability to measure the code contained within the enclave. The second property is attestation, which allows the enclave to provide a verified signature that confirms the trustworthiness of what is held within it. This feature allows software outside of the enclave to establish trust with the code inside, allowing for the safe exchange of data and keys while protecting the data from the hosting environment. This includes hosting operating systems, hypervisors, management software and services, and even the operators of the environment.

As for what confidential computing is not: it is not other privacy-enhancing technologies (PETs) like homomorphic encryption or secure multiparty computation. It is a hardware-rooted trusted execution environment with attestation.
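
To make the measurement and attestation properties described above more concrete, here is a deliberately simplified, self-contained sketch of the idea in Python. It is not the Azure Attestation service or any vendor API: in a real TEE the attestation key is rooted in hardware and the verifier validates a certificate chain rather than holding the key itself.

```python
import hashlib
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Inside the "enclave": measure the loaded code and sign a report over it ---
enclave_code = b"def handle(request): ..."            # stand-in for the code placed in the TEE
measurement = hashlib.sha256(enclave_code).digest()   # the enclave's measurement (32 bytes)

attestation_key = Ed25519PrivateKey.generate()        # rooted in hardware in a real TEE
verifier_nonce = os.urandom(16)                       # freshness challenge from the verifier
report = measurement + verifier_nonce
signature = attestation_key.sign(report)

# --- Outside the enclave: verify the report before releasing any secrets ---
expected_measurement = hashlib.sha256(enclave_code).digest()
public_key = attestation_key.public_key()             # obtained via a trusted certificate chain

try:
    public_key.verify(signature, report)
    assert report[:32] == expected_measurement, "unexpected code measurement"
    assert report[32:] == verifier_nonce, "stale or replayed report"
    print("Attestation verified: safe to exchange keys with the enclave")
except InvalidSignature:
    print("Attestation failed: do not release secrets")
```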

In Azure, confidential computing is integrated into our overall defense in depth strategy, which includes trusted launch, customer managed keys, Managed HSM, Microsoft Azure Attestation, and confidential virtual machine guest attestation integration with Microsoft Defender for Cloud.

Customer adoption patterns

With regard to customer adoption scenarios for confidential computing, we see customers across regulated industries such as the public sector, healthcare, and financial services, with scenarios ranging from private-to-public cloud migrations to cloud native workloads. One scenario that I'm really excited about is multi-party computation and analytics, where multiple parties bring their data together, in what are now being called data clean rooms, to perform computation on that data and get back insights that are much richer than what they would have gotten from their own data sets alone. Confidential computing addresses the regulatory and privacy concerns around sharing this sensitive data with third parties. One of my favorite examples is in the advertising industry, where the Royal Bank of Canada (RBC) has set up a clean room solution that takes merchant purchasing data and combines it with RBC's information about consumers' credit card transactions to get a full picture of what the consumer is doing. Using these insights, RBC's credit card merchants can then present consumers with precise offers tailored to them, all without RBC seeing or revealing any confidential information from the consumers or the merchants. I believe that this architecture is the future of advertising.

Another exciting multi-party use case is BeeKeeperAI's application of confidential computing and machine learning to accelerate the development of effective drug therapies. Until recently, drug researchers have been hampered by the inaccessibility of patient data due to strict regulations on the sharing of personal health information (PHI). Confidential computing removes this bottleneck by ensuring that PHI is protected not just at rest and in transit, but also while in use, eliminating the need for data providers to anonymize this data before sharing it with researchers. And it is not just the data that confidential computing protects, but also the AI models themselves. These models can be expensive to train and are therefore valuable pieces of intellectual property that need to be protected.

To allow these valuable AI models to remain confidential yet scale, Azure is collaborating with NVIDIA to deploy confidential graphics processing units (GPUs) on Azure based on NVIDIA H100 Tensor Core GPU.

Current challenges

The challenges facing confidential computing tended to fall into four broad categories:

Availability, both regional and across services. Newer technologies are in limited supply or still in development, yet Azure has remained a leader in bringing to market services based on Intel® Software Guard Extensions (Intel® SGX) and AMD Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP). We are the first major cloud provider to offer confidential virtual machines based on Intel® Trust Domain Extensions (Intel® TDX), and we look forward to being one of the first cloud providers to offer confidential NVIDIA H100 Tensor Core GPUs. We see availability rapidly improving over the next 12 to 24 months.

Ease of adoption for developers and end users. The first generation of confidential computing services, based on Intel SGX technology, required rewriting code and working with various open source tools to make applications confidential computing enabled. Microsoft and our partners have collaborated on these open source tools, and we have an active community of partners running their Intel SGX solutions on Azure. The newer generation of confidential virtual machines on Azure, using AMD SEV-SNP (a hardware security feature enabled by AMD Infinity Guard) and Intel TDX, lets users run off-the-shelf operating systems, lift and shift their sensitive workloads, and run them confidentially. We are also using this technology to offer confidential containers in Azure, which allows users to run their existing container images confidentially.

Performance and interoperability. We need to ensure that confidential computing does not mean slower computing. The issue becomes more important with accelerators like GPUs, where data must be protected as it moves between the central processing unit (CPU) and the accelerator. Advances in this area will come from continued collaboration with standards bodies such as PCI-SIG, which has issued the TEE Device Interface Security Protocol (TDISP) for secure PCIe bus communication, and the CXL Consortium, which has issued the Compute Express Link™ (CXL™) specification for the secure sharing of memory among processors. Progress will also come from open source projects like Caliptra, which has created the specification, silicon logic, read-only memory (ROM), and firmware for implementing a Root of Trust for Measurement (RTM) block inside a system on chip (SoC).

Industry awareness. While confidential computing adoption is growing, awareness among IT and security professionals is still low. There is a tremendous opportunity for all confidential computing vendors to collaborate and participate in events aimed at raising awareness of this technology to key decision-makers such as CISOs, CIOs, and policymakers. This is especially relevant in industries such as government and other regulated sectors where the handling of highly sensitive data is critical. By promoting the benefits of confidential computing and increasing adoption rates, we can establish it as a necessary requirement for handling sensitive data. Through these efforts, we can work together to foster greater trust in the cloud and build a more secure and reliable digital ecosystem for all.

The future of confidential computing

When the discussion turned to the future of confidential computing, I had the opportunity to reinforce Azure's vision for the confidential cloud, where all services will run in trusted execution environments. As this vision becomes a reality, confidential computing will no longer be a specialty feature but rather the standard for all computing tasks. In this way, the concept of confidential computing will simply become synonymous with computing itself.

Finally, all panelists agreed that the biggest advances in confidential computing will be the result of industry collaboration.

Microsoft at OC3

In addition to the panel discussion, Microsoft participated in several other presentations at OC3 that you may find of interest:

Removing our Hyper-V host OS and hypervisor from the Trusted Computing Base (TCB).
Container code and configuration integrity with confidential containers on Azure.
Customer managed and controlled Trusted Computing Base (TCB) with CVMs on Azure.
Enabling faster AI model training in healthcare with Azure confidential computing.
Project Amber—Intel's attestation service.

Finally, I would like to encourage our readers to learn about Greg Lavender’s thoughts on OC3 2023.

 

All product names, logos, and brands mentioned above are properties of their respective owners. 

Preparing for future health emergencies with Azure HPC

A once-in-a-century global health emergency accelerates worldwide healthcare innovation and novel medical breakthroughs, all supported by powerful high-performance computing (HPC) capabilities.

COVID-19 has forever changed how nations function in the globally interconnected economy. To this day, it continues to affect and shape how countries respond to health emergencies. COVID-19 has demonstrated just how interconnected our society is and how risks, threats, and contagions can have global implications for many aspects of our daily lives.

COVID-19 was the largest global health emergency in over a century, with nearly 762 million cases reported as of the end of March 2023, according to the World Health Organization. The National Center for Biotechnology Information points out the frequency and breadth of new variants that continue to emerge at regular intervals. In response to this complex health crisis, the global healthcare community quickly mobilized to better understand the virus, learn its behavior, and work toward preventative treatment measures to minimize the damage to lives across the world. Globally, nations mobilized resources for frontline workers, offered social protection to those most severely affected, and provided vaccine access for the billions who needed it.

Recent technological innovations have provided the medical community with access to capabilities, such as HPC, that equip healthcare professionals to better study, understand, and respond to COVID-19. Globally, healthcare innovators could access unprecedented computing power to design, test, and develop new treatments faster, better, and more iteratively than ever before.

Today, Azure HPC enables researchers to unleash the next generation of healthcare breakthroughs. For example, the computational capabilities offered by the Azure HPC HB-series virtual machines, powered by AMD EPYC™ CPU cores, allowed researchers to accelerate insights and advances in genomics, precision medicine, and clinical trials, backed by massively scalable high-performance bioinformatics infrastructure.

Since the beginning of COVID-19, companies have been leveraging Azure HPC to develop new treatments, run simulations, and test at scale, all in preparation for the next health emergency. Azure HPC is helping companies deliver new treatments and capabilities that are ushering in the next generation of healthcare across the entire industry.

High-performance computing making a difference

A leading immunotherapy company partnered with Microsoft to leverage Azure HPC to perform detailed computational analyses of the spike protein structure of SARS-CoV-2. Because the spike protein plays a critical role in allowing the virus to invade human cells, targeting it for study and analysis is a crucial step in the development of treatments to combat the virus.

The company’s engineers and scientists collaborated with Microsoft and quickly deployed HPC clusters on Azure containing more than 1,250 graphics processing unit (GPU) cores. These GPUs are specifically designed for machine learning and similarly intense computational applications. The Azure HPC clusters augmented the company’s existing GPU clusters, which were already optimized for molecular modeling of proteins, antibodies, and antivirals, bringing a truly high-powered, scaled engagement to fruition.

By collaborating with Microsoft in this way and making use of the massive, networked computing capabilities and advanced algorithms enabled by Azure HPC, the company was able to generate working models in days rather than the months it would have taken by following traditional approaches.

This computing power will help bolster drug discovery and therapeutic development. By joining forces and bringing together the power of Azure HPC and cutting-edge immunotherapies, the collaboration contributed to models that allowed researchers to better understand the virus, find novel binding sites to fight it, and ultimately guide the development of future treatments and vaccines.

Powering pharmaceutical research and innovation

The healthcare industry is making remarkable strides in the development of cutting-edge treatments and innovations that are geared towards solving some of the world's greatest healthcare challenges.

For example, researchers are leveraging HPC to transform their research and development efforts and accelerate the development of new life-saving treatments.

Using a technique that produces amorphous solid dispersions (ASDs), drug researchers break up active pharmaceutical ingredients and blend them with organic polymers to improve the dissolution rate, bioavailability, and solubility of drug delivery systems. Although a wonder of modern medicine, it is a highly complicated, often lab-based process that can take months.

Swiss-based Molecular Modelling Laboratory (MML), a leader in ASD screening, wanted to pivot its drug research and development to small organic and biomolecular polymers. This approach determines ASD stability prior to formulation, reveals new ASD combinations, enhances drug safety, and helps reduce drug development costs as well as delivery times.

MML chose to leverage Azure HPC resources on more than 18,000 Azure HBv2 virtual machines to optimize high-throughput drug screening and active pharmaceutical ingredient solubility limit detection, with the aim of alleviating common development hurdles.

The adoption of Azure HPC has helped MML shift from a small start-up to an established business working with some of the top pharmaceutical companies in the world—all in a very short time.

For the global healthcare community, the computational power and scalability of Azure HPC present an unprecedented opportunity to accelerate pharmaceutical, medical, and health innovation. Azure HPC will continue to play a leading role in supporting the healthcare industry to respond optimally to any future global health emergency that may arise.

Next steps

To request a demo, contact HPCdemo@microsoft.com.
Learn more about Azure HPC.
High-performance computing documentation.
View our HPC cloud journey infographic.
