Dreaming big, traveling far, and expanding access to technology

Editor’s note: In honor of Black History Month, we’re talking to Cloud Googlers about what identity means to them and how their personal histories shape their work to influence the future of cloud technology. Albert Sanders, senior counsel for government affairs and public policy at Google Cloud, has worked in the White House, negotiated bipartisan deals in Congress, and recently addressed the United Nations General Assembly. His personal and professional travels have taken him to five continents—and he’s visited 11 (and counting!) countries in Africa. We sat down with Albert to hear more about his journey, some of his favorite moments, and advice on navigating a career.

Why did you choose a career in public policy?

I’ve seen the real-life benefit when policymakers and government agencies get it right—and the troubling consequences when they do not. For example, I went to a high school where most students qualified for free, publicly funded meals. I didn’t fully appreciate it at the time, but that meant many of my classmates were living at or below the poverty line, so school was often the only place they’d receive balanced, hot meals on a consistent basis. We had some incredibly dedicated teachers and administrators, but my high school also operated at about double its maximum capacity. There were sometimes not enough seats or textbooks, so some of us had to stand in class and often we were prohibited from taking textbooks home. I learned early on that the decisions made in city halls, capitol buildings, and government agencies have a direct impact—sometimes positive, sometimes negative—on real people. Later in life I’d learn that this was not just true in education but all across society. So, I knew from an early age that I didn’t want to be a bystander. I wanted to have a direct impact on these decisions.

Tell us about your path to working in government.

My entrée to public service was law school.
I wanted to learn how the system worked, gain some expertise, and figure out how I could add value. I started out at a corporate law firm, working long hours learning the law, advising clients, honing my written and oral communication skills, and experiencing first-hand how various laws and regulations were directly impacting my clients. It was incredibly challenging and rewarding work. But one day the phone rang with the proverbial “offer I could not refuse.” After a series of interviews, Senator Dick Durbin of Illinois asked me to join his Senate staff. At the time, he was the second-highest-ranking U.S. Senator, who in 2004 had introduced a Senate candidate by the name of Barack Obama to the Democratic Convention. I was a twenty-something lawyer whose political “experience” basically consisted of watching each one of those conventions from the age of 8—and telling my parents how to vote thereafter. Taking the job was a no-brainer. Adjusting to the 60% pay cut that came with it was much harder. Looking back, I’m so glad that I pursued my passion and chose to follow the path that gave me a chance to have the most impact—even if that meant waiting until later to maximize my earning potential. Money is important and individual circumstances differ, but no amount of money could purchase the experiences, opportunities, or relationships that blossomed during my time on Capitol Hill.

What did you learn from your time on Capitol Hill?

Television pundits, reporters, social media influencers, folks at the barber shop, and others all across America were debating the things I was working on with Sen. Durbin each day. We were working incredibly hard to improve the lives of everyday Americans. And I loved every minute of it! Some days I was working on issues about which I had deep knowledge. Other days, I worked on issues that forced me out of my comfort zone, requiring me to lean on outside experts for insight.
Both were equally valuable to my growth because they helped me build—and trust—my own instincts. I learned how to assess the character, knowledge, and motives of the external stakeholders trying to sway us one way or another on an issue. Having and exercising good judgment, especially where you have limited information or time, is a learned skill.

I also saw the power of personal stories to compel people to action. When writing policy, we would look to the facts and the figures. But when it was time to advocate and persuade, Sen. Durbin encouraged us to find and share the stories of people who would be helped or harmed by a given approach. We did this in 2011, when I helped him build and lead the bipartisan coalition to pass the FDA Food Safety Modernization Act—the most comprehensive reform of our nation’s food safety laws in more than 50 years. It would not have happened without the courageous kids, adults, and seniors who came to Congress to talk about the loved ones they had lost or the physical and emotional consequences they endured as a result of foodborne illnesses. Those compelling voices, combined with a well-organized coalition of bipartisan advocates and a handful of policymakers willing to tackle the problem, got that bill through both houses of Congress and to President Obama’s desk for signature.

What was it like working in the White House?

I could talk about that experience for hours! I’ll never forget the day I received the phone call offering me the job of Associate Counsel to the President in the Office of White House Counsel. I am smiling as I reflect on it now. I was pacing in my bedroom, trying to process some bad news, when the phone rang. In an instant, that call changed my mood, and the course of my career! The opportunity to work for President Obama in the White House was literally a dream come true. My portfolio included oversight and investigations, cybersecurity and privacy, and high-stakes litigation.
The substantive work was tough and invigorating, and offered an opportunity to apply lessons from each of my prior roles. The people on our team were some of the most brilliant and dedicated public servants I’d ever met. Their backgrounds and personal stories were so impressive, but I recall being even more impressed by their humility and work ethic. Working at the White House involved late nights, long weekends, and its fair share of stress. But I was reminded of the privilege I had and the gravity of my responsibility every time I parked my car on The Ellipse, chatted with Secret Service agents as I swiped my badge, or gave a West Wing tour. I’ll never forget the smiles on the faces of the D.C. high school students we hosted in the basement bowling alley one weekend. Some of them came from high schools similar to mine, and I could see in their eyes just how special this moment was for them.

We heard you have a goal to visit every African country. Can you tell us more?

I do! That’s another topic I could speak on at length. I’ve been to 11 countries in Africa so far, and my goal is to spend quality time in all 54. My first trip was to South Africa several years ago. During that trip, we barely scratched the surface of the culture, history, energy, challenges, and opportunities of this beautiful, complicated country. But the depth of connection we felt, the openness of the people, and the overall richness of that initial experience made a lasting impression.

I’ve tried many times—often unsuccessfully—to explain the special connection that I and many other African Americans feel to the continent of Africa. Many Americans may take for granted that they can trace their family origins to places outside the United States. One of the many enduring legacies of slavery is that most African Americans don’t have that direct connection to their family history.
We were the only group of people to arrive on American soil en masse against our will, and it’s often difficult to trace family history even four or five generations. This creates a void that is often uncomfortable to discuss, because it’s a stark reminder of the present-day impact of our nation’s brutal history.

Traveling through Africa is intensely personal. It’s a way to connect with a rich and textured personal history about which so many of us know so little. My visits are, in some ways, a small, personal tribute to that history and those who lived it. I may not know the names of my ancestors or the place of their birth, but I’m reminded regularly that they passed on to us a resilience, faith, and determination that could not be shackled. When they were praying for freedom in the bowels of a slave ship, nursing wounds from a vicious beating, or hoping for a better tomorrow—those prayers and hopes were for my generation and all the others that have followed. I stand on their shoulders and I can only hope that I make them proud.

Traveling through Africa is also just incredibly fun. Every country I visit is packed with new discoveries, incredible adventures, amazing food, unforgettable people, rich culture, and so much more! I’ve walked with gorillas in Rwanda’s Volcanoes National Park, scaled Sahara Desert sand dunes in Merzouga, Morocco, and run my fingertips over the hieroglyphics on Nubian pyramids in Meroe, Sudan. I celebrated Eid al-Fitr, the feast that marks the end of Ramadan, in Dakar, Senegal with a family who met me one day and welcomed me into their home the next. And I’ll never forget standing in the doorway and looking out into the expanse of the Atlantic Ocean from the Point of No Return at Cape Coast Castle in Ghana—the very same doorway through which many enslaved Africans began their horrific journey to the United States 400 years ago.

How have your experiences shaped your work at Google?
As the lead for global infrastructure public policy, I partner with subject matter experts, attorneys, engineers, and other Googlers from all over the world. Ultimately, we strive to help more people benefit from cloud computing. There used to be a huge technology barrier to building a business. With cloud computing, all you need is an internet connection and you can have the same computing power, data analytics, artificial intelligence, and secure infrastructure that powers Google products like Gmail, YouTube, and Google Maps. Google Cloud tools don’t only improve business outcomes, they expand technology access—and thereby opportunity. I’m pleased to help bring our cutting-edge technology to more organizations globally and support policymakers, NGOs, and other organizations that leverage our cloud tools to drive innovation, improve local economies, and enhance digital literacy.

For someone so passionate about public service, moving into the private sector was definitely a change. But I continue to be guided by a personal mission statement of working for individuals, or in the case of Google, a company, with a mission I support and values I share.

Do you have any career advice to share?

Along with following a personal mission statement, I’ve gotten other advice from mentors and colleagues. First, it’s important to embrace the uncomfortable and unprecedented. Three years ago, I was the first hire on the public policy team for Google Cloud. Since then, our team has experienced exponential growth and global distribution. I still remember some of the early challenges, but it’s been an incredible journey and I’m happy I stepped up to the plate. Second, don’t be afraid to advocate for yourself. Suffering in silence or being reluctantly agreeable doesn’t win allies. It only builds internal resentment and deprives your existing allies of the opportunity to help you resolve issues. Third, representation matters.
One of the reasons I do my best every day is because I’m aware that I must excel for myself—and for other people of color who are still terribly underrepresented in our industry. I appreciate Google’s various initiatives to address this issue. I’m committed to doing my part to support those efforts, ensure accountability, and demonstrate through my own work product and work ethic what’s possible when diverse perspectives and people have a seat at the table.
Source: Google Cloud Platform

ExpressRoute Global Reach: Building your own cloud-based global backbone

Connectivity has gone through a fundamental shift as more workloads and services have moved to the cloud. Traditional enterprise Wide Area Networks (WANs) have been fixed in nature, without the ability to dynamically scale to meet modern customer demands. For customers increasingly applying a cloud-first approach as the basis for their application and networking strategy, hybrid cloud enables applications and services to be deployed across premises as a fully connected, seamless architecture. Cross-premises connectivity is moving toward a cloud-first model, with services offered by global hyperscale networks.

Microsoft global network

Microsoft operates one of the largest networks on the globe, spanning over 130,000 miles of terrestrial and subsea fiber cable systems across six continents. Besides Azure, the global network powers all our cloud services, including Bing, Office 365, and Xbox. The network carries more than 30 billion packets per second at any one time and is accessible for peering, private connectivity, and application content delivery through our more than 160 global network points of presence (PoPs). Microsoft continuously adds new network PoPs to optimize the experience for our customers accessing Microsoft services.

The global network is built and operated using intelligent software-defined traffic engineering technologies, that allow Microsoft to dynamically select optimal paths and route around network faults and congestion scenarios in near real-time. The network has multiple redundant paths to ensure maximum uptime and reliability when powering mission-critical workloads for our customers.

ExpressRoute overview

Azure ExpressRoute provides enterprises with a service that bypasses the Internet to securely and privately connect to Azure and to create their own global network. A common scenario is for enterprises to use ExpressRoute to access their Azure virtual networks (VNets) containing their own private IP addresses, making Azure a seamless hybrid extension of their on-premises networks. Another scenario is using ExpressRoute to access public services, such as Azure Storage or Azure SQL, over a private connection. ExpressRoute traffic enters the Microsoft network at our networking points of presence (PoPs), strategically distributed across the world and hosted in carrier-neutral facilities to give customers options when picking a carrier or telco partner.

ExpressRoute provides three different SKUs of ExpressRoute circuits:

ExpressRoute Local: Available at ExpressRoute sites physically close to an Azure region and can be used only to access the local Azure region. Because the traffic stays in the regional network and does not traverse the global network, the ExpressRoute Local traffic has no egress charge.
ExpressRoute Standard: Provides connectivity to any Azure region within the same geopolitical region as the ExpressRoute site; for example, from the London site to the West Europe region.
ExpressRoute Premium: Provides connectivity to any Azure region within the cloud environment. For example, an ExpressRoute Premium circuit at the New Zealand site can access Azure regions in Australia, Europe, North America, or any other geography.

In addition to connecting through one of the more than 200 ExpressRoute partners, enterprises can connect directly to ExpressRoute routers with the ExpressRoute Direct option, using either 10 Gbps or 100 Gbps physical interfaces. With ExpressRoute Direct, enterprises can divide the physical port into multiple ExpressRoute circuits to serve different business units and use cases.
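As a rough sketch, a circuit of a given SKU can be provisioned with the Azure CLI; the resource group, circuit name, provider, and bandwidth below are illustrative, not prescribed values:

```shell
# Create a 1 Gbps Premium ExpressRoute circuit at the London peering
# location (resource group, circuit name, and connectivity provider
# are hypothetical examples).
az network express-route create \
  --resource-group contoso-network-rg \
  --name contoso-er-london \
  --peering-location "London" \
  --bandwidth 1000 \
  --provider "Equinix" \
  --sku-tier Premium \
  --sku-family MeteredData
```

The `--sku-tier` flag selects among the Local, Standard, and Premium SKUs described above.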

Many customers want to take further advantage of their existing architecture and ExpressRoute connections to provide connectivity between their on-premises sites or data centers. Enabling site-to-site connectivity across our global network is now very easy: with ExpressRoute Global Reach, a first in the public cloud, Azure provides a sleek and simple way to take full advantage of our global backbone assets.

ExpressRoute Global Reach

With ExpressRoute Global Reach, we are democratizing connectivity, allowing enterprises to build cloud-based virtual global backbones using ExpressRoute and Microsoft’s global network. ExpressRoute Global Reach enables on-premises-to-on-premises connectivity, fully and privately routed within the Microsoft global backbone. This capability can be a backup to existing network infrastructure, or it can be the primary means of serving enterprise Wide Area Network (WAN) needs. Microsoft takes care of redundancy, the larger global infrastructure investments, and the scale-out requirements, allowing customers to focus on their core mission.

Consider Contoso, a multi-national company headquartered in Dallas, Texas, with global offices in London and Tokyo. These three main locations also serve as major connectivity hubs for branch offices and on-premises datacenters. Utilizing local last-mile carriers, Contoso invests in redundant paths that meet at the ExpressRoute sites in these same locations. After establishing the physical connectivity, Contoso stands up its ExpressRoute connectivity through a local provider or via ExpressRoute Direct and starts advertising routes via the industry-standard Border Gateway Protocol (BGP). Contoso can now connect all these sites together by enabling Global Reach, which takes the on-premises routes and advertises them to the peered circuit in the remote locations, enabling cross-premises connectivity. Contoso has now created a cloud-based wide area network, all within minutes: effectively end-to-end global connectivity without long-haul investments or fixed contracts.
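As an illustrative sketch of the last step, linking the private peerings of two circuits with Global Reach can be done with a single Azure CLI command; the resource group, circuit names, and address prefix below are hypothetical:

```shell
# Link the private peerings of the Dallas and London circuits so that
# on-premises routes learned on one circuit are advertised to the other.
# The /29 prefix is used to establish the connection and must not
# overlap with any other address space in use.
az network express-route peering connection create \
  --resource-group contoso-network-rg \
  --circuit-name contoso-er-dallas \
  --peering-name AzurePrivatePeering \
  --name dallas-to-london \
  --peer-circuit "/subscriptions/<sub-id>/resourceGroups/contoso-network-rg/providers/Microsoft.Network/expressRouteCircuits/contoso-er-london" \
  --address-prefix 192.168.100.0/29
```

Repeating the command for each circuit pair (Dallas–Tokyo, London–Tokyo) gives full any-to-any connectivity among the three hubs.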

Modernizing the network and applying a cloud-first model helps customers scale with their needs while taking full advantage of, and building on, their existing cloud infrastructure. As on-premises sites and branches emerge or change, global connectivity should be as easy as the click of a button. ExpressRoute Global Reach enables companies to provide best-in-class connectivity on one of the most comprehensive software-defined networks on the planet.

ExpressRoute Global Reach is generally available in these locations, including Azure US Government.
Source: Azure

Hitting the Silicon Slopes with a new Salt Lake City region, now open

Today, we’re launching our newest Google Cloud Platform region in Salt Lake City, bringing a third region to the western United States, the sixth nationally, and our global total to 22.

A region for the Silicon Slopes

Utah’s Silicon Slopes area is home to many digitally savvy companies. Now open to Google Cloud customers, the Salt Lake City region (us-west3) provides you with the speed and availability you need to innovate faster, build high-performing applications, and best serve local customers. Additionally, the region gives you added flexibility to distribute your workloads across the western U.S., including our existing cloud regions in Los Angeles and Oregon. The Salt Lake City region offers immediate access to three zones for high-availability workloads, and our standard set of products, including Compute Engine, Kubernetes Engine, Bigtable, Spanner, and BigQuery. Our private backbone connects Salt Lake City to our global network quickly and securely. In addition, you can integrate your on-premises workloads with our new region using Cloud Interconnect. This means that Salt Lake City-based customers can expand globally from their front door, and those based outside the region can easily reach their users in the mountain West. Visit our cloud locations page for a complete list of services available in the Salt Lake City region.

What customers are saying

Industries including healthcare, financial services, and IT are investing in Salt Lake City. Organizations across these verticals have turned to Google Cloud to innovate faster and help solve their most complex challenges. PayPal, a leading technology platform and digital payments company, is migrating key portions of its payments infrastructure to the new region. For more on PayPal’s journey with Google Cloud, read today’s press release.
Overstock, a 20-year-old tech company that provides best-in-class retail customer experiences, has been in the technology space long before enterprise cloud environments became a reality. “Our home-grown infrastructure was built in a pre-cloud world and needed upgrading. In our search for a cloud partner, we had a specific set of criteria in mind given our industry and global customer base. We were able to maintain site-wide performance while updating our legacy systems to a custom public/private cloud hybrid with Google’s systems. With this new region, we expect to achieve higher availability, lower latency, greater business continuity, and improved quality of our service going forward,” said Joel Weight, CTO, Overstock.  Recursion, a digital biology company based in Salt Lake City that focuses on industrializing drug discovery, selected Google Cloud as its primary public cloud provider as it builds a drug discovery platform that has the potential to cut the time to discover and develop a new medicine by a factor of 10. “Google Cloud’s continued investment in the area is a clear indicator that Salt Lake City is a force to be reckoned with as an influential tech hub. With the new cloud region, companies like ours have access to faster, scalable computing infrastructure to better serve their customers. We look forward to the opportunities that are ahead in collaboration with Google,” said Ben Mabey, Chief Technical Officer, Recursion.StorageCraft, a data protection and recovery provider headquartered in Draper, Utah, will deploy Google Cloud to support business growth and future-proof its data protection and recovery product cloud services portfolio. “StorageCraft Cloud Solutions are a central part of our product offering and growth strategy. As our business expands, we will continue to deploy technology that optimizes the performance of our solutions to the benefit of our partners and our customers. 
Collaborating with Google Cloud close to our headquarters will help ensure that we can easily scale the capacity of our offerings with high-performing cloud services. This is a critical requirement of partners and customers who rely on StorageCraft solutions to always keep their data safe, accessible, and optimized,” said Jawaad Tariq, VP of Engineering, StorageCraft.

What’s next

We are excited to welcome you to our new cloud region in Salt Lake City, and we can’t wait to see what you build with our platform. Stay tuned for more region announcements and launches this year, starting with our next U.S. region in Las Vegas. For more information, contact sales to get started with Google Cloud today.
Source: Google Cloud Platform

Azure HBv2 Virtual Machines eclipse 80,000 cores for MPI HPC

HPC-optimized virtual machines now available

Azure HBv2-series Virtual Machines (VMs) are now generally available in the South Central US region. HBv2 VMs will also be available soon in West Europe, East US, West US 2, North Central US, and Japan East.

HBv2 VMs deliver supercomputer-class performance, message passing interface (MPI) scalability, and cost efficiency for a variety of real-world high performance computing (HPC) workloads, such as computational fluid dynamics (CFD), explicit finite element analysis, seismic processing, reservoir modeling, rendering, and weather simulation.

Azure HBv2 VMs are the first in the public cloud to feature 200 gigabit per second HDR InfiniBand from Mellanox. HDR InfiniBand on Azure delivers latencies as low as 1.5 microseconds, more than 200 million messages per second per VM, and advanced in-network computing engines like hardware offload of MPI collectives and adaptive routing for higher performance on the largest scaling HPC workloads. HBv2 VMs use standard Mellanox OFED drivers that support all RDMA verbs and MPI variants.

Each HBv2 VM features 120 AMD EPYC™ 7002-series CPU cores with clock frequencies up to 3.3 GHz, 480 GB of RAM, 480 MB of L3 cache, and no simultaneous multithreading (SMT). HBv2 VMs provide up to 340 GB/sec of memory bandwidth, which is 45-50 percent more than comparable x86 alternatives and three times faster than what most HPC customers have in their datacenters today. An HBv2 virtual machine is capable of up to 4 double-precision teraFLOPS and up to 8 single-precision teraFLOPS.

One- and three-year Reserved Instance, Pay-As-You-Go, and Spot pricing for HBv2 VMs is available now for both Linux and Windows deployments. For information about five-year Reserved Instances, contact your Azure representative.

Disruptive speed for critical weather forecasting

Numerical Weather Prediction (NWP) and simulation has long been one of the most beneficial use cases for HPC. Using NWP techniques, scientists can better understand and predict the behavior of our atmosphere, which in turn drives advances in everything from coordinating airline traffic and shipping goods around the globe to ensuring business continuity and preparing for the most adverse weather. Microsoft recognizes how critical this field is to science and society, which is why Azure shares US hourly weather forecast data produced by the Global Forecast System (GFS) from the National Oceanic and Atmospheric Administration (NOAA) as part of the Azure Open Datasets initiative.

Cormac Garvey, a member of the HPC Azure Global team, has extensive experience supporting weather simulation teams on the world’s most powerful supercomputers. Today, he’s published a guide to running the widely-used Weather Research and Forecasting (WRF) Version 4 simulation suite on HBv2 VMs.

Cormac used a 371M grid point simulation of Hurricane Maria, a Category 5 storm that struck the Caribbean in 2017, with a resolution of 1 kilometer. This model was chosen not only as a rigorous benchmark of HBv2 VMs but also because the fast and accurate simulation of dangerous storms is one of the most vital functions of the meteorology community.

Figure 1: WRF Speedup from 1 to 672 Azure HBv2 VMs.

Nodes (VMs)   Parallel Processes   Average Time (s) per Time Step   Scaling Efficiency   Speedup (VM-based)
1             120                  18.51                            100 percent          1.00
2             240                  8.9                              104 percent          2.08
4             480                  4.37                             106 percent          4.24
8             960                  2.21                             105 percent          8.38
16            1,920                1.16                             100 percent          15.96
32            3,840                0.58                             100 percent          31.91
64            7,680                0.31                             93 percent           59.71
128           15,360               0.131                            110 percent          141.30
256           23,040               0.082                            88 percent           225.73
512           46,080               0.0456                           79 percent           405.92
640           57,600               0.0393                           74 percent           470.99
672           80,640               0.0384                           72 percent           482.03

Figure 2: Scaling and configuration data for WRF on Azure HBv2 VMs.

Note: for some scaling points, optimal performance is achieved with 30 MPI ranks and 4 threads per rank, while in others 90 MPI ranks was optimal. All tests were run with OpenMPI 4.0.2.

Azure HBv2 VMs executed the “Maria” simulation with mostly super-linear scalability up to 128 VMs (15,360 parallel processes). Improvements from scaling continue up to the largest scale tested in this exercise, 672 VMs (80,640 parallel processes), where we measured a 482x speedup over a single VM. At 512 nodes (VMs) we observe a ~2.2x performance increase compared to a leading supercomputer that debuted among the top 20 fastest machines in 2016.
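The speedup and scaling-efficiency columns in Figure 2 follow directly from the per-time-step times; as a quick sketch (input values are taken from the table above):

```python
# Speedup is measured relative to the single-VM baseline; scaling
# efficiency normalizes that speedup by the number of VMs used.
def speedup(t_base: float, t_n: float) -> float:
    return t_base / t_n

def scaling_efficiency(t_base: float, t_n: float,
                       n_vms: int, base_vms: int = 1) -> float:
    return speedup(t_base, t_n) / (n_vms / base_vms)

t_1vm = 18.51     # avg seconds per time step on 1 VM (Figure 2)
t_672vm = 0.0384  # avg seconds per time step on 672 VMs

print(f"{speedup(t_1vm, t_672vm):.2f}x")                       # 482.03x
print(f"{100 * scaling_efficiency(t_1vm, t_672vm, 672):.0f}%") # 72%
```

The same formulas reproduce the STAR-CCM+ numbers in Figure 4 when `base_vms=8`, since that run uses 8 VMs as its baseline.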

The gating factor to higher levels of scaling efficiency? The 371M grid point model, even as one of the largest known WRF models, is simply too small at such extreme levels of parallel processing. This opens the door for leading weather forecasting organizations to leverage Azure to build and operationalize even higher-resolution models that deliver higher numerical accuracy and a more realistic understanding of these complex weather phenomena.

Visit Cormac’s blog post on the Azure Tech Community to learn how to run WRF on our family of H-series Virtual Machines, including HBv2.

Better, safer product design from hyper-realistic CFD

Computational fluid dynamics (CFD) is core to the simulation-driven businesses of many Azure customers. A common request from customers is to “10x” their capabilities while keeping costs as close to constant as possible. Specifically, customers often seek ways to significantly increase the accuracy of their models by simulating them in higher resolution. Given that many customers already solve CFD problems with ~500-1000 parallel processes per job, this is a tall task that implies linear scaling to at least 5,000-10,000 parallel processes. Last year, Azure accomplished one of these objectives when it became the first public cloud to scale a CFD application to more than 10,000 parallel processes. With the launch of HBv2 VMs, Azure’s CFD capabilities are increasing again.

Jon Shelley, also a member of the Azure Global HPC team, worked with Siemens PLM to validate one of its largest CFD simulations ever: a 1-billion-cell model of a sports car, named after the famed 24 Hours of Le Mans race, with a 10x higher-resolution mesh than what Azure tested just last year. Jon has published a guide to running Simcenter STAR-CCM+ at large scale on HBv2 VMs.

Figure 3: Simcenter STAR-CCM+ Scaling Efficiency from 1 to 640 Azure HBv2 VMs

Nodes (VMs)   Parallel Processes   Solver Elapsed Time   Scaling Efficiency   Speedup (VM-based)
8             928                  337.71                100 percent          1.00
16            1,856                164.79                102.5 percent        2.05
32            3,712                82.07                 102.9 percent        4.11
64            7,424                41.02                 102.9 percent        8.23
128           14,848               20.94                 100.8 percent        16.13
256           29,696               12.02                 87.8 percent         28.10
320           37,120               9.57                  88.2 percent         35.29
384           44,544               7.117                 98.9 percent         47.45
512           59,392               6.417                 82.2 percent         52.63
640           57,600               5.03                  83.9 percent         67.14

Figure 4: Scaling and configuration data for STAR-CCM+ on Azure HBv2 VMs

Note: A given scaling point may achieve optimal performance with 90, 112, 116, or 120 parallel processes per VM. Plotted data below shows optimal performance figures. All tests were run with HPC-X MPI ver. 2.50.

Once again, Azure HBv2 executed the challenging problem with linear efficiency to more than 15,000 parallel processes across 128 VMs. From there, high scaling efficiency continued, peaking at nearly 99 percent at more than 44,000 parallel processes. At the largest scale of 640 VMs and 57,600 parallel processes, HBv2 delivered 84 percent scaling efficiency. This is among the largest scaling CFD simulations with Simcenter STAR-CCM+ ever performed, and now can be replicated by Azure customers.

Visit Jon’s blog post on the Azure Tech Community site to learn how to run Simcenter STAR-CCM+ on our family of H-series Virtual Machines, including HBv2.

Extreme HPC I/O meets cost-efficiency

An increasingly common scenario in the cloud is on-demand HPC-grade parallel filesystems. The rationale is straightforward: if a customer needs to perform a large quantity of compute, that customer often needs to move a lot of data into and out of those compute resources. The catch? Simple cost comparisons against traditional on-premises HPC filesystem appliances can be unfavorable, depending on circumstances. With Azure HBv2 VMs, however, NVMeDirect technology can be combined with ultra-low-latency RDMA capabilities to deliver on-demand “burst buffer” parallel filesystems at no additional cost beyond the HBv2 VMs already provisioned for compute purposes.

BeeGFS is one such filesystem, with a rapidly growing user base among both entry-level and extreme-scale users. Its on-demand variant, BeeOND, is even used in production on the novel HPC + AI hybrid supercomputer “Tsubame 3.0.”

Here is a high-level summary of how a sample BeeOND filesystem looks when created across 352 HBv2 VMs, providing 308 terabytes of usable, high-performance namespace.

Figure 5: Overview of example BeeOND filesystem on HBv2 VMs.

Running IOR, a widely used benchmark for parallel filesystems, across 352 HBv2 VMs, BeeOND achieved a peak read performance of 763 gigabytes per second and a peak write performance of 352 gigabytes per second.
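At a high level, the workflow looks like the sketch below. The hostfile, mount points, and IOR parameters are placeholder assumptions, not the benchmarked configuration; a BeeGFS installation and an MPI runtime on the VMs are assumed.

```shell
# Create an on-demand BeeOND filesystem across the VMs listed in hostfile.txt,
# backed by each VM's local NVMe storage (paths are placeholders):
beeond start -n hostfile.txt -d /mnt/nvme -c /mnt/beeond

# Measure aggregate bandwidth with IOR over MPI
# (POSIX API, file-per-process, 1 MiB transfers, 4 GiB per process):
mpirun -hostfile hostfile.txt ior -a POSIX -F -t 1m -b 4g -o /mnt/beeond/ior_test

# Tear down the filesystem and delete its data when the job completes:
beeond stop -n hostfile.txt -L -d
```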

Visit Cormac’s blog post on the Azure Tech Community to learn how to run BeeGFS on RDMA-powered Azure Virtual Machines.

10x-ing the cloud HPC experience

Microsoft Azure is committed to delivering a world-class HPC experience to our customers, with maximum levels of performance, price/performance, and scalability.

“The 2nd Gen AMD EPYC processors provide fantastic core scaling, access to massive memory bandwidth and are the first x86 server processors that support PCIe 4.0; all of these features enable some of the best high-performance computing experiences for the industry,” said Ram Peddibhotla, corporate vice president, Data Center Product Management, AMD. “What Azure has done for HPC in the cloud is amazing; demonstrating that HBv2 VMs and 2nd Gen EPYC processors can deliver supercomputer-class performance, MPI scalability, and cost efficiency for a variety of real-world HPC workloads, while democratizing access to HPC that will help drive the advancement of science and research.”

"200 gigabit HDR InfiniBand delivers high data throughput, extremely low latency, and smart In-Network Computing engines, enabling high performance and scalability for compute and data applications. We are excited to collaborate with Microsoft to bring the InfiniBand advantages into Azure, providing users with leading HPC cloud services,” said Gilad Shainer, Senior Vice President of Marketing at Mellanox Technologies. “By taking advantage of InfiniBand RDMA and its MPI acceleration engines, Azure delivers higher performance compared to other cloud options based on Ethernet. We look forward to continuing to work with Microsoft to introduce future generations and capabilities."

Find out more about High Performance Computing in Azure.
Running WRF v4 on Azure.
Running Siemens Simcenter Star-CCM+ on Azure.
Tuning BeeGFS and BeeOND on Azure for Specific I/O Patterns.
Azure HPC on Github.
Azure HPC CentOS 7.6 and 7.7 images.
Learn about Azure Virtual Machines.
AMD EPYC™ 7002-series.

Source: Azure

Accelerate your cloud strategy with Skytap on Azure

Azure is the best cloud for existing Microsoft workloads, and we want to ensure all of our customers can take full advantage of Azure services. We work hard to understand the needs of those customers running Microsoft workloads on premises, including Windows Server, and help them to navigate a path to the cloud. But not all customers can take advantage of Azure services due to the diversity of their on-premises platforms, the complexity of their environments, and the mission-critical applications running in those environments.

Microsoft creates strategic partnerships with many partners to unlock the power of the cloud for customers relying on traditional on-premises application platforms. Azure currently offers several specialized application platforms and experiences, including Cray, SAP, and NetApp, and we continue to invest in additional options and platforms.

Allowing businesses to innovate with the cloud faster

Today we're pleased to share that we are enabling more customers to start on their journey to the cloud. Skytap has announced the availability of Skytap on Azure. The Skytap on Azure service simplifies cloud migration for traditional applications running on IBM Power while minimizing disruption to the business. Skytap has more than a decade of experience working with customers and offering extensible application environments that are compatible with on-premises data centers; Skytap’s environments simplify migration and provide self-service access to develop, deploy, and accelerate innovation for complex applications.

Brad Schick, Skytap CEO: “Today, we are thrilled to make the service generally available. Enterprises and ISVs can now move their traditional applications from aging data centers and use all the benefits of Azure to innovate faster.”

Customers can learn more about Skytap and the Skytap on Azure service here.

Cloud migration remains a crucial component for any organization in the transformation of their business, and Microsoft continues to focus on how best to support customers in that journey. We often hear about the importance of enabling the easy movement of existing applications running on traditional on-premises platforms to the cloud and the desire to have those platforms be available on Azure.

Migrating applications running on IBM Power to the cloud is often seen as a difficult move involving re-platforming. For many businesses, these environments run traditional, and frequently mission-critical, applications. The idea of re-architecting or re-platforming these applications to be cloud native can be daunting. With Skytap on Azure, customers gain the ability to run native Power workloads, including AIX, IBM i, and Linux, on Azure. The Skytap service allows customers to unlock the benefits of the cloud faster and begin innovating across applications sooner by providing the ability to take advantage of, and integrate with, the breadth of Azure native services. All of this is possible with minimal changes to the way existing IBM Power applications are managed on premises.

An application running on IBM Power and x86 in Skytap on Azure.

With Skytap on Azure, Microsoft brings the unique capabilities of IBM POWER9 servers to Azure data centers, integrating directly with the Azure network and enabling Skytap to offer its platform with minimal connectivity latency to Azure native services such as Blob Storage, Azure NetApp Files, and Azure Virtual Machines.

Skytap on Azure is now available in the East US Azure region. Given the high level of interest we have seen already, we intend to expand availability to additional regions across Europe, the United States, and Asia Pacific. Stay tuned for more details on specific regional rollout availability.

Try Skytap on Azure today, available through the Azure Marketplace. For more information on the public availability of Skytap on Azure, see the full Skytap press release. Skytap on Azure is a Skytap first-party service delivered on Microsoft Azure’s global cloud infrastructure.
Source: Azure