Docker Enterprise Container Cloud Helps Kubernetes Developers Ship Code Faster on Public and Private Clouds

Mirantis reduces cloud complexity with real choice, simplicity, and security
MIRANTIS LAUNCHPAD VIRTUAL EVENT (Campbell, CA) — September 16, 2020 — Announced at the inaugural Mirantis Launchpad 2020 virtual conference, Docker Enterprise Container Cloud offers enterprises unprecedented speed, enabling them to ship code faster on public clouds and on-premises infrastructure. It simplifies Kubernetes with one consistent cloud experience for developers and operators across public and private clouds, with complete app and devops portability.
“Docker Enterprise Container Cloud and Lens will enable businesses to streamline delivery of hundreds of daily deployments across thousands of apps, overcoming the complexity of Kubernetes development at enterprise scale,” said Mirantis customer Don Bauer, Docker Captain and VP Technology Services / DevOps Manager.
Docker Enterprise Container Cloud is available free of charge for up to 3 clusters totaling 15 nodes, without any limitations in functionality.
The launch follows the introduction of Docker Enterprise 3.1 and new 24×7 and managed operations support offerings, which launched in May 2020. The release builds on the Mirantis Kubernetes vision to deliver Your Cloud Everywhere and the recent announcement of Lens, the world’s most popular Kubernetes IDE. Lens greatly simplifies app development by consolidating more than a dozen Kubernetes tools into a single integrated development environment that provides dev and ops teams with situational awareness in their context. 
With Lens and Docker Enterprise Container Cloud, Mirantis empowers a new breed of Kubernetes developers by removing infrastructure and operations complexity through automated full-stack lifecycle management and continuous updates, and providing tools for insights and management that support cloud-native software development.
“Docker Enterprise Container Cloud breaks the mold with real choice at every layer of the stack,” said Adrian Ionel, CEO and co-founder, Mirantis. “Unlike lock-in solutions like IBM/Red Hat and VMware that force you to deploy their rigid stack, Container Cloud empowers you to deploy your own multi cloud everywhere, unlocking speed with freedom of choice, simplicity, and industry-leading security.”
New capabilities in Docker Enterprise Container Cloud include:

Multi-Cloud: Public, Private, and Bare Metal
Multi-Cluster Management
Self-Service Clusters On-Demand
Automated Full-Stack Lifecycle Management with Continuous Updates
Centralized Insights and Management

Container Cloud is the only independent container platform that provides a choice of operating system and virtualization software. Developers can benefit from a frictionless “managed Kubernetes” experience of self-service cluster provisioning across any infrastructure, while enterprise IT can ensure compliance with regulations and corporate policies. Developers can easily deploy and manage clusters via API, CLI, or UI, and can approve automated zero-downtime updates to their clusters as they become available from Mirantis.
Docker Enterprise Container Cloud enables companies to ship code faster with these key capabilities:
Cloud Choice – Provides choice at every level of the stack, from the virtualization layer to the OS to Kubernetes, so that organizations can build on open standards and use their favorite tools and frameworks to ship code faster that runs on any private, public, or hybrid cloud.
Cloud Simplicity – Simplifies Kubernetes for developers and operators with one cohesive cloud experience, built-in security, a single pane of glass, and full lifecycle management.
Cloud Speed – Increases developer velocity with a modern software supply chain for getting secured code to production faster and continuously. Container Cloud provides developers self-service access to Kubernetes clusters, complete app and devops portability, and built-in, industry-leading security.
Cloud Secured – Enhances the security of your Kubernetes clusters and modern apps through a secure supply chain that integrates security into every stage of the application lifecycle. Container Cloud provides the industry’s most secure container engine with FIPS 140-2 validation along with built-in image scanning and signing.
Cloud Scale – Enables organizations to achieve massive scale from the desktop to the data center, delivering consistent clusters everywhere. Operators benefit from complete observability and management across a fleet of automatically updated Kubernetes clusters.
Pricing and availability
Docker Enterprise Container Cloud is available for download free of charge, without any limitations in functionality, for up to 3 clusters totaling 15 nodes. For larger-scale deployments requiring enterprise support, Mirantis provides annual subscriptions for LabCare 8×5 support, ProdCare 24×7 support, and OpsCare 24×7 managed operations.
To learn more about Docker Enterprise Container Cloud, visit: https://www.mirantis.com/software/docker/docker-enterprise-container-cloud/.
About Mirantis
Mirantis helps organizations ship code faster on public and private clouds. The company provides a public cloud experience on any infrastructure, from the data center to the edge. With Lens and Docker Enterprise Container Cloud, Mirantis empowers a new breed of Kubernetes developers by removing infrastructure and operations complexity and providing one cohesive cloud experience for complete app and devops portability, a single pane of glass, and automated full-stack lifecycle management with continuous updates.
Mirantis serves many of the world’s leading enterprises, including Adobe, DocuSign, Liberty Mutual, Nationwide Insurance, PayPal, Reliance Jio, Splunk, and STC. Learn more at www.mirantis.com.
###
Media Contact
Joseph Eckert for Mirantis
jeckert@eckertcomms.com
Source: Mirantis

Deutsche Börse Group continues its journey to the cloud

The word “transformation” brings many things to mind, like innovation, agility, and change. Consistency and stability are probably not as high on the list of synonyms, but for regulated industries undergoing digital transformation initiatives, those characteristics are just as critical—in fact, they're essential for digital transformation to succeed.

Deutsche Börse Group, an international financial exchange organization offering products and services that cover the entire value chain, plays a role in contributing to the soundness of the global financial system. It is a prime example of how a large company in a highly regulated industry can achieve a delicate balance between innovation and stability. The company sees cloud as an important enabler, supporting its strategic focus on new technologies and helping to keep it at the forefront of technology while maintaining its own highly secure, resilient, trusted infrastructure. Under the leadership of CIO and COO Christoph Böhm, Deutsche Börse started its cloud transformation journey more than three years ago, bringing on strategic partners like Google Cloud to advise and support it during the process. Midway through what has been a tumultuous year for organizations and people around the world, Deutsche Börse has made significant progress in its growth strategy, most recently adopting Google Cloud VMware Engine to extend and migrate its on-premises workloads to Google Cloud.

The best of all worlds
A central part of Deutsche Börse's growth strategy is to maintain an agile, sophisticated IT infrastructure that spans a wide range of on-prem and cloud apps, as well as multiple cloud platforms. This multi-cloud, hybrid environment helps Deutsche Börse keep the stability, resilience, and control required within a highly regulated environment—without sacrificing the scale, speed, and agility needed to stay ahead of the market and serve evolving customer needs.

Deutsche Börse has a long list of on-prem VMware applications across its portfolio that have been customized over the years for the company's unique processes. Many of these applications, especially on the post-trading side of the business, could benefit from the cloud. Using Google Cloud VMware Engine, Deutsche Börse is now migrating these apps to the cloud without the cost or complexity of refactoring applications—in most cases, with just a few mouse clicks.

The ability to run and manage workloads consistently with its on-prem environments reduces the team's operational burden and enables staff to continue using existing skills, tools, policies, and processes that comply with the company's stringent regulatory requirements. The exchange's hybrid, multi-cloud approach also helps with choice and portability to avoid vendor lock-in and gives the IT team another option for disaster recovery.

Deutsche Börse's developers are also benefiting from the company's move to the cloud. According to Böhm, using cloud services has significantly sped up development and testing of new customer-facing services. Previously, the team had limited testing capabilities; it can now conduct thousands of tests across three or four environments in the same amount of time, allowing for earlier identification of errors in the development process.
By moving VMware workloads to Google Cloud, Deutsche Börse can set up a new private cloud instance in minutes, while maintaining existing policies and tools, including existing cloud constructs such as networking interconnects—all while increasing business agility.

In general, the use of cloud services has kept teams fully productive, especially during the peak of the global COVID-19 pandemic, when 95 percent of teams were working remotely. Böhm attributes many of these benefits to scaling in the cloud. He also highlights Google's AI capabilities and comprehensive machine learning framework. Hyperscale machine-learning services will enable Deutsche Börse to train data science models in a couple of hours rather than weeks—a huge improvement that supports Deutsche Börse's ambitions to further automate internal processes and deliver new data-driven services to customers faster than before, in a secure way.

For the company, ensuring the security of its data has always been a top priority: a robust data privacy strategy is applied to all public cloud activities, enabling two layers of encryption. From the very beginning, the strict privacy and data security measures that Google Cloud offers and applies were a critical factor in choosing Google Cloud.

Like many companies on their paths to the cloud, Deutsche Börse is not finished with its journey, proving that cloud transformation (or any transformation, for that matter) doesn't happen overnight. However, the cloud's productivity benefits and speed of innovation have already prepared the company for the future, without placing too many demands on the present.

Learn more here about how Deutsche Börse Group is adopting Google Cloud to lay the foundations for scalability, resilience, and compliance.
Source: Google Cloud Platform

What you can learn in our Q3 2020 Google Cloud Security Talks

Cloud deployments and technologies have become an even more central part of organizations' security programs in today's new normal. As you continue to evolve your strategies and operations, it's vital to understand the resources at your disposal to protect your users, applications, and data. To help you navigate the latest thinking in cloud security, we hope you'll join us for the latest installment of our Google Cloud Security Talks, a live online event on September 23rd. We'll share expert insights into our security ecosystem and cover the following topics:

Sunil Potti and Rob Sadowski will open the digital event with our latest security announcements.
Ansh Patnaik and Svetla Yankova will do a deep dive into threat detection and investigation with Chronicle, followed by a panel discussion with Matthew Svensson from BetterCloud, Ryan Ogden from Groupon, and Sean Doyle from Paradigm Quest.
Anoosh Saboori and Anton Chuvakin will talk about our new Certificate Authority Service (CAS), which automates the management and deployment of private CAs while meeting the needs of modern developers and applications.
Nelly Porter will discuss our two new products in the area of Confidential Computing: Confidential VMs and Confidential GKE Nodes.
Cy Khormaee and Emil Kiner will look at security solutions such as Cloud Armor and reCAPTCHA Enterprise, which can be deployed to protect online applications, preventing denial of service and stopping bots, fraud, and malware.
Brad Meador and Vidya Nagarajan Raman, G Suite security experts, will walk you through our latest security best practices for Meet, Chat, and Gmail.
Finally, Sam Lugani will host the Google Cloud Security Showcase, a special segment focused on security use cases. We'll pick a few security problems and show how we've helped customers solve them using the tools and products that Google Cloud provides.

We look forward to sharing our latest security insights and solutions with you. Sign up now to reserve your virtual seat.
Source: Google Cloud Platform

Forrester names Google Cloud a Leader in Notebook-based Predictive Analytics and Machine Learning

Forrester Research has named Google Cloud a Leader in its latest report on notebook-based predictive analytics and machine learning solutions. Forrester's analysis and recognition give customers the confidence they need as they make important platform choices that will have lasting business impact. This recognition is based on Forrester's evaluation of Google Cloud's AI Platform, which includes Notebooks, Explainable AI, and AutoML products, among a suite of predictive analytics and machine learning services used by data scientists, developers, and machine learning engineers. In the report, Forrester evaluated 12 notebook-based predictive analytics and machine learning solutions against a set of pre-defined criteria. In addition to being named a Leader, Google Cloud received the highest possible score in eleven evaluation criteria, including explainability, security, open source, and partners.

Google offers one-stop AI shopping on Google Cloud Platform
Figure: The Forrester Wave™: Notebook-based Predictive Analytics and Machine Learning Solutions, Q3 2020

Our AI Platform supports the entire ML lifecycle, from data ingestion and preparation all the way up to model deployment, monitoring, and management. We recently announced new MLOps services that unify ML systems development and operations, removing many of the challenges of scaling production ML workflows. AI Platform Notebooks is a managed JupyterLab notebook service, with enterprise security features like CMEK, VPC-SC, shared VPC, and private IP controls built in. It also comes with deep integration with BigQuery (our serverless, multi-cloud data warehouse), Dataproc (managed Hadoop, Spark, and Presto), and Google Cloud Storage (GCS); a short query sketch appears at the end of this post. And with Dataproc Hub, you can use Notebooks to work with Spark and your favorite ML and data science libraries. This streamlines cost management for data science teams and reduces the overhead of managing different environments for IT administrators.

AI for all interests and levels of expertise
At Google Cloud, we think that AI can meaningfully improve people's lives and that the biggest impact will come when everyone can access it. Between Kaggle Notebooks for enthusiasts, Colab for researchers and students, and AI Platform Notebooks for enterprise users, we are working hard to make sure that all users can build and use AI. Be it domain users or seasoned data scientists, everyone has a part to play in mapping business objectives against key outcomes achieved through AI. We recently announced that AutoML technology will be integrated as a workflow within AI Platform, supporting structured and unstructured data problems. With this integration, AI Platform will provide a unified workflow with no-code and code-based options for model builders of all types and experience levels.

Our vision to empower every enterprise to transform its business with AI is inspired by Google's mission of universal access to information and shows up in our Responsible AI practice and Explainable AI tools and services. Apart from providing best-in-class tools for model understanding and evaluation, we are steering a path with best practices, design guides, and education that advocates for AI governance in organizations. Regardless of your experience and expertise, our platform is built to help you achieve your business objectives with AI.

To learn more about how to make AI work for you, download a complimentary copy of The Forrester Wave™: Notebook-based Predictive Analytics and Machine Learning Solutions, Q3 2020 report.
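For readers who want to try the BigQuery integration mentioned above, a query can be run from an AI Platform Notebooks instance in a few lines of Python. This is a minimal sketch assuming the google-cloud-bigquery client library (preinstalled on most notebook images) and a service account with BigQuery read access; the public dataset and query are illustrative only.

```python
# Minimal sketch: querying BigQuery from an AI Platform Notebooks instance.
# Assumes google-cloud-bigquery is installed and the notebook's default
# service account can read BigQuery public datasets.
from google.cloud import bigquery

client = bigquery.Client()  # picks up the notebook VM's default credentials

query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 10
"""

# Run the query and pull results into a pandas DataFrame for analysis.
df = client.query(query).to_dataframe()
print(df)
```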
Source: Google Cloud Platform

Making it easier to manage Windows Server VMs

Google Cloud provides a first-class experience for migrating and modernizing Windows workloads. Organizations choose Google Cloud for the reliability, performance, and cost savings of the underlying infrastructure, as well as for features, tooling, and guidance that help them modernize. Companies like Geotab rely on Google Cloud to keep ahead of change and increased demand, and to reduce licensing costs by modernizing from a proprietary stack to open source. We're also incredibly proud to have earned the recognition of analyst firms such as IDC for helping companies migrate and modernize their Windows-based workloads.

Today we're announcing a number of new features that will make running Windows Server workloads in Google Cloud easier: boot-screen diagnostics, auto-upgrade for Windows Server, new diagnostics tooling, and improved license reporting. Read on to learn about these new features.

Boot-screen diagnostics (beta)
Windows VMs often rely on a virtual display device to report certain errors, and when connecting by Remote Desktop Protocol (RDP) doesn't work, accessing the virtual display screen becomes a necessity. Building on the Virtual Displays feature we launched last year, we're now enabling you to more easily troubleshoot Windows VMs by capturing boot-screen screenshots without having to RDP into the machine (a scripting sketch appears at the end of this feature walkthrough). Capturing a screenshot from a VM can help diagnose issues if VMs are not otherwise accessible—for example, during the boot process, or when trying to start a VM with a corrupted disk image. For those of us who miss seeing the blue screen, it is now viewable directly from within the Cloud Console :-)

Auto-upgrade for Windows Server 2008 (beta)
Many customers are still using Windows Server 2008 even after end-of-service was reached earlier this year. We want to make it easier for you to upgrade your instances by performing an in-place auto-upgrade of Windows Server 2008 using a single gcloud command. This command backs up your current VM, performs the upgrade, and handles rollbacks automatically if something fails. You can quickly test whether a Windows OS in-place upgrade will work and then automate upgrades at scale.

Collect diagnostic information (beta)
When you try to troubleshoot your Windows VMs or reach out to Google Cloud Support, it can be hard to gather all the diagnostic information needed to quickly and effectively troubleshoot a problem. A new diagnostic tool for Windows VMs helps collect the necessary information so you can either troubleshoot the issue yourself or provide it to Support.

License reporting tooling
If you bring your own Windows licenses to Google Cloud, calculating license usage for Microsoft Enterprise Agreement true-ups and audits can be an onerous task. We have often seen customer procurement, engineering, and operations teams work for months to generate the data to satisfy complex licensing reporting needs. Further, these complex reports often need to be analyzed to identify high watermarks for physical server usage or to understand license usage at any given time. For those of you running on sole-tenant nodes, the new Windows license reporting tool automates this process so you can quickly and comprehensively generate reports to quantify your physical server usage. The tool, which runs in a Windows environment, ingests log data and outputs graphical results and reports.
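As referenced in the boot-screen diagnostics section above, the capture can also be scripted. This is a minimal sketch assuming the beta gcloud command surface at launch (gcloud beta compute instances get-screenshot) and a VM with Virtual Displays enabled; the command name and flags may differ in your Cloud SDK version.

```python
# Hedged sketch: scripting a boot-screen capture by shelling out to gcloud.
# Assumes `gcloud beta compute instances get-screenshot` is available and
# the instance has a virtual display enabled.
import subprocess

def capture_boot_screen(instance: str, zone: str, out_file: str) -> None:
    """Save a PNG screenshot of the VM's virtual display to out_file."""
    subprocess.run(
        [
            "gcloud", "beta", "compute", "instances", "get-screenshot",
            instance,
            f"--zone={zone}",
            f"--destination={out_file}",
        ],
        check=True,  # raise if gcloud reports an error
    )

# Example (placeholder names): capture the boot screen of a stuck VM.
capture_boot_screen("my-windows-vm", "us-central1-a", "boot-screen.png")
```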
Envision easier Windows management
Together, we hope these new features will make it easier for you to troubleshoot problems, upgrade, and manage the license requirements of Windows workloads running on Google Cloud. And we're not done yet—stay tuned as we work to make Google Cloud the best platform on which to migrate, optimize, and modernize your Windows workloads, all with enterprise-class, Microsoft-backed support. Click here to learn more about running Windows on Google Cloud.
Source: Google Cloud Platform

Export data from Cloud SQL without performance overhead

While there are a variety of reasons to export data out of your databases—such as maintaining backups, meeting regulatory data retention policies, or feeding downstream analytics—exports can put undue strain on your production systems, making them challenging to schedule and manage. To eliminate that resource strain, we've launched a new feature for Cloud SQL: serverless exports. Serverless exports enables you to export data from your MySQL and PostgreSQL database instances without any impact on performance or risk to your production workloads.

Cloud SQL exports, which offer portable data formats (SQL, CSV), can be triggered anytime and are written to Cloud Storage buckets that you control. If you need to meet regulatory requirements around data retention, you can easily send exports to buckets with Bucket Lock enabled. Bucket Lock allows you to configure a data retention policy for a Cloud Storage bucket that governs how long objects in the bucket must be retained. It also allows you to lock the data retention policy, permanently preventing it from being reduced or removed.

As another example, you can export data to CSV based on a custom query, then import the data directly into BigQuery for analytics. And if this is for regular reporting, you can schedule a recurring import with the Data Transfer Service or Cloud Scheduler.

Using the new serverless export feature ensures these exports won't bog down your Cloud SQL database instance, so it can continue to run predictably and reliably (a short API sketch appears at the end of this post). And until February 2021, you can use serverless exports at no charge.

What's next for Cloud SQL
We're excited to see what you build with the new serverless exports feature. Have more ideas? Let us know what other features and capabilities you need with our Issue Tracker and by joining the Cloud SQL discussion group. We're glad you're along for the ride, and we look forward to your feedback!
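For reference, here is what triggering a serverless export might look like through the Cloud SQL Admin API, as mentioned above. This is a hedged sketch: the instance, database, and bucket names are placeholders, and you should confirm the offload field against the current API reference before relying on it.

```python
# Hedged sketch: a Cloud SQL export offloaded to a temporary instance
# ("serverless export") via the Cloud SQL Admin API. Assumes the
# google-api-python-client and google-auth packages and appropriate IAM
# permissions; all resource names below are placeholders.
import google.auth
from googleapiclient import discovery

credentials, project = google.auth.default()
service = discovery.build("sqladmin", "v1beta4", credentials=credentials)

body = {
    "exportContext": {
        "fileType": "SQL",
        "uri": "gs://my-bucket/exports/backup.sql.gz",  # destination object
        "databases": ["my_database"],
        "offload": True,  # run the export off the primary instance
    }
}

request = service.instances().export(
    project=project, instance="my-instance", body=body
)
operation = request.execute()  # returns a long-running operation to poll
print(operation["name"])
```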
Source: Google Cloud Platform

Better outcomes with AI: Frost & Sullivan names Microsoft the leading AI platform for healthcare IT

In early 2020, Frost & Sullivan recognized Microsoft as the “undisputed leader” in global Artificial Intelligence (AI) platforms for the Healthcare IT (HCIT) sector on the Frost Radar™. In a field of more than 200 global industry participants, Frost & Sullivan independently plotted the top 20 companies across various parameters indicative of growth and innovation; the full analysis is available for consumption here.

According to Frost & Sullivan, the global AI HCIT market is on a rapid growth trajectory, with sales of AI-enabled HCIT products expected to generate more than $34.83 billion globally by 2025. Government agencies (including public payers) will contribute about 50.7 percent of that revenue, followed by hospital providers (36.3 percent) and physician practices (13 percent). Clinical AI solutions will drive 40 percent of the market revenue, with financial AI solutions contributing the same and the remaining 20 percent coming from sales of operational AI solutions. Globally, Microsoft earned the top spot because of its industry-leading effort to incorporate next-generation AI infrastructure to drive precision medicine workflows, aid population health analytics, propel evidence-based clinical research, and expedite drug and treatment discovery.

Figure 1: The Frost Radar, "Global AI for Healthcare IT Market", 2020

We’re seeing providers deploy chatbots in their virtual portals to extend 24/7, personalized care to patients, helping them triage a larger volume of inquiries and even extend care services to previously inaccessible remote areas. With the power of predictive analytics, care teams can predict patient volumes and deliver preventative care, enabling timely escalations of care and preventing unnecessary readmissions. AI has provided tools for scientists at the forefront of precision medicine, accelerating drug discovery, while aiding public health officials with modeling and predicting the progression of disease. In BioPharma and MedTech, AI is being used to provide real-time insights into equipment use for manufacturing R&D departments, to dispatch field technicians to service costly equipment via predictive maintenance, and to enable healthcare customers to track inventory and medication across supply chains with greater transparency and agility.

The report cites numerous recent innovations from Microsoft, including the Microsoft Cloud for Healthcare offering, announced in 2020. The Microsoft Cloud for Healthcare brings together trusted and integrated capabilities for customers and partners that enrich patient engagement and connect health teams, helping improve collaboration, decision-making, and operational efficiencies. It makes it faster and easier to provide more efficient care and helps ensure end-to-end security, compliance, and accessibility of health data.

At Microsoft, we are focused on trust and on empowering our healthcare customers—never monetizing customer or patient data. The Microsoft Cloud for Healthcare also offers an infrastructure built on industry-leading scale, with over $15 billion invested in cloud infrastructure and over 1 million physical servers across over 60 global regions. Furthermore, Microsoft has the largest partner ecosystem in the market, with global partners equipped to work with health organizations of all sizes.

Healthcare AI at Microsoft

Microsoft’s growing portfolio of healthcare AI offerings also includes specific services such as:

The Microsoft Health Bot enables health organizations to build and deploy AI-powered, compliant conversational healthcare experiences. With built-in medical intelligence, natural language capabilities, and extensibility tools, the Health Bot lets organizations create personalized and trusted conversational experiences across digital health portals. Customers such as Premera Blue Cross have leveraged the Microsoft Health Bot to create their own chatbot, Premera Scout, to help customers quickly obtain information on claims, benefits, and other services offered by Premera across their digital portals. In another instance, Walgreens Boots Alliance (WBA) incorporated the Microsoft Healthcare Bot to add a COVID-19 risk assessment capability to their website, helping customers quickly find answers to common questions.
Text Analytics for Health is a feature of Azure Cognitive Services that helps health organizations process and extract insights from unstructured medical data (such as doctors' notes, medical publications, electronic health records, clinical trial protocols, and more). This enables researchers, analysts, and medical professionals to unlock scenarios based on entities in health data, such as matching patients to clinical trials and extracting insights from large bodies of clinical literature, as was the case when University College London (UCL) leveraged Text Analytics for Health to build a system that identifies relevant research for reviews as and when it is published (see the sketch after this list).
Azure Cognitive Services offers easy-to-deploy AI tools for speech recognition, computer vision, and language understanding. Nuance, a leading provider of AI-powered clinical documentation and decision-making support for physicians, leveraged the Azure Cognitive Services platform to develop its Dragon Medical One platform, one of the leading services for Ambient Clinical Intelligence. The platform allows doctors to enter and search for relevant patient information in electronic health records using dictation. This enables physicians to reduce time spent on administrative tasks and redirect more time toward interacting with the patient. The platform can also mine a patient's medical history along with new symptoms reported at an appointment to provide recommendations of potential diagnoses for the doctor to consider.
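As noted in the Text Analytics for Health item above, extracting medical entities takes only a few lines with the azure-ai-textanalytics Python SDK (v5.1 or later). This is a minimal sketch under those assumptions; the endpoint, key, and clinical note are placeholders.

```python
# Hedged sketch: healthcare entity extraction with Text Analytics for
# Health. Assumes azure-ai-textanalytics >= 5.1; endpoint and key are
# placeholders for your Cognitive Services resource.
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

documents = ["Patient reports shortness of breath; prescribed 100mg ibuprofen."]
poller = client.begin_analyze_healthcare_entities(documents)

for doc in poller.result():
    if not doc.is_error:
        for entity in doc.entities:
            # e.g. "ibuprofen" -> MedicationName, "100mg" -> Dosage
            print(entity.text, entity.category, entity.confidence_score)
```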

Partners empowering healthcare AI

We’re also proud to see many of our healthcare partners recognized in the report, with whom we have partnered to design and build our portfolio of AI services and who, in turn, leverage our platforms to infuse AI in their solutions. These include, but are not limited to:

Nuance is partnering with Microsoft to deliver ambient clinical intelligence (ACI), paving the way for the exam room of the future. Take a look at our partner spotlight, Microsoft and Nuance partner to deliver ambient clinical intelligence.
GE Healthcare is developing advanced solutions for secure imaging and data exchange built on Azure.
Optum, the Health Services platform of UnitedHealth Group, joined forces with Microsoft to launch ProtectWell, a return-to-workplace protocol that enables employers to bring employees back to work in a safe environment, leveraging clinical and data analytics capabilities as well as the Microsoft Healthcare Bot service for AI-assisted COVID-19 triaging. Take a look at our partner spotlight, UnitedHealth Group and Microsoft join forces to launch ProtectWell.
Allscripts extended its long-term strategic alliance with Microsoft to harness the power of Microsoft's platform to develop Sunrise, an integrated EHR that provides a clinician-friendly, evidence-based platform with integrated analytics for delivering better health outcomes in hospitals, connecting all aspects of care—including acute, surgical, pharmacy, and laboratory services—to revenue and patient administration systems.
Philips is empowering providers through image-guided, minimally invasive therapies, bringing live imaging and other sources of data into 3D holographic environments controlled by physicians. Take a look at our partner spotlight, Microsoft HoloLens 2: Partner Spotlight with Philips.

We’re honored to have been recognized as a leader in the healthcare space and are proud to work with a growing ecosystem of partners and customers that are building the next generation of healthcare solutions. Together, we’re extending the reach of healthcare services, unlocking new clinical insights, and empowering care teams to drive better outcomes for the communities they serve. Innovation is a journey without end, and we’re committed to building the trusted tools and platforms to help healthcare organizations be future-ready and invent with purpose.

Next steps with Microsoft AI

To learn more about Microsoft AI offerings, explore the following resources:

Microsoft AI for Health page.
Learn more about the Azure AI platform.
Explore even more Azure for Health offerings, from IoT to Mixed Reality.
Read the latest updates on the Microsoft Healthcare blog.
Learn more about the Microsoft Cloud for Healthcare.

Source: Azure

Preparing for what’s next: Building landing zones for successful cloud migrations

As businesses look to the cloud to ensure business resiliency and to spur innovation, we continue to see customer migrations to Azure accelerate. Increasingly, we've heard from business leaders preparing to migrate that they want to learn from our best practices and get general help thinking about migration, so we started a blog series to share those practices even more broadly. In our kick-off blog for this series, we shared that landing zones are a key component to anticipating and mitigating complexities as part of your migration. In this blog, we will cover what landing zones are and why getting cloud destinations ready in advance of the physical migration generates significant benefit in the long term.

IT and business leaders often ask us how they can both enable their teams to innovate with agility in Azure and remain compliant within organizational governance, security, and efficiency guardrails. Getting this balance right is critical to cloud migration success. One of the most important questions in getting it right is how to set up the destination Azure environments we call landing zones.

At Microsoft, we believe that cloud agility isn’t at odds with setting up the right foundation for migration initiatives—in fact, taking time to do the latter sets organizations up for a faster path to success. Our customers and partners have been using Azure landing zones—a set of architecture guidelines, reference implementations, and code samples based on proven practices—to prepare cloud environments.

“With everybody’s limited budget, especially during the pandemic, the support from both a financial perspective and with FastTrack for Azure backing, I very quickly realized that we could deliver in a quicker timeframe than initially planned. The landing zone was a great initiative because it focused everybody in terms of: what are the deliverables? What are we looking to achieve? What technologies are we going to use to do that? Microsoft linked in seamlessly with SoftwareOne, and as a customer of both of these companies, it was reassuring for us.” – Gavin Scott, Head of IT, Actavo

What are the key decisions to be made in setting up your cloud destination?

At the onset of migration initiatives, we see customers and partners focus on the key considerations below to define their ideal operating environment in Azure. These considerations are abstracted as operating models, with “central operations” and “enterprise operations” as two options at different ends of the spectrum.

Old roles versus new opportunities: Migrating to the cloud can modernize many workloads as well as how IT operates. Azure can reduce the volume of repetitive maintenance tasks, unlocking opportunities to apply IT staff expertise in new ways. At the same time, Azure does offer options to preserve practices, controls, and structures that are proven to work. A key decision for leaders is where to land on this spectrum.
Change management versus democratized actions: With greater access to self-service deployment and flexibility for decisions, change management and change control can look different in the cloud. While workload teams typically prefer the agility to quickly make changes to workloads and environments, cloud centers of excellence seek to ensure changes are safe, compliant, and operationally efficient. The key decision for leaders here is how much of their cloud governance requirements should be automated.
Standardized versus specialized operations: Creating multiple and connected levels of operational controls in Azure to accommodate specialized needs of various workloads is absolutely possible. Central IT, for instance, can ensure basic operational standards for all workloads, while empowering workload teams to set additional guardrails. The key question for leaders is which day-to-day operations will be performed by central IT teams and which by workload teams.
As-is versus re-imagined architecture: The first inclination for most teams might be to simply replicate on-premises designs and architectures “as-is” in Azure. When a low-complexity and narrowly scoped estate is moving to cloud, that might be the optimal approach. In time, as migration scopes grow—spanning more applications, databases, and infrastructure components—achieving higher efficiency in Azure becomes even more attractive. A key decision for leaders is which path to take during iterative migration initiatives.

Azure landing zones appropriately guide customers and partners in setting up the desired operating model in Azure. Landing zones ensure that roles, change management, governance, and operations are all considered at the beginning of the journey to achieve the desired balance of agility and governance.

Why are Azure landing zones valuable in implementing your design decisions in the cloud?

Examples from two of our customers on each end of the operating model spectrum illustrate how landing zones guide destination decisions, as well as the implementation path.

The first example is a US-based large manufacturing and distribution company, with operations spanning four continents. This customer aimed to establish “central operations” while retiring a series of data centers that would have otherwise required expensive hardware upgrades. One of the complicating (though not uncommon) factors was that each regional subsidiary had distinct governance, security, and operations requirements.

To accelerate this complex migration, with the help of our partners, we started by migrating a single subsidiary, enabling the customer to learn and iterate towards the desired centralized operating model. During the first four weeks, the customer migrated hundreds of low-risk VMs to an Azure landing zone. Within eight weeks, the customer established the final operating model, migrating mission-critical and sensitive data workloads for their first subsidiary. Other subsidiaries then built on this initial operating model to meet their specific needs. The customer now uses Azure Blueprints and Azure Policy to deploy self-service landing zones that comply with global and local standards. Azure landing zones enabled the customer to successfully mitigate complexity and mold the cloud platform architecture to fit the centralized operating model they were looking for.

The second example comes from one of our customers in Germany preparing to move thousands of servers to Azure. Most of those servers hosted low-complexity, steady-state workloads governed by central operations on-premises. As part of the migration effort, the customer needed to transform and modernize IT operations, including adherence to high security and compliance requirements that were to take effect. In eight weeks, this customer was able to stand up an Azure environment in alignment with the transformation vision while meeting the new security and compliance requirements. The enterprise-scale flavor of Azure landing zones provided the implementation options needed for the destination to meet stringent requirements and enabled the enterprise transformation vision.

For an overview of landing zones and the considerations you should make to build your landing zone in Azure, view this Azure landing zones video.

How are Azure landing zones constructed?

To construct Azure landing zones, customers and partners first clarify how they prefer to deploy their landing zones. Next up are decisions on “design area” configuration options. Let’s take a look at a few of the “design areas” to demonstrate how they contribute to the construction of landing zones.

Deployment options: How to deploy Azure landing zones is an important early design decision. Each implementation option provides slightly different methods to match the skill level of your team and the operating model. User-interface based options and scripting-based methods, as well as deployments directly from GitHub are available.
Identity: Best practice guidance and enabling capabilities—Azure Active Directory, Azure role-based access control (RBAC), and Azure Policy—help establish and preserve the right levels of identity and access across the cloud platform. The best practices, decision guides, and references in Azure landing zones help design the foundation with a secure and compliant approach.
Resource organization: Sound governance starts with standards for organizing resources. Naming and tagging standards, subscription design (segmentation of resources), and management group hierarchy (consistent organization of segments) are needed to reflect operating model preferences; a tagging sketch follows this list. Landing zones provide the guidance to get started.
Business continuity and disaster recovery (BCDR): Reliability and rapid recovery are essential for business continuity. Design areas within landing zones guide customers to set up destination environments with high degrees of protection and faster recovery options.
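To make the resource organization guidance above concrete, here is one way naming and tagging standards might be applied programmatically. This is a minimal sketch assuming the azure-identity and azure-mgmt-resource Python packages; the naming convention and tag keys are illustrative choices, not prescribed Azure landing zone standards.

```python
# Hedged sketch: creating a resource group that follows an example naming
# convention (<org>-<workload>-<env>-rg) and a tagging standard consumed
# by governance and cost reporting. All names and tag keys are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<subscription-id>"
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

rg_name = "contoso-payments-prod-rg"  # example naming convention
client.resource_groups.create_or_update(
    rg_name,
    {
        "location": "westeurope",
        "tags": {
            "costCenter": "CC-1234",
            "environment": "prod",
            "owner": "payments-team",
        },
    },
)
```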

“The landing zone that serves as a foundation for customers’ identity, security, networking, operations, and governance needs tends to be a lynchpin of success for future migrations. Claranet prides itself on getting this right, in addition to helping build an excellent post-migration operational model. Our collaboration with the Azure Migration Program (AMP) team was tremendously helpful to our customers, bringing the best of what we have together with Microsoft’s recommendations and focusing on the landing zone to better prepare for their growing cloud portfolio.”—Mark Turner, Cloud Business Unit Director, Claranet

Getting started with Azure landing zones

To guide our customers and partners in getting cloud destination environments ready with Azure landing zones, the Ready section of the Cloud Adoption Framework (CAF) provides step-by-step, prescriptive guidance. We recommend that customers start with the following three steps within CAF to educate and activate their migration crews:

Begin by determining which cloud operating model reflects the right balance for your agility and governance needs.
Continue on to the "design areas" for Azure landing zones for an overview of the configuration options available to achieve your operating model.
Select an Azure landing zone implementation option to match your selected operating model, migration scope, and velocity. Once you’ve identified the best option, deployment instructions and supporting scripts can automatically deploy reference implementations of each Azure landing zone.

Customers truly realize the value of migrations once they have started operating from the cloud. Cloud destinations that enable innovation and agility, while ensuring governance and security, are key to accelerating that value realization. Azure landing zones are ready to guide customers and partners in setting up cloud destinations and, more importantly, in setting up post-migration success.
Source: Azure

NFS 4.1 support for Azure Files is now in preview

Azure Files is a distributed cloud file system, generally available since 2015, that serves the SMB and REST protocols. Customers love how Azure Files enables them to easily lift and shift their legacy workloads to the cloud without any modifications or changes in technology. SMB works great on both Windows and UNIX operating systems for most use cases. However, because some applications are written for POSIX-compliant file systems, our customers wanted to have the same great experience on a fully POSIX-compatible NFS file system. Today, it’s our pleasure to announce Azure Files support for the NFS v4.1 protocol!

NFS 4.1 support for Azure Files will provide our users with a fully managed NFS file system as a service. This offer is built on a truly distributed resilient storage platform that serves Azure Blobs, Disks, and Queues, to name just a few components of Azure Storage. It is by nature highly available and highly durable. Azure Files also supports full file system access semantics such as strong consistency and advisory byte range locking, and can efficiently serve frequent in-place updates to your data.

Common use cases

Azure Files NFS v4.1 has a broad range of use cases. Most applications written for Linux file systems can run on NFS. Here is a subset of customer use cases we have seen during the limited preview:

Linux application storage:

Shared storage for applications like SAP, storage for images or videos, Internet of Things (IoT) signals, etc. In this context, one of our preview customers said:

“T-Systems is one of the leading SAP outsourcers. We were looking for a highly performant, highly available, zone-redundant Azure-native solution to provide NFS file systems for our SAP landscape deployments. We were thrilled to see Azure Files exceeding our performance expectations. We also see huge cost savings and reduced complexity compared to other available cloud solutions.”  – Lars Micheel, Head of SAP Solution Delivery and CTO PU SAP.

End user storage:

Shared file storage for end user home directories and home directories for applications like Jupyter Notebooks. Also, some customers used it for lift-and-shift of datacenter NAS data to the cloud in order to reduce the on-premises footprint and expand to more geographic regions with agility. In this context, one of our preview customers said:

“Cloudera is well known for our machine learning capabilities, an industry analyst firm called us a “machine learning – machine” when they named us a leader in a recent report. We needed a high performance NFS file system to match our ML capabilities. Azure Files met all the requirements that Cloudera Machine Learning has for a real filesystem and outperformed all the alternatives. Because it is integrated with the Azure Storage stack, my expectation is that it’s going to be cheaper and far easier to manage than the alternatives as well.”  –  Sean Mackrory, Software Engineer, Cloudera

Container-based applications:

Persistent storage for Docker and Kubernetes environments. We are also launching the preview of the CSI driver for Azure Files support for NFS today.

Databases:

Hosting Oracle databases and taking their backups using Recovery Manager (RMAN). The Azure Files premium tier was purpose-built for database workloads, with first parties taking dependencies on it.

Management

You get the same familiar share management experience on Azure Files through the Azure portal, PowerShell, and CLI:

Create an NFS file share with a few clicks in the Azure portal
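If you prefer to script share creation rather than click through the portal, the following is a hedged sketch using the azure-mgmt-storage Python SDK. It assumes an existing premium FileStorage account and an SDK version that exposes the enabled_protocols property; all resource names are placeholders.

```python
# Hedged sketch: creating an NFS file share on a premium FileStorage
# account with the azure-mgmt-storage SDK. Assumes azure-identity and a
# recent azure-mgmt-storage; names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

share = client.file_shares.create(
    resource_group_name="my-rg",
    account_name="mypremiumaccount",  # must be a premium FileStorage account
    share_name="nfsshare",
    file_share={
        "share_quota": 1024,           # provisioned size in GiB
        "enabled_protocols": "NFS",    # NFS instead of the default SMB
    },
)
print(share.name, share.enabled_protocols)
```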

Security

Azure Files uses AES-256 for encryption at rest. You also have the option to encrypt all of your data using keys that you own, managed by Azure Key Vault. Your share can be accessed from within a region, from another region, or from on-premises by configuring secure virtual networks to allow NFS traffic privately between your volume and its destination. Traffic to NFS shares must originate from a trusted VNet; all access to an NFS share is denied by default unless access is explicitly granted by configuring the right network security rules.

Performance

The NFS protocol is available on the Azure Files premium tier. Performance scales linearly with provisioned capacity: you can get up to 100K IOPS and 80 Gibps of throughput on a single 100 TiB volume.

Backup

Backing up your data on NFS shares can be orchestrated either with familiar tooling like rsync or with products from one of our third-party backup partners. Multiple backup partners, including Commvault, Veeam, and Veritas, were part of our initial preview and have extended their solutions to work with both SMB 3.0 and NFS 4.1 for Azure Files.

Migration

For data migration, you can use standard tools like scp and rsync. Because file storage can be accessed from multiple compute instances concurrently, you can improve copying speeds with parallel uploads (see the sketch below). If you want to migrate data from outside of a region, use VNet peering, a VPN, or ExpressRoute to connect to your file system from another Azure region or your on-premises data center.
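As a rough illustration of the parallel-upload point above, the sketch below fans out one rsync process per top-level directory using only the Python standard library. The source path and mount point are placeholders, and it assumes rsync is installed and the NFS share is already mounted.

```python
# Hedged sketch: parallelizing an rsync migration into a mounted Azure
# Files NFS share. Paths are placeholders; tune max_workers to your
# network and instance size.
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

SRC = Path("/data")        # source directory on the existing NAS
DST = "/mnt/azurefiles"    # mount point of the Azure Files NFS share

def sync(subdir: Path) -> None:
    # -a preserves permissions, timestamps, and symlinks
    subprocess.run(["rsync", "-a", str(subdir), DST], check=True)

# One rsync per top-level directory, several in flight at once.
with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(sync, (p for p in SRC.iterdir() if p.is_dir())))
```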

Pricing

This offer is charged based on premium tier pricing. You can provision shares as small as 100 GiB and increase your capacity in 1 GiB increments. See premium tier pricing on the Azure Files pricing page.

Get started

NFS 4.1 support for Azure Files is available in a select set of regions today, and we will continually add more regions to this list in the coming weeks. Get started today by following these simple step-by-step instructions!

Next steps

We would love to hear your feedback as we continue to invest heavily in adding more features and improving the performance of the NFS v4.1 offer. For direct feedback and inquiries, please email us at azurefilesnfs@microsoft.com.
Source: Azure

Azure NetApp Files cross region replication and new enhancements in preview

As businesses continue to adapt to the realities of the current environment, operational resilience has never been more important. As a result, a growing number of customers have accelerated a move to the cloud, using Microsoft Azure NetApp Files to power critical pieces of their IT infrastructure, like Virtual Desktop Infrastructure, SAP applications, and mission-critical databases.

Today, we release the preview of Azure NetApp Files cross region replication. With this new disaster recovery capability, you can replicate your Azure NetApp Files volumes from one Azure region to another in a fast and cost-effective way, protecting your data from unforeseeable regional failures. We’re also introducing important new enhancements to Azure NetApp Files to provide you with more data security, operational agility, and cost-saving flexibility.

Azure NetApp Files cross region replication

Azure NetApp Files cross region replication leverages NetApp SnapMirror® technology; therefore, only changed blocks are sent over the network in a compressed, efficient format. This proprietary technology minimizes the amount of data required to replicate across regions, saving on data transfer costs. It also shortens replication time, so you can achieve a smaller recovery point objective (RPO).

Over the next few months of the Azure NetApp Files cross region replication preview, you can expect:

Multiple replication frequency options: you can replicate an Azure NetApp Files NFS or SMB volume across regions, with a choice of replication frequency of every 10 minutes, every hour, or once a day.
Read from secondary: you can read from the secondary volume during active replication.
Failover on-demand: you can fail over to the secondary volume at a time of your choice. After a failover, you can also resynchronize the primary volume from the secondary volume at a time of your choice.
Monitoring and alerting: you can monitor the health of volume replication and the health of the secondary volume through Azure NetApp Files metrics and receive alerts through Azure Monitor.
Automation: you can automate the configuration and management of Azure NetApp Files volume replication through the standard Azure REST API, SDKs, command-line tools, and ARM templates (see the sketch after this list).
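To hint at what the automation item above might look like in practice, here is a rough sketch using the azure-mgmt-netapp Python SDK to create the destination (data-protection) volume of a replication pair. The data_protection shape, schedule value, and resource IDs are assumptions that may differ across SDK and API versions; consult the Azure NetApp Files documentation for the authoritative workflow.

```python
# Rough sketch: creating a cross region replication destination volume
# with azure-mgmt-netapp. Field names and enum values are assumptions;
# all resource names and IDs are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.netapp import NetAppManagementClient

client = NetAppManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.volumes.begin_create_or_update(
    "dr-rg",            # resource group in the destination region
    "dr-account",       # NetApp account
    "dr-pool",          # capacity pool
    "dr-volume",        # destination volume name
    {
        "location": "westus2",
        "creation_token": "dr-volume",
        "usage_threshold": 107374182400,  # 100 GiB in bytes
        "subnet_id": "<delegated-subnet-resource-id>",
        "data_protection": {
            "replication": {
                "endpoint_type": "dst",          # destination endpoint
                "replication_schedule": "hourly",
                "remote_volume_resource_id": "<source-volume-resource-id>",
            }
        },
    },
)
volume = poller.result()  # replication must then be authorized on the source
```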

Supported region pairs

Azure NetApp Files cross region replication is available in popular regions across the US, Canada, EMEA, and Asia at the start of the public preview. Azure NetApp Files documentation will keep you up to date with the latest supported region pairs.

Getting started

Join the preview waitlist now. Once your subscription is enabled for the preview, you can find the feature in the portal (Figure 1), and within a few clicks you'll be able to configure your first Azure NetApp Files cross region replication (Figure 2).

Figure 1: You can add cross region replication by selecting "Add data replication" from Azure NetApp Files volume management view.

Figure 2: Cross region replication is successfully configured for an Azure NetApp Files volume.

Learn more about Azure NetApp Files cross region replication through the Azure NetApp Files documentation.

Learn more about our pricing

During preview, Azure NetApp Files cross region replication will be offered at full price. Pricing information will be available on the Azure NetApp Files pricing page. You can learn more about the Azure NetApp Files cross region replication cost model through the Azure NetApp Files documentation.

Volume snapshot policy

Azure NetApp Files allows you to create point-in-time snapshots of your volumes. Starting now, you can create a snapshot policy to have Azure NetApp Files automatically create volume snapshots at a frequency of your choice. You can schedule the snapshots to be taken in hourly, daily, weekly or monthly cycles. You can also specify the maximum number of snapshots to keep as part of the snapshot policy. This feature is free of charge (normal Azure NetApp Files storage cost still applies) and is currently in preview. You can register for the feature preview by following the volume snapshot policy documentation.

Dynamic volume tier change

Cloud promises flexibility in IT spending. You can now change the service level of an existing Azure NetApp Files volume by moving the volume to another capacity pool that uses the service level you want for the volume. This in-place service-level change for the volume does not require that you migrate data. It also does not impact the data plane access to the volume. You can change an existing volume to use a higher service level for better performance, or to use a lower service level for cost optimization. This feature is free of charge (normal Azure NetApp Files storage cost still applies) and is currently in public preview. You can register for the feature preview by following the dynamic volume tier change documentation.

Simultaneous dual-protocol (NFS v3 and SMB) access

You can now create an Azure NetApp Files volume that allows simultaneous dual-protocol (NFS v3 and SMB) access with support for LDAP user mapping. This feature enables use cases where you may have a Linux-based workload that generates and stores data in an Azure NetApp Files volume. At the same time, your staff needs to use Windows-based clients and software to analyze the newly generated data from the same Azure NetApp Files volume. The simultaneous dual-protocol access feature removes the need to copy the workload-generated data to a separate volume with a different protocol for post-analysis, saving storage cost, and operational time. This feature is free of charge (normal Azure NetApp Files storage cost still applies) and is generally available. Learn more from the simultaneous dual-protocol access documentation.

NFS v4.1 Kerberos encryption in transit

Azure NetApp Files now supports NFS client encryption in Kerberos modes (krb5, krb5i, and krb5p) with AES-256 encryption, providing you with additional data security. This feature is free of charge (normal Azure NetApp Files storage cost still applies) and is generally available. Learn more from the NFS v4.1 Kerberos encryption documentation.

Azure Government regions

Lastly, we’re pleased to announce the general availability of Azure NetApp Files in Azure Government regions, starting with US Gov Virginia and coming soon to US Gov Texas and US Gov Arizona. Take a look at the latest Azure NetApp Files regional availability and region roadmap.

Get it, use it, and tell us about it

As with other previews, the public preview features should not be used for production workloads until they reach general availability.

We look forward to hearing your feedback on these new capabilities. You can email us feedback at ANFFeedback@microsoft.com. As always, we love to hear all of your ideas and suggestions about Azure NetApp Files, which you can post at Azure NetApp Files feedback forum.
Source: Azure