A Better Way to Discover Blogs and Get Inspired

WordPress is home to millions of sites across countless topics. It’s a big and beautiful world, and we want to make it easier for you to discover new voices. Over the past few months, the mobile team has been working hard to improve the experience of your WordPress Reader on the mobile app. In particular, we’ve been exploring different ways for you to discover new blogs and find inspiration.

The new Discover tab on your Reader will recommend blogs and posts based on topics you follow. These changes give you more control over what you see, making it easier for you to find interesting voices, while also giving you and your site the opportunity to find a bigger audience. 

How it works

Add appropriate tags and categories when drafting your blog posts — this helps us recommend your posts to the right audience. 

The topics you now see in your improved Reader are a combination of tags and categories. If you want to find interesting blogs, follow topics you care about. The Discover tab will then show you recommended blogs and posts based on those topics.

Each post on the Discover tab has a list of topics on top. If you want to go deeper into a topic, tap on it to see a feed of blog posts from that specific topic.

If you’d like to see more posts from a particular topic on your Discover feed, tap the Follow button from that topic feed.

Soon we’ll be rolling out improvements to posts on the Reader as well. To give blog posts more room to shine, the featured image will be more prominent. 

If you’d like to try the new Discover tab, make sure you update your WordPress app to the latest version. If you don’t have the app yet, you can download it for free on both Android and iOS. We’d love to hear your thoughts on the new experience. For specific feedback on the updates, reach out to us from within the app: go to My Site, tap your photo on the top right, tap Help & Support, and then select Contact Support.
Source: RedHat Stack

Modern detection for modern threats: Changing the game on today’s threat actors

2020 has introduced complex challenges for enterprise IT environments. Data volumes have grown, attacker techniques have become complex yet more subtle, and existing detection and analytics tools struggle to keep up. In legacy security systems, it’s difficult to run many rules in parallel and at scale—so even if detection is possible, it may be too late. Most analytics tools use a data query language, making it difficult to write detection rules for the scenarios described in frameworks such as MITRE ATT&CK. Finally, detections often require threat intelligence on attacker activity that many vendors simply don’t have. As a result, security tools are unable to detect many modern threats.

To address these needs, today at Google Cloud Security Talks we’re announcing Chronicle Detect, a threat detection solution built on the power of Google’s infrastructure to help enterprises identify threats at unprecedented speed and scale. Earlier this year at RSA, we introduced the building blocks for Chronicle Detect: a data fusion model that stitches events into a unified timeline, a rules engine to handle common events, and a language for describing complex threat behaviors. With today’s announcement, we complete the rest of the solution.

“The scale and SaaS deployment model of Google Chronicle drove NCR’s initial interest and investment. Their speed to deliver new features and integrations has kept us productive and continued to impress. By operationalizing Chronicle for threat investigations, we have significantly improved our detection metrics. As an early design partner with Chronicle around its rules engine, Chronicle Detect, we see a clear opportunity to extend its benefits and impact to advanced threat detection.” —Bob Varnadoe, CISO at NCR Corporation

Introducing Chronicle’s next generation rules engine

Chronicle Detect brings modern threat detection to enterprises with the next generation of our rules engine that operates at the speed of search, a widely used language designed specifically for describing threat behaviors, and a regular stream of new rules and indicators built by our research team.

Chronicle Detect makes it easy for enterprises to move from legacy security tools to a modern threat detection system. Using our Google-scale platform, security teams can send their security telemetry to Chronicle at a fixed cost so that diverse, high-value security data can be taken into account for detections. We automatically make that security data useful by mapping it to a common data model across machines, users, and threat indicators, so that you can quickly apply powerful detection rules to a unified set of data.

Detection rules trigger based on high-value security telemetry sent to the Chronicle platform.

With Chronicle Detect, you can use advanced rules out of the box, build your own, or migrate rules over from legacy tools. The rules engine incorporates one of the most flexible and widely used detection languages in the world, YARA, which makes it easy to build detections for the tactics and techniques found in the commonly used MITRE ATT&CK security framework. YARA-L, a language for describing threat behaviors, is the foundation of the Chronicle Detect rules engine. Many organizations are also integrating Sigma-based rules that work across systems, or converting their legacy rules to Sigma for portability.
Chronicle Detect includes a Sigma-YARA converter so that customers can port their rules to and from our platform.

Using the YARA-L language, it’s easy to edit and build detection rules in the Chronicle interface.

Get real-time threat indicators and automatic rules from Uppercase

Chronicle customers can also take advantage of detection rules and threat indicators from Uppercase, Chronicle’s dedicated threat research team. Uppercase researchers leverage a variety of novel tools, techniques, and data sources (including Google threat intelligence and a number of industry feeds) to provide Chronicle customers with indicators spanning the latest crimeware, APTs, and unwanted malicious programs. The Uppercase-provided IOCs—such as high-risk IPs, hashes, domains, and registry keys—are analyzed against all security telemetry in your Chronicle system, letting you know right away when high-risk threat indicators are present in your environment.

“As an early adopter, Quanta has benefited from Chronicle’s scale, performance and economic benefits in security investigations and threat hunting. We are excited to see Chronicle extend the Google advantage to threat detection with the launch of Chronicle Detect backed by the Chronicle Uppercase research team.” —James Stinson, VP of IT at Quanta Services, Inc.

The combination of these capabilities helps enterprises uncover multi-event attacks in their systems, such as a new email sender followed by an HTTP POST to a rare domain, or a suspiciously long PowerShell script accessing a low-prevalence domain.

Since joining Google Cloud over a year ago, the Chronicle team has been innovating on our investigation and hunting platform to bring a new set of capabilities to the security market—and we won’t stop here. Chronicle has also added new global availability and data localization options, including data center support for all capabilities in Europe and the Asia Pacific region. We’ll continue to build out integrations and help enterprises uncover threats with Chronicle wherever their data and applications reside: on-premises, in Google Cloud, and even in other cloud environments. To learn more about Chronicle Detect, read the Chronicle blog or contact the Chronicle sales team.
Source: Google Cloud Platform

SRE Classroom: exercises for non-abstract large systems design

Have you ever tried your hand at designing a resilient distributed software system? If you have, you likely found that there are many factors that contribute to the overall reliability of a system. Different parts of the system can fail in varied and unexpected ways. Certain architecture patterns work well in some situations, but poorly in others. There are many tradeoffs to be made about which parts of the system to optimize and when to optimize them.

Navigating the many nuances of designing a distributed system can be daunting. However, anyone can be equipped to tackle these problems with the right tools and practice. There are many ways to design distributed systems. One way involves growing systems organically, adding and rewriting components as the system handles more requests or changes scope. At Google, we use a method called non-abstract large system design (NALSD). NALSD is an iterative process for designing, assessing, and evaluating distributed systems such as the Borg cluster management system for distributed computing and the Google distributed file system.

With this in mind, we’ve developed exercises to provide hands-on experience with NALSD techniques. NALSD exercises are designed to equip engineers with the foundational knowledge and problem-solving skills needed to design planet-scale systems. You’ll learn how to evaluate whether a particular design achieves a service’s required service-level objectives (SLOs). These workshops challenge you to translate abstract designs into concrete plans using back-of-the-envelope calculations. Most importantly, they provide a chance for you to put these abstract concepts into practice.

Planet-scale system (noun): A system that delivers services to users, no matter where they are around the world. Such a system delivers its services reliably, with high performance and availability to all of its users.

SRE Classroom and the first NALSD workshop

Developed by Google engineers, SRE Classroom is a workshop series designed to drive understanding of concepts like NALSD and other core SRE principles. Over the past few years, these workshops—taught within Google and at external conferences—have helped numerous engineers improve their system design and thinking skills. Our mission is to ensure engineering teams everywhere can understand and apply these concepts and best practices to their own systems.

We’re pleased to make available all of the materials for our Distributed Pub/Sub workshop—the first of our NALSD-focused exercises from SRE Classroom. You can now freely use and re-use this material, available under the Creative Commons CC-BY 4.0 license, as long as Google is credited as the original author. Run your own version of this workshop and teach your coworkers, customers, or conference attendees how to design large-scale distributed systems!

What’s covered in the Distributed Pub/Sub workshop

The Pub/Sub exercise is about designing a planet-scale asynchronous publish-subscribe communication system. The workshop presents the problem statement, describes the requirements and available infrastructure, and walks through a sample solution. The workshop material is broken into three stages:

Design a working solution for a single data center.
Extend that design to multiple data centers.
Provision the system (i.e., how much hardware and bandwidth do we need?).
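To make that last stage concrete, here is a rough back-of-the-envelope estimate written as a small Python script. It is only a sketch: the traffic figures, message sizes, and per-machine limits below are illustrative assumptions, not numbers taken from the workshop materials.

    # Back-of-the-envelope provisioning estimate for a single-datacenter pub/sub system.
    # Every input below is an illustrative assumption, not a workshop answer.

    MSGS_PER_SECOND = 500_000      # assumed peak publish rate
    AVG_MSG_BYTES = 2_000          # assumed average message size
    FANOUT = 2                     # assumed average subscribers per message
    REPLICATION = 3                # assumed copies kept for durability

    NIC_BYTES_PER_SEC = 10 * 1000**3 / 8    # 10 Gbps network interface
    DISK_BYTES_PER_SEC = 200 * 1000**2      # 200 MB/s sustained disk writes
    UTILIZATION = 0.6                        # never plan to run machines at 100%

    ingress = MSGS_PER_SECOND * AVG_MSG_BYTES    # bytes/s arriving from publishers
    egress = ingress * FANOUT                    # bytes/s delivered to subscribers
    durable_writes = ingress * REPLICATION       # bytes/s written across replicas

    machines_for_network = (ingress + egress) / (NIC_BYTES_PER_SEC * UTILIZATION)
    machines_for_disk = durable_writes / (DISK_BYTES_PER_SEC * UTILIZATION)
    machines = max(machines_for_network, machines_for_disk)

    print(f"network-bound estimate: {machines_for_network:.1f} machines")
    print(f"disk-bound estimate:    {machines_for_disk:.1f} machines")
    print(f"provision at least:     {int(machines) + 1} machines, plus headroom for failures")

The workshop itself walks through this kind of estimation in far more depth.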
For each stage of the workshop, participants will work through their own solution first. After they have a chance to explore their own ideas, the workshop leader presents a sample solution along with the reasons why certain design decisions were made.

The exercise covers a wide variety of topics related to distributed system design, including scaling, replication, sharding, consensus, availability, consistency, distributed architecture patterns (such as microservices), and more. We present these concepts in contexts where they are useful to solving the problem at hand: designing a system to meet specific requirements. This helps bring clarity to where and why a particular concept might be useful for solving a particular problem.

Typically, when we run this workshop, we break participants up into groups of four to six to work collaboratively toward a solution. Each group is paired with an experienced SRE volunteer who facilitates the discussion, encourages participation, and keeps the group on track.

Run your own Pub/Sub workshop!

If this sounds interesting, check out the Presenter Guide and the Facilitator Guide, which have a lot more information on how to organize a Distributed Pub/Sub workshop. If you don’t have a whole team to educate, you can also work through this exercise with a buddy or on your own. Exploring multiple solutions to the problem and identifying the pros and cons of each solution may also be a meaningful exercise.

Learn more about SRE and industry-leading practices for service reliability.
Source: Google Cloud Platform

Cloud Run for Anthos brings eventing to your Kubernetes microservices

Building microservices on Google Kubernetes Engine (GKE) provides you with maximum flexibility to build your applications, while still benefiting from the scale and toolset that Google Cloud has to offer. But with great flexibility comes great responsibility. Orchestrating microservices can be difficult, requiring non-trivial implementation, customization, and maintenance of messaging systems. Cloud Run for Anthos now includes an events feature that allows you to easily build event-driven systems on Google Cloud. Now in beta, the events feature assumes responsibility for the implementation and management of eventing infrastructure, so you don’t have to.

With events in Cloud Run for Anthos, you get:

The ability to trigger a service on your GKE cluster without exposing a public HTTP endpoint
Support for Google Cloud Storage, Cloud Scheduler, Pub/Sub, and 60+ Google services through Cloud Audit Logs
Custom events generated by your code to signal between services through a standardized eventing infrastructure
A consistent developer experience, as all events, regardless of the source, follow the CloudEvents standard

You can use events for Cloud Run for Anthos for a number of exciting use cases, including:

Use a Cloud Storage event to trigger a data processing pipeline, creating a loosely coupled system with minimum effort.
Use a BigQuery audit log event to initiate a process each time a data load completes, loosely coupling services through the data they write.
Use a Cloud Scheduler event to trigger a batch job, so you can focus on the code of the job rather than on its scheduling.
Use custom events to signal directly between microservices, leveraging the same standardized infrastructure for any asynchronous coordination of services.

How it works

Cloud Run for Anthos lets you run serverless workloads on Kubernetes, leveraging the power of GKE. This new events feature is no different, offering standardized infrastructure to manage the flow of events and letting you focus on what you do best: building great applications. The solution is based on open-source primitives (Knative), avoiding vendor lock-in while still providing the convenience of a Google-managed solution.

Let’s see events in action. The demo app builds a BigQuery processing pipeline that queries a dataset on a schedule, creates charts out of the data, and then notifies users about the new charts via SendGrid. You can find the demo on GitHub. You’ll notice that the services do not communicate directly with each other; instead, events on Cloud Run for Anthos ‘wire up’ coordination between them. Let’s break the demo down step by step.

Step 1 – Create the trigger for Query Runner: First, create a trigger targeting the Query Runner service based on a Cloud Scheduler job.

Step 2 – Handle the event in your code: In our example we need details provided in the trigger. These are delivered via the HTTP headers and body of the request and can easily be unmarshalled using the CloudEvents SDK and client libraries; in the demo this is done in C#.
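The demo’s event handler is written in C#, and its code is not reproduced here. As a rough illustration of the same step in Python (an assumption made for this write-up, not the demo’s actual code), a service can reconstruct the incoming CloudEvent from the HTTP request using the CloudEvents SDK:

    # Illustrative only: the original demo uses C#. This sketch uses the Python
    # CloudEvents SDK and Flask (pip install cloudevents flask).
    from cloudevents.http import from_http
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/", methods=["POST"])
    def handle_event():
        # Rebuild the CloudEvent from the HTTP headers and body of the request.
        event = from_http(request.headers, request.get_data())
        print(f"Received event id={event['id']} type={event['type']} source={event['source']}")
        # ... run the scheduled BigQuery query here ...
        return "", 204

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)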
Step 3 – Signal the Chart Creator with a custom event: Using custom events, we can easily signal a downstream service without having to maintain a backend. In this example we raise an event of type dev.knative.samples.querycompleted, and then use a gcloud command to create a trigger for the Chart Creator service that fires when that custom event occurs.

Step 4 – Signal the Notifier service based on a Cloud Storage event: We can trigger the Notifier service once the charts have been written to Cloud Storage by simply creating a Cloud Storage trigger.

And there you have it! From this example you can see how, with events for Cloud Run for Anthos, it’s easy to build a standardized event-based architecture without having to manage the underlying infrastructure. To learn more and get started, you can:

Get started with events for Cloud Run for Anthos
Follow along with our demo in our Qwiklab
View our recorded talk at Next 2020
Source: Google Cloud Platform

Gain IT efficiency for Windows Server with new Azure innovation

Companies such as Church and Dwight and Altair are migrating their Windows Server apps to Microsoft Azure to transform how they run their business, optimize costs, and increase IT efficiency and security. Today, we are excited to share new capabilities that continue to make Azure the best place to run Windows Server apps.

Increase IT efficiency with Azure Automanage

We often hear from customers that maintaining and operating servers on-premises is complex. Windows Server admins are responsible for day-to-day administration tasks such as backup, disaster recovery, and security updates. With security threats growing every day, ensuring that apps and data remain secure compounds this administrative burden.

This week, we’re introducing the preview of Azure Automanage, a new Azure service that helps customers significantly reduce day-to-day management tasks with automated operations across the entire lifecycle of Windows Server virtual machines (VMs). IT admins can now manage the entire VM lifecycle with point-and-click simplicity, individually or at scale.

Azure Automanage works with any new or existing Windows Server VM on Azure. It automatically implements VM management best practices as defined in the Microsoft Cloud Adoption Framework for Azure. Azure Automanage eliminates the need for service discovery, enrollment, and configuration of VMs. For example, Azure Automanage enables customers to implement security best practices by offering an easy way to apply an operating system baseline to VMs per Microsoft’s baseline configuration. Services such as Azure Security Center are automatically onboarded per the configuration profile chosen by the customer. If the VM’s configuration drifts from the applied best practices, Azure Automanage detects and automatically brings the VM back to the desired configuration. Learn more about Azure Automanage and join the preview.

Manage Azure Virtual Machines with Windows Admin Center in the Azure portal

Windows Admin Center delivers a modern, integrated, and simplified browser-based interface to configure and troubleshoot servers. Customers can also connect their on-premises Windows Server to Azure and use Azure services for backup, disaster recovery, centralized security management, and threat protection. Customers use Windows Admin Center to manage millions of Windows Server nodes today, and we’re continuously making it better based on their feedback.

This week, we are introducing the preview of Windows Admin Center available natively in the Azure portal. It is a built-in capability that enables customers to take advantage of the familiar Windows Admin Center experience to manage Windows Server VMs right within Azure. Customers can now do detailed management, configuration, troubleshooting, and maintenance from a unified user experience. For example, customers can launch an in-browser Remote Desktop (RDP) session for Azure Virtual Machines in a few clicks or manage expired certificates, right from the Azure portal. This new capability will be available for Windows Server 2016 and Windows Server 2019 versions. Learn more about Windows Admin Center in Azure portal.

Bring Azure services on-premises with Azure Arc and Azure Kubernetes Service

We understand that customers cannot move all their Windows Server apps to the cloud due to compliance requirements. With Azure Arc enabled servers now generally available, admins can use the Azure portal to manage and govern Windows Server anywhere.

We also want to offer options for customers who want to containerize Windows Server apps on-premises. This week, we are introducing the preview of Azure Kubernetes Service (AKS) on Azure Stack HCI. This new service simplifies the Kubernetes cluster deployment on Azure Stack HCI. It offers a consistent and familiar Azure experience with built-in security. Customers can use Azure management and governance services such as Azure Monitor to manage on-premises Kubernetes clusters. With AKS on Azure Stack HCI, customers can consistently and easily deploy their modern apps anywhere—cloud, on-premises and edge. Learn more about Azure Kubernetes Service on Azure Stack HCI and register to join the preview.

We are excited to share new Azure innovation that will help you gain IT efficiency for Windows Server. Whether you are attending Microsoft Ignite live or accessing the on-demand content, make sure to check out Windows Server sessions to see these capabilities in action. You can also register for Windows Server Summit where we will dive deep into these capabilities and more.

Azure. Invent with purpose.
Source: Azure

Unlock cost savings and maximize value with new Azure infrastructure innovation

Organizations including ASOS, Keiser University, and Manulife trust and build services on Azure to run their business-critical workloads and support their customers across the world. It’s customers such as these that fuel our desire to innovate. While this desire is ever-present, given the impact of the pandemic in recent months, organizations now more than ever are looking to adopt Microsoft Azure more rapidly to enable remote work, optimize costs, increase efficiency, and innovate.

Today we’re announcing several new Azure infrastructure capabilities that unlock cost savings, increase efficiency, and extend innovation anywhere—directly addressing the challenges we have heard from customers like yourself.

Enable remote work and business continuity

Azure has more than 60 regions worldwide, enabling customers to connect their employees, customers, and partners. Organizations can easily connect their data centers and branch offices to the Azure network, taking advantage of one of the fastest, most reliable, and secure networks in the world. Recently, we’ve seen an increased adoption of Azure networking services, such as Azure VPN Gateway and Azure Firewall, which are helping customers quickly connect to their resources securely. Customers are also taking advantage of Azure Site Recovery and Azure Backup, offering unlimited scale, to recover their business services in the case of an outage, and to safeguard the recovery of their data in the event of accidental deletion, corruption, or ransomware.

There has also been a surge in remote work powered by Windows Virtual Desktop. Windows Virtual Desktop delivers a secure and always up-to-date experience on Azure and provides the only multi-session Windows 10 desktop in the cloud. Customers can quickly and cost-effectively deploy virtual desktops within minutes, right from the Azure portal.

In less than two weeks, Keiser University completely transitioned from a traditional brick-and-mortar school to 100 percent online, by enabling remote work with Windows Virtual Desktop. As Keiser shifted its infrastructure to the cloud, their IT department was also able to achieve tighter security policies, accelerate performance, and lower costs.

Today, we’re highlighting some of the new features we’re introducing to enhance remote work and business continuity:

Preview of the Cisco SD-WAN native support within the Azure Virtual WAN hubs. This will enable customers to take advantage of SD-WAN (Software-Defined Wide Area Network) to improve performance while retaining existing investments and skills.
Preview of the global load balancer feature for Azure Load Balancer. Customers can now use this feature for latency-based traffic distribution across regional deployments or use it to improve application uptime with regional redundancy.
Coming soon in preview, new capabilities for Windows Virtual Desktop. Support for Microsoft Endpoint Manager for Windows 10 multi-session will enable a familiar method for securing and managing virtual desktops, the same way as physical devices. Azure Monitor integration will provide customers with a workbook that captures all the relevant monitoring telemetry and rich visualizations to identify and troubleshoot issues quickly. The MSIX app attach portal integration with Windows Virtual Desktop will enable the ability to add application layers from the Azure portal—with just a few clicks.
Preview of Backup Center to give customers the ability to monitor, operate, govern, and optimize data protection at scale, with consistent management in the Azure portal. Backup Center is also an action center from which you can trigger backup-related activities, such as configuring backup, restoring data, and creating policies or vaults—all from a single place.
Preview of backup support for Azure PostgreSQL through Azure Backup to enable long-term retention for Azure PostgreSQL.
Preview of cross-region-restore capabilities for SQL and SAP HANA backups through Azure Backup to enable customers to restore backup data from a secondary region at any given time.

Migrate to Azure to save money and achieve cloud scale and performance

Customers are increasingly choosing Azure as the trusted destination for their most demanding Windows Server, SQL Server, and Linux applications, taking advantage of offers that help them save money. Our comprehensive infrastructure delivers choice, flexibility, and scalability with great performance as your Azure footprint grows, making Azure the cloud to run business-critical applications. Manulife chose Azure as one of its cloud platforms, migrating and modernizing its business-critical applications to improve agility, scalability, risk management, and cost-efficiency, and to accelerate the support of new business models.

This week, we’re also announcing new capabilities that make Azure a great cloud to run Windows Server and Linux workloads including:

Preview of Azure Automanage for Windows Server to help customers significantly reduce day-to-day management tasks with automated operations across the entire lifecycle of Windows Server virtual machines (VMs) on Azure. IT admins can now manage their VMs with point-and-click simplicity, individually or at scale.
Preview of the Windows Admin Center in Azure to enable customers to perform deep Windows Server OS management on their Azure Virtual Machines right from Azure.
Preview of Azure Hybrid Benefit with improved flexibility and enhanced user experience for Red Hat Enterprise Linux and SUSE Linux Enterprise Server customers migrating to Azure. Customers can convert their pay-as-you-go instances to bring their own subscription without any downtime and maintain business continuity.
General availability of Flatcar Container Linux, compatible with CoreOS (which reached its end of life on May 26, 2020). Flatcar Container Linux is an immutable Linux distribution, making it a viable and straightforward migration choice for container workloads running on Azure.
Preview of the Azure Image Builder to streamline the cloud-native image building and customization process without the need for external IP addresses, providing customers better protection against vulnerabilities. This will be generally available by the end of this year.

In addition to the investments we’re making to support your Windows and Linux workloads, customers can migrate their business-critical applications to Azure with confidence by taking advantage of an expanded compute and storage portfolio, which offers improved performance and flexibility and support for your highly scalable apps:

General availability of Azure VMware Solution. Seamlessly extend or completely migrate existing on-premises VMware applications to Azure without the cost, effort, or risk of re-architecting the application. With Azure VMware Solution, customers experience the speed and agility of the cloud, while using existing VMware skills and tools, making Azure your one-stop shop to achieve cost savings and accelerate cloud adoption.
Preview of the ability to schedule Dedicated Host and isolated VM maintenance operations, giving customers more control over platform updates. Customers can also automate guest OS image updates on Virtual Machine Scale Sets, reducing manual upkeep.
Preview of two new Azure Dedicated Hosts features to simplify VM deployment at scale. When deploying Azure Virtual Machines in Dedicated Hosts, customers can enable the platform to select the host group to which the VM will be deployed. Customers can also use Virtual Machine Scale Sets in conjunction with Dedicated Hosts to enable use of scale sets across multiple dedicated hosts within a dedicated hosts group.
Preview of automatic VM guest patching to automate rollout of security patches and simplify application management, including enhanced monitoring capabilities.
Preview of the price history and associated eviction rates of Azure Spot Virtual Machines in the Azure portal to provide increased Azure cost transparency and predictability.
General availability of new Azure Virtual Machines. VMs based on 2nd generation Intel Xeon Platinum processors offer up to 20 percent greater CPU performance and better overall price-per-core performance compared to the prior generation. The new AMD EPYC™-based Dav4 and Eav4 Azure Virtual Machine series provide increased scalability (up to 96 vCPUs) in 18 regions.
Preview of the NC T4 series and ND A100 series VMs to enable AI computing. These are powerful and massively scalable VMs for AI workloads. With these new VM sizes and capabilities, customers can benefit from a greater range of underlying processor technologies.
General availability of Azure Private Link integration with disks to enhance the security of disk storage. This provides secure imports and exports of data over a private virtual network.
General availability of support for 512E format on Ultra Disks to enable migration of on-premises legacy applications to Azure with Ultra Disks, giving customers the ability to benefit from best-in-class performance of Ultra Disks.
Preview of disk performance tiers to offer the flexibility to increase disk performance independent of size, reducing costs.

In addition to new Azure services and updates, we’re investing in tools and programs to help our customers move to Azure. Azure Migrate, a central hub of tools to migrate your apps to Azure, can now perform a comprehensive discovery and assessment of your server estate, including agentless software inventory and dependency mapping. Once that is complete, customers can migrate workloads at scale, with added support now for Azure Availability Zone and Unified Extensible Firmware Interface (UEFI) migrations.

We’re also announcing new additions to the Azure Migration Program and FastTrack for Azure. Both the Azure Migration Program and FastTrack for Azure now support Windows Virtual Desktop to help customers accelerate their virtual desktop infrastructure (VDI) deployments while enabling a secure, remote desktop experience from anywhere. In addition, the Azure Migration Program supports ASP.NET web app migration scenarios to help customers scale their websites and reduce operational burden with innovative, fully managed services like Azure App Service and Azure SQL.

Bring innovation anywhere to your hybrid and multi-cloud environments

More customers are adopting a hybrid cloud approach to operate across distributed IT environments, benefit from on-premises investments, and take advantage of edge computing. These hybrid cloud capabilities must evolve to enable innovation anywhere, while providing seamless management and ensuring uncompromised security. With new capabilities now generally available, Azure Arc offers a consistent approach to managing Windows Servers, Linux Servers, and Kubernetes clusters on any infrastructure across on-premises, multi-cloud, and edge. Customers can also use the latest in the Azure Stack portfolio to modernize their data centers, remote offices, and edge locations. Learn more about updates to our Azure hybrid capabilities.

Secure apps and networks from increased cyberattacks

As the need to support remote work grows, customers must ensure security across their entire organization to reduce potential threats regardless of where IT resources sit. Microsoft invests $1 billion annually and has over 3,500 global security experts to monitor and secure the Azure environment and its resources. We provide built-in security controls across layers to protect your apps and data as they move around both inside and outside of your organization, and we simplify security management with a unified multi-cloud view into your security estate. We also keep your organization up to date on the security state of your workloads with AI-enabled intelligent insights and recommendations on how to further strengthen your assets or respond to threats.

Yesterday, we announced significant innovation in our Azure security suite with a preview of behavioral intelligence and third-party threat intelligence sources for Azure Sentinel, the first cloud-native SIEM on the market. We also announced the preview of Azure Defender, a new service within Azure Security Center that provides customers with more protection against threats entering the environment. Learn more about our new Azure security innovations.

These are just a few infrastructure innovations highlighted at Microsoft Ignite this week. Whether you’re attending the event live or accessing the recorded content, make sure to check out all of our Azure Infrastructure sessions and learn more about optimizing costs and maximizing value in our upcoming webinar miniseries. You can also take advantage of self-paced technical learning paths at Microsoft Learn.

We look forward to seeing you integrate these latest capabilities in your cloud adoption journey.

Azure. Invent with purpose.
Source: Azure

Achieving business resilience with cloud application development

Over the last six months, organizations of all shapes and sizes have had to suddenly pivot to serve customers, employees, and partners exclusively via digital channels. In this uncertain business environment, we have seen resilient organizations adapt in three dimensions: by supporting remote application development; improving business agility with a focus on Developer Velocity; and by driving cost savings. At Microsoft Ignite, we've shared new capabilities that enable developers and the teams they support to become more resilient with Microsoft Visual Studio, GitHub, Microsoft Azure, and Microsoft Power Apps.

Creating resilient development teams with remote development

At Build, we shared innovation in our developer tools and services that allow development teams to code, collaborate, and ship software from anywhere. Since then, we have seen how our customers have used these tools to adapt. The Academy of Motion Picture Arts and Sciences has moved its development process to the cloud using Visual Studio and Azure, and in doing so has made its development team twice as productive as before. We’ve also shared our own stories about how development teams at Microsoft have met the challenge of shifting to remote work.

In order to help developers meet today’s challenges, we’ve focused on making Visual Studio and Visual Studio Code the most productive developer tools for distributed development teams. Both have strong integration with GitHub, where over 50 million developers code together. With GitHub Codespaces, developers can create cloud-powered development environments right from Visual Studio and Visual Studio Code. The release of Visual Studio 2019 16.8 Preview 3.1 includes support for the GitHub Codespaces beta. Learn more about what's in the latest release so you can code in your own cloud-hosted dev box.

GitHub Codespaces integration with Visual Studio

In a remote context, development teams need to be able to communicate and collaborate in ways that are intuitive and natural. With Visual Studio and GitHub, developers can collaborate both asynchronously and in real time. We have updated the Git tooling experience in Visual Studio to enable more asynchronous collaboration with other repo contributors, and the GitHub extension for Visual Studio Code enables developers to work with GitHub Issues and pull requests directly in the editor. For real-time communication, Visual Studio Live Share is supported in Visual Studio, Visual Studio Code, and now in GitHub Codespaces, enabling developers to collaborate from anywhere.

With distributed team members pushing code changes more frequently, it’s more important than ever that your DevOps platform makes it easy to create seamless, automated, and secure code-to-cloud deployments. The publish experience in Visual Studio now has an option to generate a GitHub Actions workflow for CI/CD to your preferred Azure resources, using deployment secrets configured in your GitHub repository. We are also releasing new GitHub Actions for Azure to scan Azure resources for policy violations, check for vulnerabilities in container images, and deploy ARM templates. These actions enable developers to create automated code-to-cloud workflows with integrated security and governance, and help organizations adopt an “everything as code” DevOps model covering everything from infrastructure to compliance and security policies to build and release pipelines, enabling continuous improvement, better re-use, and greater transparency. To learn how to incorporate these actions into your workflows, check out our GitHub Actions for Azure documentation.

Increasing Developer Velocity and agility

In a recent study published by McKinsey & Co., companies that have a higher Developer Velocity Index (DVI) score experience up to a five-fold increase in revenue growth and 55 percent higher innovation. Public cloud adoption and modern application development practices—using a mix of cloud-native architectures with containers/Kubernetes and serverless functions, DevOps, managed databases, and rapid application development with low-code platforms—can help organizations increase Developer Velocity.

When it comes to increasing agility, we have seen that development teams that adopt DevOps are able to ship new features faster. Although many organizations are adopting DevOps, implementing effective practices at enterprise-scale can be difficult. To help with this, we have now published the Enterprise DevOps Report 2020–2021, a Microsoft and Sogeti research study of more than 250 cloud and DevOps implementations. In this report, you can learn how to scale your DevOps practices to improve business metrics, customer satisfaction, and Developer Velocity, creating the right environment for developers to innovate.

There is increasing demand to accelerate line-of-business (LoB) application development. In fact, the demand is growing 5X faster than IT departments can deliver. To address this challenge, Power Apps offers a low-code development experience that lets anyone create web and mobile frontends and business processes in days instead of weeks or months. Combined with Azure services, Power Apps allows development teams to scale to demand without needing to compromise on architectural fundamentals, compliance, quality, or scale. See how Priceline Australia gained insights from their 1,000+ retail stores using Power Apps and Azure. Today, we are announcing that developers can now build custom connectors, backed by Azure API Management and Azure Functions, to any Microsoft-hosted, third-party, legacy, or LoB app. We are also announcing GitHub integration for Power Apps, which allows developers to streamline application lifecycle management using the CI/CD tool they are already familiar with. These features are now available in preview.
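As a rough sketch of what such a backend could look like (assuming the Python programming model for Azure Functions; the GetOrderStatus function and the fields it returns are hypothetical), an HTTP-triggered function can surface line-of-business data that Azure API Management then exposes to a Power Apps custom connector:

    # __init__.py of a hypothetical HTTP-triggered Azure Function ("GetOrderStatus").
    # Fronted by Azure API Management, it can serve as the backend of a Power Apps
    # custom connector.
    import json

    import azure.functions as func


    def main(req: func.HttpRequest) -> func.HttpResponse:
        order_id = req.params.get("orderId")
        if not order_id:
            return func.HttpResponse("Missing 'orderId' parameter.", status_code=400)

        # A real function would look this up in a legacy or LoB system.
        payload = {"orderId": order_id, "status": "shipped"}
        return func.HttpResponse(
            json.dumps(payload),
            status_code=200,
            mimetype="application/json",
        )

In this pattern, API Management fronts the function, and the custom connector simply describes that API to Power Apps makers.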

Integrations with existing enterprise applications play a key role in delivering new features faster. Azure Logic Apps, our workflow platform with more than 300 connectors to enterprise and SaaS applications, has enabled over 40,000 customers to build workflows seamlessly. Today, we are announcing the preview of a new containerized runtime for Logic Apps (the same runtime that powers Azure Functions), offering the flexibility to host workflows on App Service plans, Kubernetes, or any cloud, along with enterprise features such as private endpoints, deployment slots, and more cost-effective virtual network (VNET) access.

I'm also sharing that .NET 5 Release Candidate is now available, with general availability coming on November 10, 2020 at .NET Conf. This release continues the journey to unify the .NET platform across mobile, web, desktop, machine learning, big data and IoT workloads, enabling developers to use a single platform for all their application needs. .NET 5 also has several cloud and web investments, such as smaller, faster single file applications that use less memory, which are appropriate for microservices and containerized applications. This release includes significant performance improvements, support for Windows ARM64, and new releases of C# 9.0 and F# 5.0 languages. Developers can now download .NET 5 RC with a go-live license with support for production deployments.

Delivering cost savings with the cloud

With remote work and digital customer engagement resulting in increased website traffic, many customers are finding that their existing web applications and infrastructure are limited in capacity and lack the agility to address changing business demands. Azure App Service hosts over 2M web apps and processes over 50B requests every day. Combined with Azure SQL Database, App Service offers a fully managed environment to migrate and modernize all your web apps.

In a recent report on .NET app modernization, GigaOm found that customers migrating their .NET Apps to Azure App Service and Azure SQL Database can save up to 54 percent compared to on-premises. City National Bank migrated an integrated accounting and bill-pay client solution, built on ASP.NET and SQL Server, to Azure App Service and Azure SQL, with minimal code changes. This migration helped them get a clear understanding of the ROI, better cost optimization and increased agility to launch new web and mobile apps faster. This week we're announcing several major investments in App Service to make it easier and more cost effective to migrate and modernize your .NET web apps with Azure.

The new Premium v3 (Pv3) App Service plan can handle large-scale web apps, supporting more apps per instance and larger, memory-intensive apps with up to 32 GB per instance. We’re also making Windows Containers support in App Service generally available, enabling customers to run a broader range of .NET applications with COM+ or custom OS dependencies.

Starting November 1, 2020, we will offer Reserved Instance (RI) pricing for App Service, delivering up to 35 percent cost savings with a 1-year commitment and up to 55 percent with a 3-year commitment, compared to pay-as-you-go prices. It’s a fantastic way to save even more as you look to migrate existing web apps to the cloud. For customers that need an isolated environment to secure their most sensitive web apps, we are announcing the preview of the App Service Isolated v2 plan, with a simplified deployment experience and no stamp fee, offering an 80 percent reduction in costs compared to Isolated v1 plans.

Kubernetes has become the standard way for customers to orchestrate containers at scale. We recently shared how to optimize your costs with scale-to-zero configurations, by leveraging spot node pools, and by using resource quota policies with Azure Policy for Azure Kubernetes Service (AKS). Today, we are announcing the preview of the AKS start/stop cluster feature, allowing customers to completely pause an AKS cluster and later pick up where they left off at the push of a button, saving time and cost. The Azure Policy add-on for AKS is now generally available, enabling customers to audit and enforce policies and drive in-depth compliance across pods, namespaces, and other Kubernetes resources.

We also want developers to be able to work with cloud resources as easily as if they were local. The release of our Bridge to Kubernetes extensions for Visual Studio and Visual Studio Code allows you to develop against microservices within a running AKS cluster from your development environment. This enables debugging existing services without needing to configure or deploy a new cluster. Support for AKS is generally available today, and is in preview for all other Kubernetes platforms. 

Lastly, we believe that Azure is the best place for open source, and we have been working to give developers more control, confidence, and options to reduce costs. Yesterday, we announced the preview of a new deployment option, Flexible Server, for Azure Database for MySQL and Azure Database for PostgreSQL. We also announced the preview of a new serverless pricing option for all Azure Cosmos DB APIs, which offers a cost-effective way to get started with Azure Cosmos DB and is a perfect fit for applications with intermittent traffic patterns. Learn more about innovation on databases.

I have always been inspired by developers, and I continue to be motivated to empower them and their teams. We have released new capabilities to help your team become more resilient with remote application development, and to increase agility and Developer Velocity while driving significant cost savings with Visual Studio, GitHub, Azure, and Power Apps. I hope you will join us in the Azure Application Development Keynote, where we will share more information about these releases and show some awesome demos.

Happy coding!

Azure. Invent with purpose.
Source: Azure

Achieve agility with Azure Data in a changing world

We live in demanding times, and change is happening faster than ever before. Data has always provided important insights into changing contexts, but to navigate the rapidly evolving landscape it is pivotal to add layers of intelligence, where AI advances decision making and predictive analytics at the edge unlocks new possibilities.

At Microsoft Ignite, we shared a number of announcements to help organizations rebuild in a changing world. Following the announcement of Azure Arc in November 2019, we are announcing the preview launch of Azure Arc enabled data services. With this preview, customers can now bring Azure data services to any infrastructure across data centers, edge, or any cloud using Kubernetes on their hardware of choice. Data services now in preview include SQL Managed Instance and PostgreSQL Hyperscale. Data sources are often spread across diverse infrastructures, which leads to challenges with data sovereignty, latency, and regulatory compliance. Customers can now benefit from Azure innovations such as always-current, evergreen SQL, elastic scale, and a unified management experience—while being able to run on any infrastructure.

We also announced the general availability of Azure SQL Edge, which brings the most secure SQL engine to IoT gateways and edge devices. Azure SQL Edge supports predictive intelligence with AI right where the action happens, with built-in data streaming and storage, packed into a small-footprint container—less than 500 megabytes—running on ARM- and x64-based devices in connected, disconnected, or semi-connected environments. All with the same industry-leading security, the same familiar developer experience, and the same tooling that customers already know and trust in SQL Server and Azure SQL.
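Because SQL Edge offers the same engine surface and tooling as SQL Server, existing drivers work unchanged. As a small, hedged sketch (the server address and credentials below are placeholders), connecting from Python with pyodbc looks just like connecting to any other SQL Server instance:

    # Placeholder server name, user, and password; Azure SQL Edge speaks the same
    # protocol as SQL Server, so standard drivers such as pyodbc work unchanged.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=edge-device.local,1433;"
        "UID=sa;PWD=<your-password>;"
        "TrustServerCertificate=yes;"
    )
    cursor = conn.cursor()
    cursor.execute("SELECT @@VERSION")
    print(cursor.fetchone()[0])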

Fugro, a leading global geo-data specialist, helps clients in the energy and infrastructure sectors gain vital insights using Azure IoT Edge and Azure SQL Edge to boost efficiency and speed innovation. Reports that once took two weeks to compile now take eight minutes.

“We use this geo-data to ensure that anything we build or operate on this rapidly changing planet is done in a safe and sustainable way.” – Pim Peereboom, Global Project Manager of Integrated Marine Management at Fugro

Azure SQL announcements at Microsoft Ignite

Microsoft continues to invest in our Azure SQL family of SQL cloud databases, including SQL Server on Azure Virtual Machines, Azure SQL Managed Instance, and Azure SQL Database. We announced several new capabilities in SQL Managed Instance that make for more seamless migration and app modernization in Azure—including upcoming support for distributed transactions between multiple SQL Managed Instances and a preview of Azure Machine Learning Services for R and Python analytics. With the general availability of global VNet peering, it’s now possible to connect virtual networks in different regions in an easy and performant way, for enhanced business continuity and disaster recovery options. Sign up to learn more at our Azure SQL virtual event, Transform Your Applications with Azure SQL.

We also made announcements that continue to deliver on our commitment to provide the best developer experience of any cloud. Azure Cosmos DB serverless, now in preview, offers consumption-based pricing with no minimums, making it a cost-effective way to get started with the service or run small applications with light traffic.
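The serverless option changes how an account is billed, not how you program against it. As a minimal sketch (the endpoint, key, and names below are placeholders), writing an item with the Azure Cosmos DB Python SDK is the same for serverless and provisioned-throughput accounts:

    # Placeholder endpoint, key, and names; the SDK calls are identical for
    # serverless and provisioned-throughput Azure Cosmos DB accounts.
    from azure.cosmos import CosmosClient, PartitionKey

    client = CosmosClient("https://<your-account>.documents.azure.com:443/", credential="<your-key>")
    database = client.create_database_if_not_exists(id="appdb")
    container = database.create_container_if_not_exists(
        id="events",
        partition_key=PartitionKey(path="/deviceId"),
    )

    container.upsert_item({
        "id": "evt-001",
        "deviceId": "sensor-42",
        "temperature": 21.5,
    })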

Flexible Server, a new deployment option now in preview for Azure Database for PostgreSQL and Azure Database for MySQL, offers developers enhanced choice with greater performance and manageability by building on a new architecture with native Linux integration. Developers can also benefit from a guided experience that simplifies end-to-end deployment and reduces costs with stop and start capabilities.

Azure Synapse, Azure Cosmos DB, and Azure Cache for Redis

Azure Cache for Redis now has two new product tiers in preview: Enterprise and Enterprise Flash. These tiers, developed in partnership with Redis Labs, integrate features from their Redis Enterprise offering for the first time on a major cloud platform, making caches larger and more reliable and providing new deployment options that unlock new use cases such as data analytics.

We are excited to see the ability to deliver near real-time analytics over operational data become a reality with Azure Synapse Link for Azure Cosmos DB. With a single click, you can now analyze large volumes of operational data in Azure Cosmos DB in near real time, with no extract, transform, and load (ETL) pipelines and no performance impact on transactional workloads.
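As an illustrative sketch rather than a prescribed recipe (the linked service name CosmosDbLinkedService and the container name orders are placeholders), a Synapse Spark notebook can query the Cosmos DB analytical store directly, so the transactional workload never sees the analytical query:

    # Runs inside an Azure Synapse Spark notebook, where `spark` is predefined.
    # "CosmosDbLinkedService" and "orders" are placeholder names.
    df = (
        spark.read.format("cosmos.olap")
        .option("spark.synapse.linkedService", "CosmosDbLinkedService")
        .option("spark.cosmos.container", "orders")
        .load()
    )

    # Near real-time aggregation over operational data, with no ETL pipeline.
    df.groupBy("status").count().show()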

As we look beyond the horizon, the opportunity to reinvent your business and achieve agility with your data is substantial. Your data has so much potential. We look forward to seeing what you can do with it.

Azure. Invent with purpose.
Source: Azure