How Deutsche Bank is building cloud skills at scale

Deutsche Bank (DB) is the leading bank in Germany with strong European roots and a global network. DB was eager to reduce the workload of managing legacy infrastructure so that its engineering community could instead focus on modernizing its financial service offerings. The bank's desire for solutions that could dynamically scale to meet demand and reduce time to market for new applications was a key driver for migrating its infrastructure to the cloud. Deutsche Bank and Google Cloud signed a strategic partnership in late 2020 to accelerate the bank's transition to the cloud and co-innovate the next generation of cloud-based financial services. This multi-year partnership is the first of its kind for the financial services industry.

In the process of migrating its core on-premises systems to Google Cloud, Deutsche Bank became acutely aware of the need to increase its technical self-sufficiency internally through talent development and enterprise-wide upskilling. Demand for cloud computing expertise has been surging across all sectors, and growth in cloud skills and training has been unable to keep pace with industry-wide cloud migration initiatives. As recent reports suggest, organizations need to take proactive steps to grow these talent pools themselves.

For Deutsche Bank, the scale of the skills and talent development challenge it was facing was significant. After many years of drawing help from outside contractors, much of the bank's engineering capability and domain knowledge was now concentrated outside its full-time workforce. This was exacerbated by fierce competition for cloud skills expertise across the industry as a whole. There was a clear and present need to reinvigorate DB's engineering culture, so developing, attracting, and retaining talent became a key dimension of the bank's cloud transformation journey.

A recent IDC study¹ demonstrates that comprehensively trained organizations drive developer productivity, boost innovation, and increase employee retention. With around 15,000 employees in its Technology, Data and Innovation (TDI) division across dozens of locations, DB needed to think strategically about how to deliver comprehensive learning experiences across multiple modalities, while still ensuring value for money.

Through the strategic partnership, Deutsche Bank could now draw upon the expertise and resources of Google Cloud Customer Experience services, such as Google Cloud Premium Support, Consulting, and Learning services, to develop a new structured learning program that could meet its business needs and target its specific skill gaps. With Premium Support, Deutsche Bank was able to collaborate with a Technical Account Manager (TAM) to receive proactive guidance on how to ensure the proposed learning program supported the bank's wider cloud migration processes. To help ensure the project's success, the TAM supporting Deutsche Bank connected with a wide range of domains across the bank, including apps and data, infrastructure and architecture, and onboarding and controls. Cloud Consulting services also worked with DB to consider the long-term impacts of the program and how it could be continuously improved to help build a supportive, dynamic engineering culture across the business as a whole. Google Cloud Learning services made this talent development initiative a reality by providing the necessary systems, expertise, and project management to help Deutsche Bank implement this enterprise-wide certification program.
In a complex, regulated industry like financial services, the need for content specificity is particularly acute. The new Deutsche Bank Cloud Engineering program leverages expert-created content and a cohort approach to provide learners with content tailored to their business needs, while also enabling reflection, discussion, and debate between peers and subject matter experts. Instructor-led training is deliberately agile and is being iterated across multiple modalities to help close any emerging gaps in DB employees' skill sets, and to ensure the right teams are prioritized for specific learning opportunities.

Google Cloud Skills Boost is another essential component of Deutsche Bank's strategy to increase its technical self-sufficiency. With Google Cloud's help, Deutsche Bank was able to create curated learning paths designed to boost cloud skills in a particular area. Through a combination of on-demand courses, quests, and hands-on labs, DB provided specialized training across multiple teams simultaneously, each of which has different needs and levels of technical expertise. Google Cloud Skills Boost also provides a unified learning profile so that individuals can easily track their learning journeys, while also providing easier cohort management for administrators.

It was equally important to establish an ongoing, shared space for upskilling to reinforce a culture of continuous professional development. Every month Deutsche Bank now runs an "Engineering Day" dedicated to learning, where every technologist is encouraged to focus on developing new skills. Many of these sessions are led by DB subject matter experts, and they explore how the bank is using a certain Google Cloud product or service in its current projects.

Alongside this broader enterprise-wide initiative, a more targeted approach was also taken to provide two back-to-back training cohorts with the opportunity to learn directly from Google Cloud's own artificial intelligence (AI) and machine learning (ML) engineers via the Advanced Solutions Lab (ASL). This allowed DB's own data science and ML experts to explore the use of MLOps on Vertex AI for the first time, enabling them to build end-to-end ML pipelines on Google Cloud and automate the whole ML process.

"The Advanced Solutions Lab has really enabled us to accelerate our progress on innovation initiatives, developing prototypes to explore S&P stock prediction and how apps might be configured to help partially sighted people recognize currency in their hand. These ASL programs were a great infusion of creativity, as well as an opportunity to form relationships and build up our internal expertise." — Mark Stokell, Head of Data & Analytics, Cloud & Innovation Network, Deutsche Bank

In the first 18 months of the strategic partnership, over 5,000 individuals were trained (adding nearly 10 new Google Cloud Certifications a week) and over 1,400 engineers were supported to achieve their internal DB Cloud Engineering certification. Such high uptake of and engagement with this new learning program signal its success and the value of continuing to invest in ongoing professional development for TDI employees.

"Skill development is a critical enabler of our long-term success. Through a mix of instructor-led training, enhancing our events with gamified Cloud Hero events, and providing opportunities for continuous development with Google Cloud Skills Boost, it genuinely feels like we've been engaging with the whole firm.
"With our cohort-based programs, we are pioneering innovative ways to enable learning at scale, which motivate hundreds of employees to make tangible progress and achieve certifications. With consistently high satisfaction scores, our learners clearly love it." — Andrey Tapekha, CTO of North America Technology Center, Deutsche Bank

After such a successful start to its talent development journey, Deutsche Bank is now better prepared to address the ongoing opportunities and challenges of its cloud transformation journey. Building on the shared resources and expertise of their strategic partnership, DB and Google Cloud are now turning their attention to assessing the impact of this learning program across the enterprise as a whole, and considering how the establishment of a supportive, dynamic learning culture can be leveraged to attract new talent to the company.

To learn more about how Google Cloud Customer Experience services can support your organization's talent transformation journey, visit:

● Google Cloud Premium Support to empower business innovation with expert-led technical guidance and support
● Google Cloud Training & Certification to expand and diversify your team's cloud education
● Google Cloud Consulting services to ensure your solutions meet your business needs

1. IDC White Paper, sponsored by Google Cloud Learning, "To Maximize Your Cloud Benefits, Maximize Training," March 2022, IDC #US48867222.
Quelle: Google Cloud Platform

Best practices for migrating Hadoop to Dataproc by LiveRamp

Abstract

In this blog, we describe our journey to the cloud and share some lessons we learned along the way. Our hope is that you'll find this information helpful as you go through the decision, execution, and completion of your own migration to the cloud.

Introduction

LiveRamp is a data enablement platform powered by identity, centered on privacy, and integrated everywhere. Everything we do centers on making data safe and easy for businesses to use. Our Safe Haven platform powers customer intelligence, engages customers at scale, and creates breakthrough opportunities for business growth. Businesses safely and securely bring us their data for enrichment and use the insights gained to deliver better customer experiences and generate more valuable business outcomes. Our fully interoperable and neutral infrastructure delivers end-to-end addressability for the world's top brands, agencies, and publishers.

Our platforms are designed to handle the variability and surge of the workload and guarantee service-level agreements (SLAs) to businesses. We process petabytes of batch and streaming data daily: we ingest, process (join and enhance), and distribute this data, receiving it from and distributing it to thousands of partners and customers on a daily basis. We maintain the world's largest and most accurate identity graph and work with more than 50 leading demand-side and supply-side platforms.

Our decision to migrate to Google Cloud and Dataproc

As an early adopter of Apache Hadoop, we had a single on-prem production managed Hadoop cluster that was used to store all of LiveRamp's persistent data (HDFS) and run the Hadoop jobs that make up our data pipeline (YARN). The cluster consisted of around 2,500 physical machines with a total of 30 PB of raw storage, ~90,000 vcores, and ~300 TB of memory. Engineering teams managed and ran multiple MapReduce jobs on these clusters. The sheer volume of applications that LiveRamp ran on this cluster caused frequent resource contention issues, not to mention potentially widespread outages if an application was tuned improperly.

Our business was scaling, and we were running into constraints related to data center space and power in our on-premises environment. These constraints restricted our ability to meet our business objectives, so a strategic decision was made to leverage elastic environments and migrate to the cloud. The decision required financial analysis and a detailed understanding of the available options, from do-it-yourself and vendor-managed distributions to cloud-managed services.

LiveRamp's target architecture

We ultimately chose Google Cloud and Dataproc, a managed service for Hadoop, Spark, and other big data frameworks. During the migration we made a few fundamental changes to our Hadoop infrastructure:

Instead of one large persistent cluster managed by a central team, we decentralized cluster ownership to individual teams. This gives teams the flexibility to recreate clusters, perform upgrades, or change configurations as they see fit. It also gives us better cost attribution, a smaller blast radius for errors, and less chance that a rogue job from one team will impact the rest of the workloads.

Persistent data is no longer stored in HDFS on the clusters; it is in Google Cloud Storage, which conveniently served as a drop-in replacement, as GCS is compatible with the same APIs as HDFS.
This means we can delete all the virtual machines that are part of the cluster without losing any data.

We introduced autoscaling clusters to control compute cost and to dramatically decrease request latency. On-premises, you're paying for the machines, so you might as well use them; cloud compute is elastic, so you want to burst when there is demand and scale down when you can. For example, one of our teams runs about 100,000 daily Spark jobs on 12 Dataproc clusters that each independently scale up to 1,000 VMs. This gives that team a current peak capacity of about 256,000 cores. Because the team is bound to its own GCP project inside of a GCP organization, the cost attributed to that team is now very easy to report. The team uses the architecture represented below to distribute the jobs across the clusters, which allows them to bin similar workloads together so that they can be optimized together. A future blog post will cover this workload in detail.

Our approach

Overall migration and post-migration stabilization and optimization of the largest of our workloads took us several years to complete. We broadly broke down the migration into multiple phases.

Initial Proof-Of-Concept

When analyzing solutions for cloud-hosted big data services, any product had to meet our clear acceptance criteria:

1. Cost: Dataproc is not particularly expensive compared to similar alternatives, but our discount with the existing managed Hadoop partner made it expensive. We initially accepted that the cost would remain the same; we did see cost benefits post-migration, after several rounds of optimizations.
2. Features: Some key features (compared to our then-current state) that we were looking for were a built-in autoscaler, ease of creating/updating/deleting clusters, managed big data technologies, etc.
3. Integration with GCP: As we had already decided to move other LiveRamp-owned services to GCP, a big data platform with robust integration with GCP was a must. Basically, we wanted to be able to leverage GCP features (custom VMs, preemptible VMs, etc.) without a lot of effort on our end.
4. Performance: Cluster creation, deletion, scale-up, and scale-down should be fast, allowing teams to iterate and react quickly. These are some rough estimates of how fast the cluster operations should be:
Cluster creation: <15 minutes
Cluster deletion: <15 minutes
Adding 50 nodes: <20 minutes
Removing 200 nodes: <10 minutes
5. Reliability: Bug-free, low-downtime software with concrete SLAs on clusters and a strong commitment to the correct functioning of all of its features.

An initial prototype to better understand Dataproc and Google Cloud helped us prove that the target technologies and architecture would give us reliability and cost improvements. This also fed into our decisions around target architecture and was reviewed by the Google team before we embarked on the migration journey.

Overall migration

Terraform module

Our ultimate goal is to create self-service tooling that allows our data engineers to deploy infrastructure as easily and safely as possible. After defining some best practices around cluster creation and configuration, the central team's first step was to build a Terraform module that could be used by all the teams to create their own clusters.
This module creates a Dataproc cluster along with all supporting buckets, pods, and Datadog monitors:

A Dataproc cluster autoscaling policy that can be customized
A Dataproc cluster with LiveRamp defaults preconfigured
Sidecar applications for recording job metrics from the job history server and for monitoring cluster health
Preconfigured Datadog cluster health monitors for alerting

The Terraform module is composed of multiple supporting modules underneath, so users can also call the supporting modules directly in their project Terraform if such a need arises. The module can be used to create a cluster by just setting parameters like project ID, path to application source (Spark or MapReduce), subnet, VM instance type, autoscaling policy, etc.
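As a rough illustration of what such a module automates (this is a hand-written sketch with placeholder names, not LiveRamp's actual module), creating a comparable autoscaling cluster directly with the gcloud CLI looks something like this:

# Import a reusable autoscaling policy, then create a cluster that uses it
gcloud dataproc autoscaling-policies import spark-batch-policy \
    --region=us-central1 --source=autoscaling-policy.yaml
gcloud dataproc clusters create team-spark-cluster \
    --region=us-central1 \
    --subnet=team-subnet \
    --master-machine-type=n1-standard-8 \
    --worker-machine-type=n1-highmem-16 \
    --num-workers=2 \
    --num-secondary-workers=50 \
    --secondary-worker-type=preemptible \
    --autoscaling-policy=spark-batch-policy

Wrapping commands like these in a module means each team only supplies its own parameters while inheriting the organization's defaults and monitoring.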
Workload migration

Based on our analysis of Dataproc, discussions with the GCP team, and the POC, we used the following criteria:

We prioritized applications that could use preemptible VMs to achieve cost parity with our existing workloads.
We prioritized some of our smaller workloads initially to build momentum within the organization. For example, we left the single workload that accounted for ~40% of our overall batch volume to the end, after we had gained enough experience as an organization.
We combined the migration to Spark with the migration to Dataproc. This initially resulted in some extra dev work but helped reduce the effort for testing and other activities.

Our initial approach was to lift and shift from existing managed providers and MapReduce to Dataproc and Spark. We then focused on optimizing the workloads for cost and reliability.

What's working well

Cost Attribution

As is true with any business, it's important to know where your cost centers are. Moving from a single cluster, made opaque by the number of teams loading work onto it, to GCP's organization/project structure has made cost reporting very simple. The tooling breaks down cost by project, but also allows us to attribute cost to a single cluster via tagging. As we sometimes deploy a single application to a cluster, this helps us make strategic decisions on cost optimizations at an application level very easily.

Flexibility

The programmatic nature of deploying Hadoop clusters in a cloud like GCP dramatically reduces the time and effort involved in making infrastructure changes. LiveRamp's use of a self-service Terraform module means that a data engineering team can very quickly iterate on cluster configurations. This allows a team to create a cluster that is best for their application while also adhering to our security and health monitoring standards. We also get all the benefits of infrastructure as code: highly complicated infrastructure state is version controlled and can be easily recreated and modified in a safe way.

Support

When our teams face issues with services that run on Dataproc, the GCP team is always quick to respond. They work very closely with LiveRamp to develop new features for our needs, and they proactively provide LiveRamp with preview access to new features that help LiveRamp stay ahead of the curve in the data industry.

Cost Savings

We have achieved around 30% cost savings in certain clusters by striking the right balance between on-demand and preemptible VMs (PVMs). The cost savings were a result of our engineers building efficient A/B testing frameworks that helped us run the clusters and jobs in several configurations to arrive at the most reliable, maintainable, and cost-efficient configuration. One of the applications is also now more than 10x faster.

Five lessons learned

The migration was a successful exercise that took about six months to complete across all our teams and applications. While many aspects went really well, we also learned a few things along the way that we hope will help you when planning your own migration journey.

1. Benchmark, benchmark, benchmark

It's always a good idea to benchmark the current platform against the future platform to compare costs and performance. On-premises environments have a fixed capacity, while cloud platforms can scale to meet workload needs. Therefore, it's essential to ensure that the current behavior of the key workload is clearly understood before the migration.

2. Focus on one thing at a time

We initially focused on reliability while remaining cost-neutral during the migration process, and then focused on cost optimization post-migration. Google teams were very helpful and instrumental in identifying cost optimization opportunities.

3. Be aware of alpha and beta products

Although there usually aren't any guarantees of a final feature set when it comes to pre-released products, you can still get a sense of their stability and create a partnership if you have a specific use case. In our case, Enhanced Flexibility Mode was in alpha in April 2019, in beta in August 2020, and released in July 2021. Therefore, it was helpful to check in on the product offering and understand its level of stability so we could carry out risk analysis and decide when we felt comfortable adopting it.

4. Think about quotas

Our Dataproc clusters could support much higher node counts than was possible with our previous vendor. This meant we often had to increase IP space and change quotas, especially as we tried out new VM and disk configurations.

5. Preemptible VMs and committed use discounts (CUDs)

CUDs make compute less expensive, while preemptible VMs make compute significantly less expensive. However, preemptible VMs don't count against your CUD purchases, so make sure you understand the impact on your CUD utilization when you start to migrate to preemptible VMs.

We hope these lessons will help you in your data cloud journey.
Quelle: Google Cloud Platform

Introducing Vision Studio, a UI-based demo interface for Computer Vision

Are you looking to improve the analysis and management of images and videos? The Computer Vision API provides access to advanced algorithms for processing media and returning information. By uploading a media asset or specifying a media asset’s URL, Azure’s Computer Vision algorithms can analyze visual content in different ways based on inputs and user choices, tailored to your business.

Want to try out this service with samples that return data in a quick, straightforward manner, without technical support? We are happy to introduce Vision Studio in preview, a platform of UI-based tools that lets you explore, demo and evaluate features from Computer Vision, regardless of your coding experience. You can start experimenting with the services and learning what they offer, then when ready to deploy, use the available client libraries and REST APIs to get started embedding these services into your own applications.
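When you're ready to move beyond the Studio, the same features are exposed over REST. As a minimal sketch (the resource name, key, and image URL are placeholders for your own values), an Image Analysis request looks roughly like this:

curl -X POST "https://<your-resource>.cognitiveservices.azure.com/vision/v3.2/analyze?visualFeatures=Description,Tags" \
    -H "Ocp-Apim-Subscription-Key: <your-key>" \
    -H "Content-Type: application/json" \
    -d '{"url": "https://example.com/sample-image.jpg"}'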

Overview of Vision Studio

Each of the Computer Vision features has one or more try-it-out experiences in Vision Studio. To use your own images in Vision Studio, you'll need an Azure subscription and a Cognitive Services resource for authentication. Otherwise, you can try Vision Studio without logging in, using our provided set of sample images. These experiences help you quickly test the features using a no-code approach that provides JSON and text responses. In Vision Studio, you can try out the services described below.

What's new to try in Vision Studio

Optical Character Recognition (OCR)

The optical character recognition (OCR) service allows you to extract printed or handwritten text from images, such as photos of street signs and products, as well as from documents—invoices, bills, financial reports, articles, and more. Try it out in Vision Studio using your own images to extract text.
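If you'd rather call the underlying REST endpoint directly, a minimal sketch (resource name, key, and image URL are placeholders) of submitting an image to the Read API looks like this:

curl -X POST "https://<your-resource>.cognitiveservices.azure.com/vision/v3.2/read/analyze" \
    -H "Ocp-Apim-Subscription-Key: <your-key>" \
    -H "Content-Type: application/json" \
    -d '{"url": "https://example.com/street-sign.jpg"}'

The call is asynchronous: the service responds with an Operation-Location header whose URL you poll to retrieve the extracted text.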

Spatial Analysis

The Spatial Analysis service analyzes the presence and movement of people in a video feed and produces events that other systems can respond to. Try it out in Vision Studio using the samples we provide to see how Spatial Analysis can improve retail operations.

Face

The Face service provides AI algorithms that detect, recognize, and analyze human faces in images. Facial recognition software is important in many different scenarios, such as identity verification, touchless access control, and face blurring for privacy. Apply for access to the Face API service to try out identity recognition and verification in Vision Studio.

Image Analysis

The Image Analysis service extracts many visual features from images, such as objects, faces, adult content, and auto-generated text descriptions to improve accessibility. Try it out in Vision Studio using your own images to accurately identify objects, moderate content and caption images.

Responsible AI in Vision

We offer guidance for the responsible use of these capabilities based on Microsoft's AI principles of fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. The Responsible AI Standard sets out our best thinking on how we will build AI systems to uphold these values and earn society's trust. It provides specific, actionable guidance for our teams that goes beyond the high-level principles that have dominated the AI landscape to date. Learn more about Responsible AI in Vision.

Next steps

Go to Vision Studio to begin using features offered by the service.
For more information on the features offered, see the Azure Computer Vision overview.

Quelle: Azure

Microsoft Cost Management updates—October 2022

Whether you're a new student, a thriving startup, or the largest enterprise, you have financial constraints, and you need to know what you're spending where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Microsoft Cost Management comes in.

We're always looking for ways to learn more about your challenges and how Microsoft Cost Management can help you better understand where you're accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

Introducing Azure savings plans.
Group costs by Azure Virtual Desktop host pool.
Azure Advisor score now generally available.
Help shape the future of cost management for cloud services.
Cost optimization using Azure Migrate.
Drive efficiency through automation and AI.
What's new in Cost Management Labs.
New ways to save money with Microsoft Cloud.
New videos and learning opportunities.
Documentation updates.
Join the Microsoft Cost Management team.

Let's dig into the details.

Introducing Azure savings plans

As a cloud provider, we are committed to helping our customers get the most value out of their cloud investment through a comprehensive set of pricing models, offers, and benefits that adapt to customers' unique needs. Today, we are announcing Azure savings plan. With this new pricing offer, customers will have an easy and flexible way to save up to 65 percent on compute costs, compared to pay-as-you-go pricing, in addition to existing offers in market, including Azure Hybrid Benefit and Reservations.

Azure savings plans lower prices on select Azure services in exchange for a commitment to spend a fixed hourly amount for one or three years. You choose whether to pay all upfront or monthly at no extra cost. As you use services such as virtual machines (VMs) and container instances across the world, their usage is covered by the plan at reduced prices, helping you get more value from your cloud budget. During times when usage is above the hourly commitment, you'll be billed at your regular on-demand rates. For example, if you commit to $5 per hour and consume $7 of eligible compute in a given hour, the first $5 of usage is billed at the reduced plan rates and the remaining $2 at on-demand rates.

Azure savings plan is available for the following services today:

Virtual machines
App Service
Azure Functions premium plan
Container instances
Dedicated hosts

To learn more, see Optimize and maximize cloud investment with Azure savings plan for compute.

Group costs by Azure Virtual Desktop host pool

Many organizations use Azure Virtual Desktop to virtualize applications, often as part of their cloud migration strategy. These applications can cover anything from pure virtual machines to SQL databases, web apps, and more. With such a broad set of connected services, you can imagine how difficult it might be to visualize and manage costs. To help streamline this process and deliver a holistic view of costs rolling up to your Azure Virtual Desktop host pools, Cost Management now supports tagging resource dependencies to group them under their logical parent within the cost analysis preview, making it easier than ever to see the cost of your Azure Virtual Desktop workloads.

To get started, simply apply the cm-resource-parent tag to the virtual machines and/or other child resources you want to see rolled up to your host pool. Set the tag value to be the full resource ID of the host pool. Once the tag is applied, all new usage data will start to be grouped under the parent resource.
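As a minimal sketch (the subscription, resource group, VM, and host pool names below are placeholders), the tag can be applied from the Azure CLI like so:

az tag update --operation Merge \
    --resource-id "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vm-name>" \
    --tags cm-resource-parent="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DesktopVirtualization/hostPools/<host-pool-name>"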

For a guided walkthrough, check out the following videos:

The Real Cost Of Cloud Applications (6 minutes)—Walks through how to enable resource parenting manually in the portal.
If Only I Knew THIS About Azure 5 Years Ago (5 minutes)—Walks through how to enable resource parenting via Azure Policy.

To learn more, see Group costs by host pool with Cost Management now in Public Preview for Azure Virtual Desktop. To learn more about the cm-resource-parent tag and how to group resources of any type, see Group related resources in the cost analysis preview.

Azure Advisor score now generally available

Azure Advisor score offers you a way to prioritize the most impactful Advisor recommendations to optimize your deployments using the Azure Well-Architected Framework. Advisor displays your category scores and your overall Advisor score as percentages. A score of 100 percent in any category means all your resources, assessed by Advisor, follow the best practices that Advisor recommends. On the other end of the spectrum, a score of 0 percent means that none of your resources, assessed by Advisor, follow Advisor recommendations.

Advisor score now supports the ability to report on specific workloads using resource tag filters in addition to subscriptions. For example, you can now omit non-production resources from the score calculation. You can also track your progress over time to understand whether you are consistently maintaining healthy Azure deployments.

To learn more, see Optimize Azure workloads by using Advisor Score.

Help shape the future of cost management for cloud services

Are you responsible for managing purchases, cost, and commerce for your cloud services and SaaS (software as a service) products? Do you perform tasks such as acquisition, account management, cost management, billing, and cost optimization for those services? Do your job responsibilities cover scenarios such as understanding cloud solution spending, discovering resources/services needed, acquiring licenses/subscriptions, monitoring spending over time, analyzing resource utilization, updating licenses/subscriptions, and paying invoices?

If so, we are interested in having an hour-long conversation with you. Please send an email to CE_UXR@microsoft.com to highlight your interest and we will get back to you.

Cost optimization using Azure Migrate

During Microsoft Ignite, we highlighted our continued commitment to cost optimization through support for SQL Server assessments prior to migration and modernization using Azure Migrate. Customers can now perform unified, at-scale, agentless discovery and assessment of SQL Servers on Microsoft Hyper-V, bare-metal servers, and infrastructure-as-a-service (IaaS) offerings of other public clouds, such as AWS EC2, in addition to VMware environments. The capability allows customers to analyze existing configurations, performance, and feature compatibility to help with right-sizing and estimating cost. It also checks readiness and blockers for migrating to Azure SQL Managed Instance, SQL Server on Azure Virtual Machines, and Azure SQL Database. All this information can be presented in a single coherent report for easy consumption while reducing cost for customers.

Please see our tech community blog for more details. The blog presents a step-by-step procedure to get started, followed by details on scaling and support. Post-assessment options and more details on related topics are covered as well.

Drive efficiency through automation and AI

This year at Microsoft Ignite we explore how organizations can activate AI and automation directly in their business workflows and empower developers to use those same intelligent building blocks to deliver their own differentiated experiences.

The global pandemic has created unprecedented levels of uncertainty, as well as the need to sense and reshape our physical and digital environments, sometimes in completely new ways. Leaders across industries recognize innovation as the only path forward. Critically, we’ve seen a shift from “innovation for innovation’s sake” toward a desire to lower operating costs, anticipate trends, reduce carbon footprints, and improve customer and employee experiences. We’re calling this commitment to innovation “digital perseverance.”

Read the full blog post to learn about automation opportunities through Microsoft Syntex and Power Platform.

What's new in Cost Management Labs

With Cost Management Labs, you get a sneak peek at what's coming in Microsoft Cost Management and can engage directly with us to share feedback and help us better understand how you use the service, so we can deliver more tuned and optimized experiences. Here are a few features you can see in Cost Management Labs:

Forecast in the cost analysis preview. 
Show your forecast cost for the period at the top of the cost analysis preview. You can opt in using Try preview.
Group related resources in the cost analysis preview. 
Group related resources, like disks under VMs or web apps under App Service plans, by adding a “cm-resource-parent” tag to the child resources with a value of the parent resource ID.
Charts in the cost analysis preview. 
View your daily or monthly cost over time in the cost analysis preview. You can opt in using Try Preview.
View cost for your resources. 
The cost for your resources is one click away from the resource overview in the preview portal. Just click View cost to quickly jump to the cost of that resource.
Change scope from the menu. 
Change scope from the menu for quicker navigation. You can opt in using Try Preview.

Of course, that's not all. Every change in Microsoft Cost Management is available in Cost Management Labs a week before it's in the full Azure portal or Microsoft 365 admin center. We're eager to hear your thoughts and understand what you'd like to see next. What are you waiting for? Try Cost Management Labs today.

New ways to save money in the Microsoft Cloud

New and updated general availability offers:

Microsoft Teams Premium.
Reserved capacity for Azure Backup Storage.
Azure Hybrid Benefit for AKS and Azure Stack HCI.
Azure Monitor Logs capabilities to add value and lower costs.
Zone-redundant storage support by Azure Backup.
Stream Analytics in Qatar Central.

New previews:

Include standard and Spot VMs in the same Virtual Machine Scale Set.
Azure Firewall Basic.
Azure NetApp Files backup in Southeast Asia and UK South.

New videos and learning opportunities

If you manage related resources and are looking for a simpler way to view costs across resources, you’ll want to check out these new videos:

The Real Cost Of Cloud Applications (6 minutes).
If Only I Knew THIS About Azure 5 Years Ago (5 minutes).

Follow the Microsoft Cost Management YouTube channel to stay in the loop with new videos as they’re released and let us know what you'd like to see next.

Want a more guided experience? Start with Control Azure spending and manage bills with Microsoft Cost Management.

Documentation updates

Here are two documentation updates you might be interested in if you use reservations or are interested in more flexible ways to save money in Azure:

New: Save with Azure savings plans.
Updated: Self-service exchanges and refunds for Azure Reservations.

Want to keep an eye on all documentation updates? Check out the Cost Management and Billing documentation change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request. You can also submit a GitHub issue. We welcome and appreciate all contributions!

Join the Microsoft Cost Management team

Are you excited about helping customers and partners better manage and optimize costs? We're looking for passionate, dedicated, and exceptional people to help build best-in-class cloud platforms and experiences to enable exactly that. If you have experience with big data infrastructure, reliable and scalable APIs, or rich and engaging user experiences, you'll find no better challenge than serving every Microsoft customer and partner in one of the most critical areas for driving cloud success.

Watch the video below to learn more about the Microsoft Cost Management team:

Join our team.

What's next?

These are just a few of the big updates from last month. Don't forget to check out the previous Microsoft Cost Management updates. We're always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow @MSCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. You can also share ideas and vote up others in the Cost Management feedback forum.

We know these are trying times for everyone. Best wishes from the Microsoft Cost Management team. Stay safe and stay healthy.
Quelle: Azure

How to Use the Node Docker Official Image

Topping Stack Overflow’s 2022 list of most popular web frameworks and technologies, Node.js continues to grow as a critical MERN stack component. And since Node applications are written in JavaScript — the world’s leading programming language — many developers will feel right at home using it. We introduced the Node Docker Official Image (DOI) due to Node.js’ popularity and to solve some common development challenges. 

The Node.js Foundation describes Node as “an open-source, cross-platform JavaScript runtime environment.” Developers use it to create performant, scalable server and networking applications. Despite Node’s advantages, building and deploying cross-platform services can be challenging with traditional workflows.

Conversely, the Node Docker Official Image accelerates and simplifies your development processes while allowing additional configuration. You can deploy containerized Node applications in minutes. Throughout this guide, we’ll discuss the Node Official Image, how to use it, and some valuable best practices. 

In this tutorial:

What is the Node Docker Official Image?
Node.js use cases
About Docker Official Images
How to run Node in Docker
Enter a quick pull command
Confirm that Node is functional
Create your Node image from a Dockerfile
Optimize your Node image
Using Docker Compose
Running a simple Node script
Docker Node best practices
Get started with Node today

What is the Node Docker Official Image?

The Node Docker Official Image contains all source code, core dependencies, tools, and libraries your application needs to work correctly. 

This image supports multiple CPU architectures like amd64, arm32v6, arm32v7, arm64v8, ppc64le, and s390x. You can also choose between multiple tags (or image versions) for any project. Choosing a pinned version like node:19.0.0-slim locks you into a stable, streamlined version of Node.js. 

Node.js use cases

Node.js lets developers write server-side code in JavaScript. The runtime environment then transforms this JavaScript into hardware-friendly machine code. As a result, the CPU can process these low-level instructions. 

Node is event-driven (through user actions), non-blocking, and known for being lightweight while simultaneously handling numerous operations. As a result, you can use the Node DOI to create the following: 

Web server applications
Networking applications

Node works well here because it supports HTTP requests and socket connections. An asynchronous I/O library lets Node containers read and write various system files that support applications. 

You could use the Node DOI to build streaming apps, single-page applications, chat apps, to-do list apps, and microservices. Or — if you’re like Community All-Hands’ Kathleen Juell — you could use Node.js to help serve static content. Containerized Node will shine in any scenario dictated by numerous client-server requests. 

Docker Captain Bret Fisher also offered his thoughts on Dockerized Node.js during DockerCon 2022. He discussed best practices for managing Node.js projects while diving into optimization. 

Lastly, we also maintain some Node sample applications within our GitHub Awesome Compose library. You can learn to use Node with different databases or even incorporate an NGINX proxy. 

About Docker Official Images

We’ve curated the Node Docker Official Image as one of many core container images on Docker Hub. The Node.js community maintains this image alongside members of the Docker community. 

Like other Docker Official Images, the Node DOI offers a common starting point for Node and JavaScript developers. We also maintain an evolving list of Node best practices while regularly pushing critical security updates. This distinguishes Docker Official Images from alternatives on Docker Hub. 

How to run Node in Docker

Before getting started, download the latest Docker Desktop release and install it. Docker Desktop includes the Docker CLI, Docker Compose, and additional core development tools. The Docker Dashboard (Docker Desktop’s UI component) will help you manage images and containers. 

You’re then ready to Dockerize Node!

Enter a quick pull command

Pulling the Node DOI is the quickest way to begin. Enter docker pull node in your terminal to grab the default latest Node version from Docker Hub. You can readily use this tag for testing or local development. But, a pinned version might be safer for production use. Here’s how the pull process works: 
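Roughly, the CLI output looks like the following (the layer list and digest vary by release and platform):

$ docker pull node
Using default tag: latest
latest: Pulling from library/node
<layer download progress lines appear here>
Digest: sha256:<image-digest>
Status: Downloaded newer image for node:latest
docker.io/library/node:latest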

Your CLI will display a status message once it’s done. You can also double-check this within Docker Desktop! Click the Images tab on the left sidebar and scan through your listed images. Docker Desktop will display your node image:

Your node:latest image is a hefty 942.33 MB. If you inspect your Node image’s contents using docker sbom node, you’ll see that it currently includes 623 packages. The Node image contains numerous dependencies and modules that support Node and various applications. 

However, your final Node image can be much slimmer! We’ll tackle optimization while discussing Dockerfiles. After all, the Node DOI has 24 supported tags spread amongst four major Node versions. Each has its own impact on image size.  

Confirm that Node is functional

Want to run your new image as a container? Hover over your listed node image and click the blue “Run” button. In this state, your Node container will produce some minimal log entries and run continuously in case requests come through. 

Exit this container before moving on by clicking the square “stop” button in Docker Desktop or by entering docker stop YourContainerName in the CLI. 

Create your Node image from a Dockerfile

Building from a Dockerfile gives you ultimate control over image composition, configuration, and your overall application. However, Node requires very little to function properly. Here’s a barebones Dockerfile to get you up and running (using a pinned, Debian-based image version): 

FROM node:19-bullseye

Docker will build your image from your chosen Node version. 

It’s safest to use node:19-bullseye because this image supports numerous use cases. This version is also stable and prevents you from pulling in new breaking changes, which sometimes happens with latest tags. 

To build your image from a Dockerfile, run the docker build -t my-nodejs-app . command. You can then run your new image by entering docker run -it --rm --name my-running-app my-nodejs-app.

Optimize your Node image

The complete version of Node often includes extra packages that weigh your application down. This leaves plenty of room for optimization. 

For example, removing unneeded development dependencies reduces image bloat. You can do this by adding a RUN instruction to our previous file: 

FROM node:19-bullseye

RUN npm prune --production

This approach is pretty granular. It also relies on you knowing exactly what you do and don’t need for your project. Alternatively, switching to a slim image build offers the quickest results. You’ll encounter similar caveats but spend less time writing individual Dockerfile instructions. The easiest approach is to replace node:19-bullseye with its node:19-bullseye-slim counterpart. This alone shrinks image size by 75%. 

You can even pull node:19-alpine to save more disk space. However, this tag contains even fewer dependencies and isn’t officially supported by the Node.js Foundation. Keep this in mind while developing. 

Finally, multi-stage builds lead to smaller image sizes. These let you copy only what you need between build stages to combat bloat. 
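To sketch that pattern (this assumes, hypothetically, that your package.json defines a build script which compiles the app into dist/ — adjust both stages to match your project):

FROM node:19-bullseye AS build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci
COPY . .
# Assumes a "build" script in package.json that emits dist/
RUN npm run build

FROM node:19-bullseye-slim
WORKDIR /usr/src/app
ENV NODE_ENV=production
COPY package*.json ./
RUN npm ci --production
# Copy only the compiled output from the build stage
COPY --from=build /usr/src/app/dist ./dist
USER node
CMD ["node", "dist/index.js"]

The final image contains only the slim base, production dependencies, and the built artifacts; dev dependencies and source files stay behind in the build stage.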

Using Docker Compose

Say you have a start script, an existing package.json file, and (possibly) want to operate Node alongside other services. Spinning up Node containers with Docker Compose can be pretty handy in these situations.

Here’s a sample docker-compose.yml file: 

services:
  node:
    image: "node:19-bullseye"
    user: "node"
    working_dir: /home/node/app
    environment:
      - NODE_ENV=production
    volumes:
      - ./:/home/node/app
    ports:
      - "8888:8888"
    command: "npm start"

You’ll see some parameters that we didn’t specify earlier in our Dockerfile. For example, the user parameter lets you run your container as an unprivileged user. This follows the principle of least privilege. 

To jumpstart your Node container, simply enter the docker compose up -d command. Like before, you can verify that Node is running within Docker Desktop. The docker container ls --all command also displays all existing containers within the CLI.  

Running a simple Node script

Your project doesn’t always need a  Dockerfile. In these cases, you can directly leverage the Node DOI with the following command: 

docker run -it --rm --name my-running-script -v "$PWD":/usr/src/app -w /usr/src/app node:19-bullseye node your-daemon-or-script.js

This simplistic approach is ideal for single-file projects.

Docker Node best practices

It’s important to get the most out of Docker and the Node Official Image. We’ve briefly outlined the benefits of running as a non-root node user, but here are some useful tips for developing with Node: 

Easily pass secrets and other runtime configurations to your application by setting NODE_ENV to production, as seen here: -e "NODE_ENV=production".
Place any installed, global Node dependencies into a non-root user directory.
Remember to manually install curl if using an alpine image tag, since it's not included by default.
Wrap your Node process in an init system with the --init flag, so it can successfully run as PID 1.
Set memory limitations for your containers that run on the same host.
Include the package.json start command directly within your Dockerfile, to reduce active container processes and let Node properly receive exit signals.
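As a rough sketch combining several of these tips in one command (reusing the my-nodejs-app image built earlier in this guide):

docker run -d --init --memory=512m -e "NODE_ENV=production" --name my-running-app my-nodejs-app

Here --init wraps the Node process in a minimal init so it runs correctly as PID 1, --memory caps the container's RAM on a shared host, and -e sets the production environment.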

This isn’t an exhaustive list. To view more details, check out our best practices documentation.

Get started with Node today

As you’ve seen, spinning up a Node container from the Node Docker Official Image is quick and requires just a few steps depending on your workflow. You’ll no longer need to worry about platform-specific builds or get bogged down with complex development processes. 

We’ve also covered many ways to help your Node builds perform better. Check out our top containerization tips article to learn even more about optimization and security. 

Ready to get started? Swing by Docker Hub and pull our Node image to start experimenting. In no time, you'll have your server and networking applications up and running. You can also learn more on our GitHub README page.
Quelle: https://blog.docker.com/feed/

October 2022 Newsletter

Going “Remocal” with Docker, Telepresence, & Kubernetes
Gone are the days of locally running and testing entire applications on your laptop before pushing to production. Join us with Ambassador on a tour of coding, testing, and shipping microservices using remote-to-local tools and techniques.

Register Now

News you can use and monthly highlights:
How did I shrink my NextJS Docker image by 90% – Learn how to improve the development and production lifecycle by optimizing your NextJS Docker images.
How To Create A Production Image For A Node.js + TypeScript App Using Docker Multi-Stage Builds – Keep your NodeJS Docker container images slim by using multistage builds to create TypeScript-based apps.
Oracle SQLDeveloper Docker Extension – Discover the Extension that lets you run the Oracle SQLDeveloper Web tool and connect with Oracle XE 21c or other RDBMS instances.
React and .NET Core 6.0 Sample Project with Docker – Learn how to use CRUD operations in ASP.NET Core 6.0 WEP API with the Entity Framework Core Code First approach.
Deploying FusionAuth + Docker on Fly.io – Find the perfect guide to self-hosting FusionAuth for timesaving authentication and access management using Docker.
How to containerize your ASP.NET Core application and SQL Server with Docker – Learn how to deploy a Dockerized .NET Web API application and connect it to a SQL Server container.

Introducing Hardened Docker Desktop
Looking for a better, more secure way to manage your dev environments? Our new security model, Hardened Docker Desktop, helps you cover all the bases!

Learn More

State of Application Development Survey
We're looking for feedback from developers like you. Take our survey for a chance to win prizes!

Take the Survey

Docker+Wasm Tech Preview
At KubeCon North America, we announced the Docker+Wasm Technical Preview. This lighter, faster alternative to Linux containers lets developers build Wasm apps with the same ease as container apps.

Learn More

The latest tips and tricks from the community:

Creating Kubernetes Extensions in Docker Desktop
Simplified Deployment of Local Container Images to OpenShift
9 Tips for Containerizing Your Node.js Application
Adding Docker Compose Logs to Your CI Pipeline Is Worth It
Live Reload in Rust with Cargo Watch and Docker
Enabling Microservices using Docker and Docker-Compose

October Extensions Roundup: CI on Your Laptop and Hacktoberfest!
Find out what’s new in the Docker Extension Marketplace! Get CI on your laptop, find new tools from the open source community, and use categories to find the perfect Extension.

Learn More

Educational content created by the experts at Docker:

Security Advisory: CVE-2022-42889 “Text4Shell”
How to Use the Postgres Docker Official Image
How to Fix and Debug Docker Containers Like a Superhero
Developer Engagement in the Remote Work Era with RedMonk and Miva

Docker Captain: Sebastien Flochlay
Sebastien discovered Docker back in 2016 and has been a huge fan ever since. Find out why his favorite command is docker run and the importance of writing Dockerfiles — the right way.

Meet the Captain

Subscribe to our newsletter to get the latest news, blogs, tips, how-to guides, best practices, and more from Docker experts sent directly in your inbox once a month.

Quelle: https://blog.docker.com/feed/

AWS Snowball Edge Compute Optimized offers twice the compute capacity and now features all-SSD NVMe storage

AWS has announced an improved Snowball Edge Compute Optimized device with expanded options for compute, memory, and storage. The AWS Snowball Edge Compute Optimized device has doubled its compute capacity to 104 vCPUs, doubled its memory capacity to 416 GB of RAM, and is now fully SSD-based, with 28 TB of NVMe storage. AWS Snowball Edge Compute Optimized is a secure, rugged device that brings AWS compute and storage capabilities such as Amazon EC2, Amazon EBS, Amazon S3, AWS IoT Greengrass, AWS Lambda functions, and AWS IAM to your rugged edge environments.
Quelle: aws.amazon.com

PostgreSQL 15 Release Candidate 2 is now available in the Amazon RDS Database Preview Environment

Amazon RDS for PostgreSQL Release Candidate 2 (RC2) is now available in the Amazon RDS Database Preview Environment, allowing you to test the PostgreSQL 15 release candidate on Amazon RDS for PostgreSQL. PostgreSQL 15 RC2 can be deployed in the Amazon RDS Database Preview Environment for development and testing purposes, without having to worry about installing, provisioning, and managing the database. 
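As a minimal sketch of creating a test instance from the AWS CLI (the preview environment is reached through its own regional endpoint; the endpoint URL, engine version string, and all identifiers below are assumptions to adapt to your account):

aws rds create-db-instance \
    --region us-east-2 \
    --endpoint-url https://rds-preview.us-east-2.amazonaws.com \
    --db-instance-identifier pg15-rc2-test \
    --engine postgres \
    --engine-version 15.0 \
    --db-instance-class db.m5.large \
    --allocated-storage 100 \
    --master-username postgres \
    --master-user-password '<choose-a-password>'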
Quelle: aws.amazon.com