Package management for Debian/Ubuntu operating systems on Google Cloud

Most customers operate in restrictive environments with limited egress connectivity to the internet. As a result, they often invest in third-party tools such as JFrog Artifactory or Nexus to store operating system packages and libraries. There is a pressing need to download these dependencies without going out to the internet, and to avoid investing in a third-party tool when there are budgetary or time constraints. In this blog, we describe how the packages.cloud.google.com subdomain works and how it starts to address these challenges. This solution focuses on downloading Debian/Ubuntu packages from the Google-managed repositories; note, however, that the repo does not contain packages for popular programming languages such as Python or JavaScript.

So let's get started.

Apt package manager

If you create a Linux VM on Google Cloud with a Debian or Ubuntu operating system, one of the first commands you have to run before downloading a package is the one that downloads package information from all configured sources:

sudo apt-get update

Apt is a package management tool that downloads packages from one or more software repositories (sources) and installs them onto your computer. A repository is generally a network server, such as the official Debian stable repository. The main Apt sources configuration file is /etc/apt/sources.list. To add custom sources, creating separate files under /etc/apt/sources.list.d/ is preferred.

Understanding the configuration file

Let's start by looking at the files in the /etc/apt directory and the /etc/apt/sources.list file. The /etc/apt/sources.list file contains multiple entries that notably show the archive type, repository URL, distribution, and component. For more details on each attribute for the Debian distribution, please refer to this link.

Archive type: The first word on each line, deb or deb-src, indicates the type of archive. deb indicates that the archive contains binary packages (deb), the pre-compiled packages that we normally use. deb-src indicates source packages, which are the original program sources plus the Debian control file (.dsc) and the diff.gz containing the changes needed for packaging the program. Source packages provide you with all of the files necessary to compile or otherwise build the desired piece of software.

Repository URL: The next entry on the line is the URL of the repository that you want to download the packages from. The main list of Debian repository mirrors is located here.

Distribution: The distribution can be either the release code name / alias (stretch, buster, bullseye, bookworm, sid) or the release class (oldoldstable, oldstable, stable, testing, unstable).

Component: main consists of DFSG-compliant packages, which do not rely on software outside this area to operate. These are the only packages considered part of the Debian distribution.

Google startup process

Let's see what Google adds under the sources.list.d directory as part of the startup process. There are a couple of files, and both contain links to Google-managed repositories (packages.cloud.google.com). However, the repositories that are added by default will only help us download gcloud CLI components such as google-cloud-sdk-datalab, google-cloud-sdk-spanner-emulator, and kubectl.

For example, if you want to learn which repository a given package would be downloaded from, apt can show which repository and version you would be directed to.
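One common way to check this is apt-cache policy. Here htop (the package installed later in this demo) is just an example; the output lists the candidate version and the configured repositories, with their priorities, that could supply it:

apt-cache policy htop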
The screenshots below show that apt will try to look for those packages in the repositories that are configured by default in gce_sdk.list and google-cloud.list.

But if we run a sudo apt-get update command without egress connectivity to the internet, it will fail: when apt tries to connect to the external Debian repository that is configured by default in the /etc/apt/sources.list file, it times out.

Packages.cloud.google.com – Apt mirror repo

packages.cloud.google.com is a repository that Google maintains; it hosts mirror repositories for popular Debian/Ubuntu releases. See the table below for the mapping between the OS release code names and the OS versions.

Please note that Ubuntu repositories are subdivided into base (no suffix), updates (-updates), security (-security), backports (-backports), universe (-universe), security universe (-security-universe), and updates universe (-updates-universe). This subdivision has to be followed when configuring repositories on Ubuntu instances.

Packages.cloud.google.com – demo

For the rest of this demo, I will be working out of a Debian VM. I will verify which version I am running and modify the apt sources accordingly to point to the right URLs. The approach shown in the subsequent steps can be extended to Ubuntu as long as you point to the appropriate repository URLs, following the Ubuntu-specific repository structure described earlier.

I will create a new file, google-packages.list, under the /etc/apt/sources.list.d directory that points to the appropriate repository URLs based on the semantics explained in the sources.list format:

cat << EOF > google-packages.list
deb https://packages.cloud.google.com/mirror/cloud-apt/bullseye bullseye main
deb https://packages.cloud.google.com/mirror/cloud-apt/bullseye-security bullseye-security main
deb https://packages.cloud.google.com/mirror/cloud-apt/bullseye-updates bullseye-updates main
EOF

Now that I have configured the alternate repository, let's test it by installing a Debian package, htop. Since I still have a file at /etc/apt/sources.list that refers to the Debian mirrors, our update will still check that location first before falling back on the new packages.cloud.google.com mirror repositories.

When multiple Apt repositories are enabled, a package can exist in several of them. To know which one should be installed, Apt assigns priorities to packages. The default is 500. If the packages have the same priority, the package with the higher version number (most recent) wins. If the packages have different priorities, the one with the higher priority wins. Once our package is installed into the local OS, we see the priority (100) used for locally installed packages.

Prerequisites

To utilize packages.cloud.google.com, there are also networking configurations that you need to set up in Google Cloud, illustrated below.

1. Ensure that the subnet where the VM is created has Private Google Access enabled.
2. Create a firewall rule that allows egress to the private VIP. packages.cloud.google.com is only supported by the Private Google API endpoint.
3. Create DNS records to resolve the packages.cloud.google.com domain (see the sketch after this list).
4. Create a route to the private Google API endpoints. This is necessary if you do not have the default route to the internet (0.0.0.0/0).
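Here is one possible sketch of step 3 using Cloud DNS. The zone name (google-packages) and VPC network (my-vpc) are placeholders, and 199.36.153.8/30 is the documented address range for the private.googleapis.com endpoint; depending on your gcloud version, you may need the record-set transaction workflow instead:

# Create a private zone so packages.cloud.google.com resolves to the
# Private Google Access VIPs instead of public addresses.
gcloud dns managed-zones create google-packages \
    --description="Private zone for packages.cloud.google.com" \
    --dns-name="packages.cloud.google.com." \
    --visibility=private \
    --networks=my-vpc

# Point the domain at the private.googleapis.com address range.
gcloud dns record-sets create packages.cloud.google.com. \
    --zone=google-packages \
    --type=A --ttl=300 \
    --rrdatas=199.36.153.8,199.36.153.9,199.36.153.10,199.36.153.11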
Summary

In this blog post, we provided an overview of the packages.cloud.google.com subdomain and how it can be used to download software packages for Debian and Ubuntu distributions. We also covered the networking requirements that are needed to make it work in a tightly controlled environment. To view the contents of the repository, please refer to this link.

Related Article

VM Manager simplifies compliance with OS configuration management (Preview)

A new version of OS configuration management within VM Manager makes it easier to manage large fleets of Compute Engine virtual machines.

Read Article
Source: Google Cloud Platform

Pride Month: Q&A with Beepboop founders about more creative, effective approaches to learning a new language

June is Pride Month—a time for us to come together to bring visibility and belonging, and to celebrate the diverse set of experiences, perspectives, and identities of the LGBTQ+ community. Over the next few weeks, Lindsey Scrase, Managing Director, Global SMB and Startups at Google Cloud, will showcase conversations with startups led by LGBTQ+ founders and how they use Google Cloud to grow their businesses. This feature highlights Beepboop and its co-founders, Devon Saliga, CEO, and Matt Douglass, CTO.

Lindsey: Thank you so much Devon and Matt for taking the time to speak with me today. Let's start by learning more about your company – what inspired you to found Beepboop?

Devon Saliga: As a closeted gay kid, language learning provided me an escape to another world. This passion led me to Dartmouth College's Rassias Center, which is known for developing innovative language drills in which a highly trained instructor guides full classrooms of students through rapid-paced, round-robin speaking exercises proven to be 40% more effective than traditional classroom techniques in helping students gain fluency. It was the opposite of boring lectures and rote memorization of vocab and grammar. Through these methods I learned Japanese, which opened up a world of opportunity, helping me to get my first job at Goldman Sachs. Sadly, not everyone has access to this type of language education. That's why we created Beepboop, where our technology gives all language teachers the ability to run these effective exercises in both their online and in-person classrooms.

Lindsey: What an incredible way to learn a new language, and clearly it's quite effective. My wife is Austrian and I've been slowly trying to learn German, and I agree that conversation and engagement with the language is vastly more effective than trying to memorize! So from there, where did the name Beepboop come from, and what makes the company unique?

Devon: Our classes are like massive multiplayer games of language-learning hot potato where spoken challenges are passed from student to student. Students can hop into ongoing live classes without any scheduling and start playing. Speaking a language can be intimidating, so our instructors say "beepboop" to let students know when they've made a mistake. It's a lighthearted word that puts a smile on everyone's face.

Lindsey: Maybe I'll try out using "beepboop" with my kids when they make a mistake, it has a nice ring to it! So what languages do you teach on Beepboop?

Devon: Our go-to-market languages are English and Spanish, and our curriculum is geared toward employers who want to recruit, retain, and upskill their workforce through language learning. Globally, over $9 billion is spent on business English education alone. It's a gigantic market, and our innovative group-based learning approach enables companies to offer Beepboop to more of their employees for less. We have over 100,000 students and have carved out some really interesting niches for ourselves, like medical Spanish.

Devon Saliga, CEO & Co-Founder of Beepboop

Lindsey: Clearly there is a market for this and it's an incredible opportunity to have an impact in helping so many people. What major challenges did your team overcome in getting to where you are today?

Matt Douglass: We adapted in-person language drills to support remote learning and developed unique techniques to quickly train new instructors. We initially struggled with lagged video and frozen screens because many students worldwide don't have fast internet.
In response, we built an inclusive, audio-only teaching platform that enables everyone to comfortably participate in conversational drills—without worrying about slow connections or how they look on camera.

Lindsey: Ok, so it sounds like having the right platform and technology has been critical to supporting you in scaling and pivoting when needed. Why did Beepboop standardize on Google Cloud?

Devon: Before I partnered with an amazing CTO like Matt, I had to run product and engineering while focusing on business development and creating a compelling online curriculum. I honestly didn't have the time or technical skills to create a minimum viable product (MVP) from the ground up. Fortunately, Google Cloud offered easy-to-use tools, APIs, and integrated solutions such as Firebase that enabled my small team of Dartmouth students to code an alpha version of Beepboop in just three months. The Startup Success Managers also provided much-needed technical guidance and credits so that we could affordably trial different solutions.

Lindsey: You're not alone, and we hear time and time again from startups who appreciate the simplicity and speed of going from concept to MVP with Firebase and our tools and APIs. I'm so glad that was your experience and that our Startup Success team provided the support you needed to get going quickly. From there, how have Google Cloud solutions helped Beepboop grow?

Matt: Beepboop now supports massive classes of up to 200 students with a customized WebRTC platform built on the highly secure Google Cloud. We use Firestore for all data that doesn't require a real-time lookup and Firebase for our React apps. We also leverage Firebase Realtime Database to automatically message teachers when students need extra help, alert students and teachers when their internet connection slows, and even power peer-to-peer language-learning games that run autonomously without live teachers! Right now our instructors are fully responsible for tracking student performance in real time and then adjusting the intensity and the pace of their classes accordingly. This becomes more and more challenging with each additional student in a class. We're aiming to simplify the process of teaching while giving our students more corrective feedback by using Google Cloud AI and machine learning products to develop deep learning algorithms that automatically detect slight mispronunciations and monitor the melodic intonations of Spanish and English.

Lindsey: It's incredible to see what you've already done in such a short period of time and also your vision of what's next. Before we switch topics, can you share what excites you the most about Beepboop?

Devon: Seeing Beepboop positively disrupt the education industry and democratize foreign language instruction. We hear every day from our students how their language skills got them a promotion or how, after just a few months on our platform, they can now confidently interact with native speakers. Our high success rate speaks for itself, pun intended.

Matt: It's exciting to see how the technology behind Beepboop creates safe and supportive spaces for our students and instructors. Beepboop automatically mutes and unmutes microphones so that every student can equally participate in our conversational drills.
Beepboop also gives students a chance to correct their mistakes—and alerts teachers if people need extra help or time to answer a question.

Matt Douglass, CTO & Co-Founder of Beepboop

Lindsey: One of the most rewarding parts of my job is seeing how companies are using our technology to drive incredible impact in the world, and this is an amazing example of doing just that! Thank you for sharing more about Beepboop and your vision for the future. Now, given it's Pride Month, let's shift gears. As a member of the LGBTQ+ community, I am thrilled to see increasing visibility of LGBTQ+ founders. Can you talk about how being part of the LGBTQ+ community impacted your and Beepboop's success?

Matt: Even before the days of Harvey Milk, the LGBTQ+ community always found creative ways to work together to further important causes. I've experienced the same support as an LGBTQ+ entrepreneur. Working through our community, I've met many other founders and shared ideas and strategies we're incorporating to help Beepboop succeed. We also connected with StartOut, an organization focused on building a world where every LGBTQ+ entrepreneur has equal access to lead, succeed, and shape the workforce of the future. StartOut gives us further networking opportunities too, which is why I'm talking to you today.

Devon: As part of StartOut, we joined their Growth Lab, a six-month accelerator that provides strategic guidance and mentorship. It was a game changer for us. Now we're connected to tons of investors and are part of a dynamic and diverse community that continues to be supportive, understanding, and encouraging.

Lindsey: I love seeing the community coming together to provide support—which, as you mentioned, is such a cornerstone of LGBTQ+ history. Do you have any advice for others in the LGBTQ+ community looking to start and grow their own companies?

Devon: We've learned it takes a lot of networking, listening, and collaboration to build a successful company. Don't be afraid to ask for help from family, friends, and your community—and don't be afraid to ask yourself tough questions about what you're doing and change course if needed. There are many organizations dedicated to helping the LGBTQ+ high tech community, including StartOut, Serif, and Gaingels. Google for Small Business also offers tools and resources for LGBTQ-friendly businesses, such as LGBTQ+ friendly tags, transgender safe space attributes for business profiles, and tips to create more inclusive and innovative workplaces.

Matt: Giving back to the LGBTQ+ community by mentoring new startups is equally important. Sharing your successes and failures can help others avoid similar mistakes and bring their ideas to market faster. We're fortunate that the LGBTQ+ leaders—especially the startup organizations and founder networks—have been extremely supportive of Beepboop.

Lindsey: Thank you so much for sharing those insights and resources, and I'll add a couple of others – Lesbians Who Tech and Out in Tech. I also want to thank you for all you're doing to be visible and give back. I have no doubt you're an inspiration for so many founders. So in closing, Devon, what are the next steps for Beepboop?

Devon: We look forward to working more with partners such as Google for Startups and StartOut to further democratize language learning and teach students around the world how to confidently speak a new language.

Lindsey: Thank you.
And we look forward to partnering with you to do just that!

From left in back: Devon Saliga, CEO & Co-Founder; Matt Douglass, CTO & Co-Founder; Alejandra Molina, Director of Marketing & Co-Founder. Front: Lucas Ogden-Davis, Founding Engineer

If you want to learn more about how Google Cloud can help your startup, visit our page here to get more information about our program, and sign up for our communications to get a look at our community activities, digital events, special offers, and more.
Source: Google Cloud Platform

GKE release channels: Balancing innovation and speed of change, now with more granular controls

If you run your business on Kubernetes, you know how important it is to perform regular patching and upgrades to maintain a healthy environment. At Google Cloud, we automatically upgrade the Google Kubernetes Engine (GKE) cluster control plane, and if you enable node auto-upgrade, we also automatically upgrade and patch your cluster nodes. Moreover, you can subscribe your cluster to a release channel that meets your constraints and needs — rapid, regular, or stable. This is just one of the ways that GKE supports enterprises running production workloads, and it is broadly used by GKE customers as part of their continuous upgrade strategy. For enterprises, release channels provide the level of predictability needed for advanced planning, and the flexibility to orchestrate custom workflows automatically when a change is scheduled (e.g., informing their DevOps team when a new security patch is available).

While automating upgrades with release channels simplifies keeping track of Kubernetes versions and OSS compatibility, you may want to upgrade your cluster at specific times, for business continuity or quality assurance reasons. You may also want to control the scope of the upgrades — apply just patches, or avoid minor releases — allowed on your cluster. This is critical, especially when you feel that an upgrade requires more qualification, testing, or preparation before it can be rolled out to production.

We recently enhanced the maintenance windows that you can set in GKE release channels. Previously, maintenance windows allowed you to specify "no upgrades" to the control plane or nodes for up to 30 days. Now, the upgrade-scope exclusion allows you to control and limit the scope of upgrades for up to 180 days, or the end-of-life date, whichever comes first. Likewise, you can preclude minor upgrades — or both minor and node upgrades — with two new modes, "no_minor_upgrades" and "no_minor_or_node_upgrades". In both cases, you can "pin" your cluster to a specific minor Kubernetes version (say, 1.21) for a prolonged period of up to six months.

GKE release channels in action

German food-delivery network Delivery Hero is one GKE customer that recently began using release channels. With 791 million orders processed in the third quarter of 2021, Delivery Hero initially chose to eliminate potential disruptions to its customers by relying on a manual process to control the timing of changes, and reduce the risk that an untested update might impact availability. But this was not an ideal solution: constantly monitoring the Kubernetes release schedule, tracking version skew compatibility, and applying security patches was cumbersome.

In an effort to balance risk mitigation and operational efficiency, Delivery Hero decided to subscribe their GKE clusters to a release channel. But in order to do it even more safely, they also defined the scope of auto-upgrades to include only patch versions, and to postpone minor upgrades. This way their GKE clusters are patched automatically to ensure security and compliance, but they hold back on minor version upgrades until these can be internally tested and qualified.

"Before the option to control upgrade scope, and especially the ability to postpone minor upgrades up to 6 months, we were struggling to align our qualification timeline with the cadence of Kubernetes OSS releases, especially with the API deprecations in recent releases," said Kumar Saurabh Sinha, Engineering Manager (Platform & SRE) at Delivery Hero. "With upgrade scope exclusions, we managed to safely migrate our clusters and subscribe them to release channels while still having the ability to mitigate the risk of untested minor releases."

Get started with GKE release channels today. When you create a new GKE cluster, it is automatically subscribed to the 'regular' release channel by default. You can also migrate existing clusters onto release channels from the Google Cloud Console or the command line. Read more here.

For example, if you want to subscribe a cluster to a release channel and also avoid minor Kubernetes upgrades for three months, you can follow these steps (a sketch of the corresponding gcloud commands follows the list):

1. Ensure that clusters are running a version supported by the channel.
2. Exclude auto-upgrade for 24 hours. This is an optional safety step to avoid unplanned upgrades immediately after subscribing the cluster to a channel.
3. Subscribe the clusters to your desired release channel.
4. Set the upgrade scope to no_minor_upgrades, allowing only patch versions to be applied to the cluster, while keeping the cluster on the same minor release.
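As a rough sketch of steps 3 and 4 (the cluster name, zone, exclusion name, and dates below are illustrative placeholders; check the gcloud reference for the exact flags available in your version):

# Step 3: subscribe the cluster to the regular release channel.
gcloud container clusters update my-cluster \
    --zone us-central1-a \
    --release-channel regular

# Step 4: block minor upgrades for roughly three months,
# allowing only patch releases during the exclusion window.
gcloud container clusters update my-cluster \
    --zone us-central1-a \
    --add-maintenance-exclusion-name skip-minors \
    --add-maintenance-exclusion-start 2022-07-01T00:00:00Z \
    --add-maintenance-exclusion-end 2022-09-30T23:59:59Z \
    --add-maintenance-exclusion-scope no_minor_upgrades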
With GKE release channels, you have the power to decide not only when, but how, and what to upgrade in your clusters and nodes. You can learn more about release channels here, and about maintenance windows here. And for even more, tune into this episode of the Google Cloud Platform Podcast, where my colleague Abdelfettah Sghiouar and I discuss the past and future of GKE release channels with the hosts Kaslin Fields and Mark Mirchandani.

Related Article

How a robotics startup switched clouds and reduced its Kubernetes ops costs with GKE Autopilot

Compared with using AWS EKS, Brain Corp's use of GKE Autopilot reduced the operational overhead involved with running 100,000 robots in p…

Read Article
Source: Google Cloud Platform

Startup Highnote builds end-to-end embedded finance platform on Google Cloud

The ability to quickly introduce and evolve payment options for products or services is essential for businesses, as nearly 50% of consumers who can't use a preferred payment method abandon their purchase. At the same time, gift cards, branded credit cards, and rewards programs are critical tools that companies rely on to build more loyal and lasting customer relationships. With Highnote, companies have an all-in-one embedded platform to quickly create payment cards and wallets, offer innovative rewards programs and credit, and provide sustainable wage access. It is the first platform that allows enterprises to make card issuance an embedded capability of their product without creating an entirely new (and costly) organization.

Creating an exciting fintech future with Google Cloud

When thinking about building the industry's first end-to-end embedded finance platform, we quickly realized Highnote would only be successful if it enabled companies to truly innovate and quickly roll out new programs. To do so, the platform would have to be built on scalable infrastructure capable of securely delivering services with speed and reliability while offering easy access to actionable big data analytics.

Working closely with the team at the Google for Startups Cloud Program, we successfully implemented Google Cloud as a versatile, future-proof foundation of our platform—and built Highnote from the ground up in just one year. Highnote's GraphQL-based API platform reinvents the card issuance process. Using the developer-friendly Highnote platform, product and engineering teams at digital enterprises of all sizes can easily and efficiently embed virtual and physical payment cards (commercial and consumer prepaid, debit, credit, and charge), ledger, and wallet capabilities into their existing products. This creates compelling value while growing revenue and building a unique and differentiated brand.

We leverage Cloud Spanner, BigQuery, and Google Kubernetes Engine (GKE) to create a unified and highly secure PCI DSS-compliant platform with GraphQL APIs that provide rapid and flexible money transfers. This gives us a reliable platform to deliver and test customer experiences, respond to outcomes, and make better business decisions. Powered by Google Cloud, our data models and application domains are architected to support configurations and customizations that unlock a diverse set of new use cases across industries, including retail, travel, logistics, healthcare, and sustainable wage access programs.

We are especially proud to highlight our enablement of sustainable wage access, as this program helps the 50% of Americans living paycheck to paycheck. Embedding this program within payroll systems provides a viable alternative to payday lenders, who often charge exorbitant fees and interest rates. In real-world terms, this means Highnote helps people access earned wages before payday at no cost. Another customer we recently went live with is Tillful, whose Tillful card helps small businesses build their business credit. This program will help new and emerging businesses, as well as underrepresented owners of small businesses, by making the credit ecosystem accessible. Highnote's platform is designed to support multiple use cases across many industries. For example, we also help trucking and logistics companies develop fleet and fuel cards, and spend management companies looking to uplevel their offerings.
Delivering high-performance transactions with Cloud Spanner

Building one of the world's most modern card platforms would not have been possible without Cloud Spanner. We needed a solution that would keep our massive petabyte databases from buckling and more securely deliver data anywhere in the U.S. Cloud Spanner does all this and more, as it routinely connects purchases from millions of customers to tens of thousands of vendors. We also wanted to reduce overhead by 80% by eliminating manual sharding, partitioning, and optimization of data. These processes are automatic with Cloud Spanner, so we can operate at maximum efficiency.

We specifically selected Cloud Spanner as our distributed SQL database management and storage solution because of its outstanding availability, zero planned maintenance downtime, security certifications, and the highest consistency guarantees of any scale-out database. We continue to scale optimally without any downtime or compromises to the integrity or security of our data. This is key for us because we can address unexpected spikes, long-term growth, and new services without costly rearchitecting. Highnote is designed to perform over billions of transactions on Cloud Spanner, and the average latency of less than 250 ms is a testament to the robustness of Google Cloud services.

Enabling actionable customer insights at scale

BigQuery is another key Google Cloud solution that we rely on to deliver deep insights and visibility for our customers on a highly secure and scalable platform. When building Highnote, we knew we needed a cost-effective solution that excelled at data analytics. This is particularly critical for accurately measuring the performance—whether profitability or efficacy—of any program or card. Using BigQuery, we successfully run analytics at scale with as much as a 34% lower three-year TCO than cloud data warehouse alternatives. Over the past year, BigQuery has enabled our customers to unlock data-rich capabilities with a ledger that tracks money in real time and serves up complete debit and credit entries for every event across their accounts. Companies also access real-time balances for revenue, fees, customer accounts, and available funds management without complicated spreadsheets.

To quickly and efficiently roll out Highnote to our customers, we needed a simple way to automatically deploy, scale, and manage Kubernetes. When selecting a Kubernetes management tool, our top priorities were rapidly spinning up and securely scaling across multiple sites. As part of Google Cloud's expansive ecosystem, Google Kubernetes Engine (GKE) was the top choice due to its seamless and automatic Kubernetes scaling and management. We quickly got off the ground with single-click clusters and scaled up by using the high-availability control plane—including multi-zonal and regional clusters—to easily accommodate multiple active-active regions (which other solutions cannot do). As an embedded finance platform, stringent security protocols were obviously a key consideration for us. GKE is secure by default, running routine vulnerability scans of container images and data encryption. Further security assistance was provided by Google Cloud partners 66degrees and DoiT International, who helped us rapidly validate VPC PCI compliance and ensure the uninterrupted performance of thousands of transactions per second.
Winning in fintech with Google for Startups

Building the industry's first end-to-end embedded finance platform would have been extremely challenging without the extensive Google Cloud support. By working closely with our Startups team and Google partners, we had access to Google Cloud services to more easily validate VPC PCI compliance and address most issues before we exited stealth. Their responsiveness is incredible and stands out compared to support services we've seen from other technology providers.

Our participation in the Google for Startups Cloud Program has been instrumental to our success. With Google Cloud, we are making embedded payments accessible to our customers without a big-budget price tag. By doing so, we help unleash the creativity of emerging enterprises by enabling them to innovate with payment services and rewards programs to reach new markets and customers. If companies can dream it, we can enable them to realize it on Highnote. Our platform really is that flexible. We're excited about where we can go and grow with Google Cloud.

If you want to learn more about how Google Cloud can help your startup, visit our page here to get more information about our program, and sign up for our communications to get a look at our community activities, digital events, special offers, and more.

Related Article

Why managed container services help startups and tech companies build smarter

Why managed container services such as GKE are crucial for startups and tech companies.

Read Article
Source: Google Cloud Platform

Discover how you can innovate anywhere with Azure Arc

Welcome to Azure Hybrid, Multicloud, and Edge Day—please join us for the digital event. Today, we’re sharing how Azure Arc extends Azure platform capabilities to datacenters, edge, and multicloud environments through an impactful, 90-minute lineup of keynotes, breakouts, and technical sessions available live and on-demand. As part of today’s event, we’re announcing the general availability of Azure Machine Learning for hybrid and multicloud deployments with Azure Arc. Now you can build, train, and deploy your machine learning models right where the data lives, such as your new or existing hardware and IoT devices.

When I talk with customers, one of the things I hear most frequently is how new cloud-based applications drive business forward. And as these new applications are built, they need to take full advantage of the agility, efficiency, and speed of cloud innovation. However, not every application, or the infrastructure it runs on, can physically reside in the cloud. That's why 93 percent of enterprises are committed to hybrid deployments for their on-premises, multicloud, and edge workloads.1

With Azure, we meet you where you are, so you can innovate anywhere. The Azure cloud platform helps you bring new solutions to life—to solve today’s challenges and create the future. Azure Arc is a bridge that extends the Azure platform so you can build applications and services with the flexibility to run across datacenters, edge, and multicloud environments.

Azure Arc provides a consistent development, operations, and security model for both new and existing applications. Our customers are using it to revolutionize their businesses, whether they’re building on new and existing hardware, virtualization and Kubernetes platforms, IoT devices, or integrated systems.

I’m constantly amazed by the ways people are using Azure and Azure Arc to create innovative solutions, and at the same time, overcome longstanding security and governance challenges.

John Deere brings modern cloud benefits on-premises and at the edge with hybrid data services

The iconic green and yellow John Deere tractors are a familiar sight in fields around the world. With a well-stocked technology portfolio that spans cloud platforms, on-premises datacenters, and edge devices at factories, John Deere’s modernization strategy makes the most of its assets while cultivating a path for the future.

Together with Azure Arc–enabled SQL Managed Instance, John Deere connects the dots across all these environments and puts the power of the cloud to work in the company's existing infrastructure. The result? A unified view of operations across platforms that pivots on Azure Arc, helping John Deere optimize manufacturing operations. The hybrid solution is also helping John Deere drive down operational costs and accelerate innovation.

Another opportunity the cloud provides is to transform data insights into new products and services. For years, Azure has provided machine learning and IoT solutions to unlock signals and data from the physical world. Azure Arc brings data services from Azure, like SQL, PostgreSQL, and Machine Learning, so you can harness data insights from edge to cloud with an end-to-end solution spanning local data collection, compute, storage, and real-time analysis.

We recently announced Azure Arc–enabled SQL Managed Instance Business Critical is now generally available. The Business Critical tier of Azure Arc–enabled SQL Managed Instance is built for mission-critical workloads requiring the most demanding performance, high availability, and security. Azure Arc–enabled SQL Managed Instance comes from the same evergreen SQL in Azure that is always up to date with no end of support.

Wolverine Worldwide analyzes sensitive data on-premises to optimize the supply chain

Wolverine Worldwide owns beloved activewear and lifestyle brands such as Chaco, Saucony, Merrell, Keds, Sperry, and more. When the pandemic created a new set of unanticipated supply chain challenges across the global economy, Wolverine turned to cloud innovation to help its 13 brands.

"Previously, data was a little tough to get at. It was either a gut feel, or the opportunity bypassed us while we were doing our analysis."—Jason Miller, Vice President for Enterprise Data, Planning & Analytics, Wolverine Worldwide

With Azure Arc, Wolverine can use Azure Machine Learning and data services to holistically analyze data from the supply chain, manufacturing, and its ecommerce business while keeping sensitive data on-premises.

Whether you want to secure and govern servers or create a self-service experience on VMware from Azure, Azure Arc is validated on a variety of infrastructures so you can always get your applications and data to run where you need them.

Businesses can start with support for single-node clusters in Azure Stack HCI, which is generally available and offers the flexibility to deploy Azure Stack HCI in smaller spaces and with lower processing needs. Additionally, we're announcing today that Windows Admin Center can now manage your Azure Arc–enabled servers and Azure Stack HCI clusters from the Azure portal. Using this functionality, you can securely manage your servers and clusters from Azure—without needing a VPN, public IP address, or other inbound connectivity to your machine.

Greggs modernizes security and operations

A bakery and coffee shop in the UK with over 2,200 retail locations, Greggs is another customer using Azure Arc–enabled security and management tools. The company needed visibility across its digital estate from on-premises Windows Servers to Kubernetes running in AKS.

“By deploying Azure Arc, we can use Microsoft Defender for Cloud for our on-premises server estate, something we couldn’t do before. We’ve gained significant security benefits—like secure risk score, compliance scoring, and assessments. The central aggregation of logs shows us if a security event actually occurs across multiple devices so that we can pinpoint potential causes.”—Scott Clennell, Head of Infrastructure and Networks, Greggs

For customers like Greggs, we continue to innovate on Azure Arc–enabled servers. We recently announced Azure Arc–enabled servers support for private endpoints, a new servers monitoring workbook created in the public Azure Monitor GitHub repository, and a preview of SSH access to Azure Arc–enabled servers.

With Azure Arc, you have access today to a comprehensive set of Azure services, such as Microsoft Defender for Cloud, Microsoft Sentinel, Azure Policy, Azure Monitor, and more to secure and manage resources and data anywhere.

Millennium bcp streamlines multicloud app deployments with Azure Arc

“We needed…the ability to move a workload running in an Azure Kubernetes Service (AKS) cluster to a Google Cloud Platform or Amazon Web Services cluster, or vice versa, in case of emergency. We needed something that could help us turn those into an enterprise-level service. That’s where Azure Arc came in.”—Nuno Guedes, Cloud Compute Lead, Millennium bcp

Millennium bcp is the largest private bank in Portugal and uses Azure Arc for a standard approach to deploy containers to its multicloud environment. Azure Arc helps companies like Millennium build and modernize cloud-native apps on any Kubernetes using familiar developer tools, like Visual Studio Code and GitHub, as well as implement consistent GitOps and policy-driven deployments across environments.

To support our customers’ app development, we recently announced GitOps with Flux v2 in AKS and Azure Arc–enabled Kubernetes, general availability of Arc–enabled Open Service Mesh, general availability of Azure Key Vault Secrets Provider extension, and the landing zone accelerator for Azure Arc–enabled Kubernetes.

Finally, a huge thank you to our partners and customers in the Azure Arc community. We hope you will enjoy the event and learn how Azure Arc can benefit your organization. We look forward to connecting and listening to your feedback.

Azure Hybrid, Multicloud, and Edge Day highlights

You can access everything on-demand, and check out the additional demos and customer stories in the event portal. Enjoy the event experience. I can’t wait to see how you innovate anywhere.

1Hybrid & Multicloud Perceptions Survey, Microsoft.
Source: Azure

Simplify and centralize network security management with Azure Firewall Manager

We are excited to share that Azure Web Application Firewall (WAF) policy and Azure DDoS Protection plan management in Microsoft Azure Firewall Manager is now generally available.

With an increasing need to secure cloud deployments through a Zero Trust approach, the ability to manage network security policies and resources in one central place is a key security measure.

You can now centrally manage Azure Web Application Firewall (WAF) policies to provide Layer 7 application security to your application delivery platforms (Azure Front Door and Azure Application Gateway) in your networks and across subscriptions. You can also configure DDoS Protection Standard to protect your virtual networks from Layer 3 and Layer 4 attacks.

Azure Firewall Manager is a central network security policy and route management service that allows administrators and organizations to protect their networks and cloud platforms at scale, all in one central place.

Azure Web Application Firewall is a cloud-native web application firewall (WAF) service that provides powerful protection for web apps from common hacking techniques such as SQL injection and security vulnerabilities such as cross-site scripting.

Azure DDoS Protection Standard provides enhanced Distributed Denial-of-Service (DDoS) mitigation features to defend against DDoS attacks. It is automatically tuned to protect all public IP addresses in virtual networks. Protection is simple to enable on any new or existing virtual network and does not require any application or resource changes. 

Utilizing both WAF policies and DDoS protection in your network provides multi-layered protection across all your essential workloads and applications.

WAF policy and DDoS Protection plan management are additions to Azure Firewall management in Azure Firewall Manager.

Centrally protect your application delivery platforms using WAF policies 

In Azure Firewall Manager, you can now manage and protect your Azure Front Door or Application Gateway deployments by associating WAF policies, at scale. This allows you to view all your key deployments in one central place, alongside Azure Firewall deployments and DDoS Protection plans.
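While the association itself happens in the Firewall Manager portal experience, the WAF policies you associate can also be created from the Azure CLI. A minimal sketch with placeholder names follows; note that the front-door commands require the front-door CLI extension:

az network front-door waf-policy create \
    --name MyFrontDoorWafPolicy \
    --resource-group my-rg \
    --mode Prevention

az network application-gateway waf-policy create \
    --name MyAppGwWafPolicy \
    --resource-group my-rg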

Upgrade from WAF configuration to WAF policy

In addition, the platform lets administrators upgrade from a WAF config to a WAF policy for Application Gateway by selecting the service and then Upgrade from WAF configuration. This makes for a more seamless migration to WAF policies, which support WAF policy settings, managed rulesets, exclusions, and disabled rule groups.

As a note, everything previously set up through a WAF configuration in Application Gateway can now be done through a WAF policy.

Manage DDoS Protection plans for your virtual networks

You can enable a DDoS Protection Standard plan on your virtual networks listed in Azure Firewall Manager, across subscriptions and regions. This allows you to see which virtual networks have Azure Firewall and/or DDoS protection in a single place.

View and create WAF policies and DDoS Protection Plans in Azure Firewall Manager

You can view and create WAF policies and DDoS Protection Plans from the Azure Firewall Manager experience, alongside Azure Firewall policies.

In addition, you can import existing WAF policies to create a new WAF policy, so you do not need to start from scratch if you want to maintain similar settings.

Monitor your overall network security posture

Azure Firewall Manager provides monitoring of your overall network security posture. Here, you can easily see which virtual networks and virtual hubs are protected by Azure Firewall, a third-party security provider, or DDoS Protection Standard. This overview can help you identify and prioritize any security gaps that are in your Azure environment, across subscriptions or for the whole tenant.

Coming soon, you’ll also be able to view your Application Gateway and Azure Front Door monitors, for a full network security overview.

Learn more

To learn more about these features in Azure Firewall Manager, visit the Manage Web Application Firewall policies tutorial, WAF on Application Gateway documentation, and WAF on Azure Front Door documentation. For DDoS information, visit the Configure Azure DDoS Protection Plan using Azure Firewall Manager tutorial and Azure DDoS Protection documentation.

To learn more about Azure Firewall Manager, please visit the Azure Firewall Manager home page.
Source: Azure

Getting Started with Visual Studio Code and IntelliJ IDEA Docker Plugins

Today’s developers swear by IDEs that best support their workflows. Jumping repeatedly between windows and apps is highly inconvenient, which makes these programs so valuable. By remaining within your IDE, it’s possible to get more done in less time.
Today, we’ll take a look at two leading IDEs — VS Code and IntelliJ IDEA — and how they can mesh with your favorite Docker tools. We’ll borrow a sample ASP.NET application and interact with it throughout this guide. We’ll show you why Docker integrations are so useful during this process.
The Case for Integration
When working with Docker images, you’ll often need to perform repetitive tasks like building, tagging, and pushing each image — after creating unique Dockerfiles and Compose files.
In a typical workflow, you’d create a Dockerfile and then build your image using the docker build CLI command. Then, you’d tag the image using the docker tag command and upload it to your remote registry with docker push. This process is required each time you update your application. Additionally, you’ll frequently need to inspect your running containers, volumes, and networks.
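In practice, that cycle looks something like the following; the image and repository names here are placeholders:

docker build -t myapp:1.0 .
docker tag myapp:1.0 YOUR_REPOSITORY_NAME/myapp:1.0
docker push YOUR_REPOSITORY_NAME/myapp:1.0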
Before plugins like Docker, Docker Explorer, and “Remote – Containers” debuted, you’d have to switch between your IDE and Docker Desktop to perform tasks. Now, Docker Desktop IDE integration unlocks Desktop’s functionality without compromising productivity. The user experience is seamless.
Integrating your favorite IDE with Docker Desktop enables you to be more productive without leaving either app. These extensions let you create Dockerfiles and Compose files based on your entered source code — letting you view and manage containers directly from within your IDE.
Now, let’s explore how to install and leverage various Docker plugins within each of these IDEs.
Prerequisites
You’ll need to download and install the following before getting started:

The latest version of Docker Desktop
Visual Studio Code
IntelliJ IDEA
Our sample ASP.NET Core app

 
Before beginning either part of the tutorial, you’ll first need to download and install Docker Desktop. This grabs all Docker dependencies and places them onto your machine — for both the CLI and GUI. After installing Desktop, launch it before proceeding.
Next, pull the Docker image from the ASP.NET Core app using the Docker CLI command:
docker pull mcr.microsoft.com/dotnet/samples:aspnetapp
 
That said, this example is applicable to any image: you can find a simple image on Docker Hub and grab it using the appropriate docker pull command.
Integrations with VS Code
Depending on which version you’re running (and whether you’ve installed it before), VS Code’s welcome screen may automatically prompt you to install recommended Docker plugins. This is very convenient for quickly getting up and running:
 
VS Code displays an overlay in the bottom right, asking to install Docker-related extensions.
If you want to install everything at once, simply click the Install button. However, it’s likely that you’ll want to know what VS Code is adding to your workspace. Click the Show Recommendations button. This summons a list of Docker and Docker-adjacent extensions — while displaying Microsoft’s native “Remote – Containers” extension front and center:
You can click any of these items in the sidebar and install them using the green Install button. Selecting the dropdown arrow attached to this button lets you install a release version or pre-release version depending on your preferences. Additionally, each extension may also install its own dependencies that let it work properly. You can click the Dependencies tab, if applicable, to view these sidekick programs.
However, you may have to open the Extensions pane manually if this prompt doesn’t appear. From the column of icons in the sidebar, click the Extensions icon that resembles a window pane, and search for “Docker” in the search bar.
You’ll also see a wide variety of other Docker-related extensions, sorted by popularity and relevance. These are developed by community members and verified publishers.
Once your installation finishes, a “Getting Started with Docker” screen will greet you in the main window, letting you open a workspace folder, run a container, and more:
The Docker whale icon will also appear in the left-hand pane. Clicking it shows a view similar to that shown below:
Each section expands to reveal more information. You can then check your running containers and images, stop or start them, connect to registries, plus inspect networks, volumes, and contexts.
Remember that ASP.NET image we pulled earlier? You can now expand the Images group and spin up a container using the ASP.NET Core image. Locate mcr.microsoft.com/dotnet/samples in the list, right-click the aspnetapp tag, and choose “Run”:
You’ll then see your running container under the Containers group:
This method lets you easily preview container files right within VS Code.
Expand the Files group under the running container and select any file from the list. Our example below previews the site.css file from the app/wwwroot/css directory:
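As a point of comparison, the same file can be read from a host terminal with docker exec; the container ID placeholder below stands in for whatever docker ps reports:

docker exec <container-id> cat /app/wwwroot/css/site.css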
Finally, you may need to tag your local image before pushing it to the remote registry. You can do this by opening the Registries group and clicking “Connect Registry.”
VS Code will display a wizard that lets you choose your registry service — like Azure, Docker Hub, the Docker Registry, or GitLab. Let’s use Docker Hub by selecting it from the options list:
Now, VS Code will prompt you to enter credentials. Enter these to sign in. Once you’ve successfully logged in, your registry will appear within the group:
After connecting to Hub, you can tag local images using your remote repository name. For example:
YOUR_REPOSITORY_NAME/samples:aspnetapp
 
To do this, return to the Images group and right-click on the aspnetapp Docker image. Then, select the “Tag” option from the context menu. VS Code will display the wizard, where you can enter your desired tag.
Finally, right-click again on aspnetapp and select “Push” from the context menu:
This method is much faster than manually entering your code into the terminal.
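For reference, the terminal equivalent of this tag-and-push flow looks like this (using the repository naming from earlier):

docker tag mcr.microsoft.com/dotnet/samples:aspnetapp YOUR_REPOSITORY_NAME/samples:aspnetapp
docker push YOUR_REPOSITORY_NAME/samples:aspnetapp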
This showcases just some of what you can achieve with the Docker extension for VS Code. For example, you can automatically generate Dockerfiles from within VS Code.
To create these, open the Command Palette (View > Command Palette…), and type “Docker” to view all available commands:
Next, click “Add Docker Files to Workspace…” You can now create your Dockerfiles from within VS Code.
Additionally, note the variety of Docker functions available from the Command Palette. The Docker extension integrates seamlessly with your development processes.
IntelliJ IDEA
In the IntelliJ IDEA Ultimate Edition, the Docker plugin is enabled by default. However, if you’re using the Community Edition, you’ll need to install the plugin manually.
You can either do this when the IDE starts (as shown below), or from the Plugins section of the Preferences window.
Once you’ve installed the Docker plugin, you’ll need to connect it to Docker Desktop. Follow these steps:

Navigate to IntelliJ IDEA > Preferences.
Expand the Build, Execution, Deployment group. Click Docker, and then click the small  “+” icon to the right.
Choose the correct Docker daemon for your platform (for example, Docker for Mac).

 
The installation may take a few minutes. Once it’s complete, you’ll see the “Connection successful” message toward the middle-bottom of the Preferences pane:
Next, click “Apply” and then expand the Docker group from the left sidebar.
Select “Docker Registry” and add your preferred registry from there. Like our VS Code example, this demo also uses Docker Hub.
IntelliJ will prompt you to enter your credentials. You should again see the “Connection successful” message under the Test connection pane if you’re successful:
Now, click OK. Your Docker daemon and the Docker Registry connections will appear in the bottom portion of your IDE, in the Services pane:
This should closely resemble what happens within VS Code. Now, you can spin up another container!
To do this, click to expand the Images group. Locate your container image and select it to open the menu. Click the “Create Container” button from there.
This launches the “Create Docker Configuration” window, where you can configure port binding, entrypoints, command variables, and more.
You can also interact with these options via the “Modify options” drop-down list — written in blue near the upper-right corner of the window:
After configuring your options, click “Run” to start the container. Now, the running container (test-container) will appear in the Services pane:
You can also inspect the running container just like you would in VS Code.
First, navigate back to the Dashboard tab. You’ll see additional buttons that let you quickly “Restart” or “Stop” the container:
Additionally, you can access the container command prompt by clicking “Terminal.” You’ll then use this CLI to inspect your container files.
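Once the terminal opens inside the container, ordinary shell commands work as usual; assuming the same ASP.NET sample image as before, you might run:

ls /app
cat /app/wwwroot/css/site.css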
Finally, you can now easily tag and push the image. Here’s how:

Expand the Images group, and click on your image. You’ll see the Tags list in the right-hand panel.
Click on “Add…” to create a new tag. This prompts the Tag image window to appear. Use this window to provide your repository name.
Click “Tag” to view your new tag in the list.

Click on your tag. Then use the “Push Image” button to send your image to the remote registry.
Wrapping Up
By following this tutorial, you’ve learned how easy it is to perform common, crucial Docker tasks within your IDE. The process of managing containers and images is much smoother. Accordingly, you no longer need to juggle multiple windows or programs while getting things done. Docker Desktop’s functionality is baked seamlessly into VS Code and IntelliJ IDEA.
To enjoy streamlined workflows yourself, remember to download Docker Desktop and add Docker plugins and extensions to your favorite IDE.
Want to harness these Docker integrations? Read VS Code’s docs to learn how to use a Docker container as a fully-featured dev environment, or customize the official VS Code Docker extension. You can learn more about how Docker and IntelliJ team up here.
Source: https://blog.docker.com/feed/