Take Bookings, Set Up Subscriptions, and Automate Your Store With New Premium Plugins

Plugins are the building blocks and the rocket fuel for your WordPress.com website. They can help make your site faster and easier to manage but also bring essential elements to your fingertips. From email opt-ins and contact forms, to SEO, site speed optimization, calendars, and booking options — the list is nearly endless.

If you can imagine it, there’s a good chance a plugin already exists to help you accomplish it on your WordPress.com website.

And because of the vast community of WordPress developers, new plugins are always being added to our marketplace. To help you choose the right plugins for your business or passion project, we’ve highlighted below three of our hand-picked, most popular premium plugins, along with several others recently added to our marketplace.

WooCommerce Bookings

This invaluable plugin allows customers to book appointments, make reservations, or rent equipment. Both you and your customers can save valuable time since there’s no need for phone calls, emails, or other forms of communication to handle bookings. Prospects simply fill out the information themselves.

The benefits of WooCommerce Bookings are hard to overstate. For example, you can: 

Define set options, like fixed time slots for a class, appointment, or guided tour
Let customers choose the times that work best by giving them the flexibility to book whatever range they need, like checking into a hotel
Set certain time periods as off limits and un-bookable, providing yourself a buffer between bookings

Perhaps best of all, the plugin integrates seamlessly with your Google calendar. Use the calendar view to see how your day or month is shaping up. Update existing bookings or availability, or filter to view specific services or resources. And if you have customers who insist on calling in to make bookings the old-fashioned way, you can add them manually from the calendar while you’re on the phone.

Whether you’re running a small bed and breakfast, a fishing guide service, or anything in between, WooCommerce Bookings can give you back valuable time and ensure your customers enjoy a friction-free booking process, freeing you to focus your energy elsewhere.

View the live demo here.
Purchase WooCommerce Bookings

WooCommerce Subscriptions

With WooCommerce Subscriptions, customers can subscribe to your products or services and then pay for them at the frequency of their choosing — weekly, monthly, or even annually. This level of freedom can be a boon to the bottom line, as it easily sets your business up to enjoy the fruits of recurring revenue. 

Better yet, WooCommerce Subscriptions not only allows you to create and manage products with recurring payments, but also lets you introduce a variety of subscriptions for physical or virtual products and services. For example, you can offer weekly service subscriptions, create product-of-the-month clubs, or even set up yearly software billing packages.

Additional features include:

Multiple billing schedules available to suit your store’s needs
Integrates with more than 25 payment gateways for automatic recurring payments
Accessible through any WooCommerce payment gateway, allowing for manual renewal payments along with automatic email invoices and receipts
Prevents lost revenue by supporting automatic rebilling on failed subscription payments

Additionally, this plugin offers built-in renewal notifications and automatic emails, which keep you and your customers aware of subscription payments being processed. Your customers can also manage their own subscriptions using the Subscriber’s View page, which allows subscribers to suspend or cancel a subscription, change the shipping address or payment method for future renewals, and upgrade or downgrade.

WooCommerce Subscriptions really does put your customers first, giving them the control they want and will appreciate, while allowing you to automate a process that saves time and strengthens your relationship with customers.

Learn more and purchase WooCommerce Subscriptions 

AutomateWoo

High on every business owner’s list of goals is the ability to grow the company and earn more revenue. Well, AutomateWoo makes that task much simpler. This powerful, feature-rich plugin delivers a plethora of automated workflows to help you grow your business.  

With AutomateWoo, you can create workflows using various triggers, rules, and actions, and then schedule them to run automatically.

For example, you can set up abandoned cart emails, which have been shown to increase the chance of recovering the sale by 63%.

One of the key features small business owners are sure to enjoy is the ability to design and send emails using a pre-installed template created for WooCommerce emails in the WordPress editor. 

This easy-to-appreciate feature makes it a breeze to send targeted, multi-step campaigns that include incentives for customers. AutomateWoo gives you complete control over campaigns. For example, you can schedule different emails to be sent at intervals or after specific customer interactions; you can also offer incentives using the personalized coupon system.

Also, you can track all emails via a detailed log of every email sent and conversion recorded. Furthermore, with AutomateWoo’s intelligent tracking, you can capture guest emails during checkout.  

This premium plugin comes packed with a host of other features as well, including, but not limited to:

Follow-up emails: Automatically email customers who buy specific products and ask for a review or suggest other products they might like
SMS notifications: Send text messages to customers or admins for any of AutomateWoo’s wide range of triggers
Wishlist marketing: Send timed wishlist reminder emails and notify when a wished-for product goes on sale; integrates with WooCommerce Wishlists or YITH Wishlists
Personalized coupons: Generate dynamic customized coupons for customers to raise purchase rates

AutomateWoo will be an indispensable asset for any business looking to strengthen the connection between its products or services and the overall experience customers have with its brand.

Learn more and purchase AutomateWoo

Additional Business-Boosting Plugins

In addition to WooCommerce Bookings, WooCommerce Subscriptions, and AutomateWoo, our marketplace has also launched a number of additional premium plugins, including:

WooCommerce Points and Rewards: Allows you to reward your customers for purchases and other actions with points that can be redeemed for discounts
WooCommerce One Page Checkout: Gives you the ability to create special pages where customers can select products, check out, and pay, all in one place
WooCommerce Deposits: Lets customers place a deposit or use a payment plan for products
Min/Max Quantities: Makes it possible to define minimum/maximum thresholds and multiple/group amounts per product (including variations) to restrict the quantities of items that can be purchased
Product Vendors: Enables multiple vendors to sell via your site while you take a commission on their sales
USPS Shipping Method: Provides shipping rates from the USPS API, with the ability to accurately cover both domestic and international parcels

Get the Most Out of Your Website 

Keep an eye on the plugin marketplace, as we’re continuing to offer premium plugins that help you best serve your site visitors and customers. At WordPress.com, we’re committed to helping you achieve your goals. 

To get the most out of your WordPress.com website, upgrade to WordPress Pro, which puts the power of these plugins at your fingertips. Currently, plugins can be purchased only on WordPress Pro plans or by legacy Business and eCommerce customers.
Source: RedHat Stack

Even more pi in the sky: Calculating 100 trillion digits of pi on Google Cloud

Records are made to be broken. In 2019, we calculated 31.4 trillion digits of π — a world record at the time. Then, in 2021, scientists at the University of Applied Sciences of the Grisons calculated another 31.4 trillion digits of the constant, bringing the total up to 62.8 trillion decimal places. Today we’re announcing yet another record: 100 trillion digits of π.

This is the second time we’ve used Google Cloud to calculate a record number[1] of digits for the mathematical constant, tripling the number of digits in just three years. This achievement is a testament to how much faster Google Cloud infrastructure gets, year in, year out. The underlying technology that made this possible is Compute Engine, Google Cloud’s secure and customizable compute service, and several of its recent additions and improvements: the Compute Engine N2 machine family, 100 Gbps egress bandwidth, Google Virtual NIC, and balanced Persistent Disks. It’s a long list, but we’ll explain each feature one by one.

Before we dive into the tech, here’s an overview of the job we ran to calculate our 100 trillion digits of π.

Program: y-cruncher v0.7.8, by Alexander J. Yee
Algorithm: Chudnovsky algorithm
Compute node: n2-highmem-128 with 128 vCPUs and 864 GB RAM
Start time: Thu Oct 14 04:45:44 2021 UTC
End time: Mon Mar 21 04:16:52 2022 UTC
Total elapsed time: 157 days, 23 hours, 31 minutes and 7.651 seconds
Total storage size: 663 TB available, 515 TB used
Total I/O: 43.5 PB read, 38.5 PB written, 82 PB total

Figure: History of π computation from ancient times through today. You can see that we’re adding digits of π exponentially, thanks to computers getting exponentially faster.

Architecture overview

Calculating π is compute-, storage-, and network-intensive. Here’s how we configured our Compute Engine environment for the challenge.

For storage, we estimated the size of the temporary storage required for the calculation to be around 554 TB. The maximum persistent disk capacity that you can attach to a single virtual machine is 257 TB, which is often enough for traditional single-node applications, but not in this case. We designed a cluster of one computational node and 32 storage nodes, for a total of 64 iSCSI block storage targets.

The main compute node is an n2-highmem-128 machine running Debian Linux 11, with 128 vCPUs, 864 GB of memory, and 100 Gbps egress bandwidth support. The higher bandwidth is a critical requirement because we adopted a network-based shared storage architecture.

Each storage server is an n2-highcpu-16 machine configured with two 10,359 GB zonal balanced persistent disks. The N2 machine series provides balanced price/performance, and when configured with 16 vCPUs it provides a network bandwidth of 32 Gbps, with an option to use the latest Intel Ice Lake CPU platform, which makes it a good choice for high-performance storage servers.

Automating the solution

We used Terraform to set up and manage the cluster. We also wrote a couple of shell scripts to automate critical tasks such as deleting old snapshots and restarting from snapshots (though we never needed the latter). The Terraform scripts created OS guest policies to help ensure that the required software packages were automatically installed, and part of the guest OS setup process was handled by startup scripts. In this way, we were able to recreate the entire cluster with just a few commands.

We knew the calculation would run for several months, and even a small performance difference could change the runtime by days or possibly weeks.
There are also a large number of possible parameter combinations across the operating system, the infrastructure, and the application itself. Terraform helped us test dozens of different infrastructure options in a short time. We also developed a small program that runs y-cruncher with different parameters and automated a significant portion of the measurement. Overall, the final design for this calculation was about twice as fast as our first design. In other words, the calculation could’ve taken 300 days instead of 157 days!

The scripts we used are available on GitHub if you want to look at the actual code that we used to calculate the 100 trillion digits.

Choosing the right machine type for the job

Compute Engine offers machine types that support compute- and I/O-intensive workloads. The amount of available memory and network bandwidth were the two most important factors, so we selected n2-highmem-128 (Intel Xeon, 128 vCPUs and 864 GB RAM). It satisfied our requirements: high-performance CPU, large memory, and 100 Gbps egress bandwidth. This VM shape is part of the most popular general purpose VM family in Google Cloud.

100 Gbps networking

The n2-highmem-128 machine type’s support for up to 100 Gbps of egress throughput was also critical. Back in 2019 when we did our 31.4-trillion digit calculation, egress throughput was only 16 Gbps, meaning that bandwidth has increased by 600% in just three years. This increase was a big factor that made this 100-trillion experiment possible, allowing us to move 82.0 PB of data for the calculation, up from 19.1 PB in 2019.

We also changed the network driver from virtio to the new Google Virtual NIC (gVNIC). gVNIC is a new device driver that tightly integrates with Google’s Andromeda virtual network stack to help achieve higher throughput and lower latency. It is also a requirement for 100 Gbps egress bandwidth.

Storage design

Our choice of storage was crucial to the success of this cluster, in terms of capacity, performance, reliability, cost, and more. Because the dataset doesn’t fit into main memory, the speed of the storage system was the bottleneck of the calculation. We needed a robust, durable storage system that could handle petabytes of data without any loss or corruption, while fully utilizing the 100 Gbps bandwidth.

Persistent Disk (PD) is a durable, high-performance storage option for Compute Engine virtual machines. For this job we decided to use balanced PD, a new type of persistent disk that offers up to 1,200 MB/s read and write throughput and 15-80k IOPS, for about 60% of the cost of SSD PDs. This storage profile is a sweet spot for y-cruncher, which needs high throughput and medium IOPS.

Using Terraform, we tested different combinations of storage node counts, iSCSI targets per node, machine types, and disk sizes. From those tests, we determined that 32 nodes and 64 disks would likely achieve the best performance for this particular workload.

We scheduled backups automatically every two days using a shell script that checks the time since the last snapshots, runs the fstrim command to discard all unused blocks, and runs the gcloud compute disks snapshot command to create PD snapshots. The gcloud command returns after a few seconds, and y-cruncher resumes its calculations while the Compute Engine infrastructure copies the data blocks asynchronously in the background, minimizing downtime for the backups.
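The post describes that backup script but doesn’t include it; a minimal sketch of the flow might look like the following, with hypothetical disk names, zone, and mount point rather than the values actually used (the real scripts are in the GitHub repository mentioned above):

  #!/bin/bash
  # Hypothetical sketch of the snapshot-backup flow described above.
  ZONE="us-central1-a"                     # placeholder zone
  MOUNT_POINT="/mnt/y-cruncher-storage"    # placeholder mount point for the iSCSI-backed filesystem
  DISKS="storage-disk-01 storage-disk-02"  # placeholder disk names, one per balanced PD

  # Discard unused blocks so the snapshots only have to copy live data.
  sudo fstrim --verbose "${MOUNT_POINT}"

  # Snapshot each disk; the command returns quickly while Compute Engine
  # copies the data blocks asynchronously in the background.
  for disk in ${DISKS}; do
    gcloud compute disks snapshot "${disk}" \
      --zone="${ZONE}" \
      --snapshot-names="${disk}-$(date +%Y%m%d)"
  done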
To store the final results, we attached two 50 TB disks directly to the compute node. Those disks weren’t used until the very last moment, so we didn’t allocate the full capacity until y-cruncher reached the final steps of the calculation, saving four months’ worth of storage costs for 100 TB.

Results

All this fine-tuning and benchmarking got us to the one-hundred-trillionth digit of π — 0. We verified the final numbers with another algorithm (the Bailey–Borwein–Plouffe formula) when the calculation was completed. This verification was the scariest moment of the entire process, because there is no sure way of knowing whether the calculation was successful until it finishes, five months after it began. Happily, the Bailey–Borwein–Plouffe formula confirmed that our results were valid. Woo-hoo! Here are the last 100 digits of the result:

4658718895 1242883556 4671544483 9873493812 1206904813
2656719174 5255431487 2142102057 7077336434 3095295560

You can also access the entire sequence of numbers on our demo site.

So what?

You may not need to calculate trillions of decimals of π, but this massive calculation demonstrates how Google Cloud’s flexible infrastructure lets teams around the world push the boundaries of scientific experimentation. It’s also an example of the reliability of our products: the program ran for more than five months without node failures and handled every bit in the 82 PB of disk I/O correctly.

The improvements to our infrastructure and products over the last three years made this calculation possible. Running this calculation was great fun, and we hope that this blog post has given you some ideas about how to use Google Cloud’s scalable compute, networking, and storage infrastructure for your own high-performance computing workloads. To get started, we’ve created a codelab where you can create and calculate pi on a Compute Engine virtual machine with step-by-step instructions. And for more on the history of calculating pi, check out this post on The Keyword. Here’s to breaking the next record!

[1] We are actively working with Guinness World Records to secure their official validation of this feat as a “World Record”, but we couldn’t wait to share it with the world. This record has been reviewed and validated by Alexander J. Yee, the author of y-cruncher.
Source: Google Cloud Platform

Google Cloud supports higher education with Cloud Digital Leader program

College and university faculty can now easily teach cloud literacy and digital transformation with the Cloud Digital Leader track, part of the Google Cloud career readiness program. The new track is available for eligible faculty who are preparing their students for a cloud-first workforce. As part of the track, students will build their cloud literacy and learn the value of Google Cloud in driving digital transformation, while also preparing for the Cloud Digital Leader certification exam. Apply today!

Cloud Digital Leader career readiness track

The Cloud Digital Leader career readiness track is designed to equip eligible faculty with the resources needed to prepare their students for the Cloud Digital Leader certification. This Google Cloud certification requires no previous cloud computing knowledge or hands-on experience. The training path enables students to build cloud literacy and learn how to evaluate the capabilities of Google Cloud in preparation for future job roles.

The curriculum

Faculty members can access this curriculum as part of the Google Cloud Career Readiness program. Faculty from eligible institutions can apply to lead students through the no-cost program, which provides access to the four-course on-demand training, hands-on practice to supplement the learning, and additional exam prep resources. Students who complete the entire program are eligible to apply for a certification exam discount. The Cloud Digital Leader track is the third program available for classroom use, joining the Associate Cloud Engineer and Data Analyst tracks.

Cloud resources for your classroom

Ready to get started? Apply today to access the Cloud Digital Leader career readiness track for your classroom. Read the eligibility criteria for faculty. You can preview the course content at no cost.
Source: Google Cloud Platform

Palexy empowers retailers to increase in-store sales with the help of Google Cloud

Many people are again crowding store aisles as they look for their favorite products and eagerly try on clothing, shoes, and jewelry. Although some shoppers purchase multiple items, others leave the store empty handed. As retailers know, there are many possible reasons why some people only window shop. Perhaps a favorite item is too expensive, out of stock, or too hard to find in the store.

The problem for many retailers, though, is that they often lack real insights into why shoppers leave without ever buying anything. That’s why we built Palexy. With the Palexy platform, any retailer can easily use in-store video feeds combined with point of sale (POS) data to gain actionable insights about customer shopping behavior, preferences, and interactions. The real-time insights enable retailers to improve store layouts, stock popular items, set more competitive prices, and train more responsive staff. Today, hundreds of retailers worldwide use Palexy to create exciting in-store experiences that boost customer engagement and increase sales. As we continue to grow, Palexy will introduce new features and services to analyze and perfect every step of a customer’s journey so brick-and-mortar stores can more effectively compete against online shopping.

Building a comprehensive retail analytics platform

We started Palexy with a small and dedicated team based in Southeast Asia. From the beginning, we were determined to positively disrupt the retail market. However, as a new startup with a limited budget, we quickly realized we couldn’t affordably or efficiently scale without a reliable technology partner.

We looked at the options and identified Google Cloud, including the Google for Startups Cloud Program, as the best choice for us. In just a year we created a comprehensive retail analytics platform that delivers solutions for management, operations, merchandising, marketing, and loss prevention. We now have hundreds of customers around the world—and recently made the CBInsights list of top 10 global indoor mapping analytics vendors! We accomplished all this on the highly secure-by-design infrastructure of Google Cloud.

To accurately analyze the in-store customer journey with our computer vision and AI technology, we built our own model and processing pipeline from scratch, and we use a lot of T4 GPUs from Google Cloud for our processing pipeline. These solutions enable Palexy to leverage existing store cameras to intelligently track how many customers enter the store, what they try on, how they interact with staff, and which aisles they visit. We also rely on Google Kubernetes Engine (GKE) to rapidly build, test, deploy, and manage containerized applications. We optimize GKE performance by streaming custom metrics from Pub/Sub to automatically select and scale different node pools. Since we started using GKE, we’ve lowered our application deployment costs by 30%. We’re also seeing Tau VMs reduce video decoding costs by up to 40%.

We use additional Google Cloud solutions to power the Palexy platform. We store and analyze customer data with Cloud SQL for PostgreSQL, build API gateways on Cloud Endpoints, create mobile applications with Firebase, coordinate Cloud Run with Cloud Scheduler, and archive processed videos on Cloud Storage.
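The post doesn’t show Palexy’s actual cluster configuration, but as a rough illustration of the kind of setup described above, an autoscaling T4 GPU node pool for a video-processing pipeline could be created along these lines (the cluster name, pool name, zone, and sizes are assumptions, not Palexy’s real values):

  # Hypothetical sketch: an autoscaling NVIDIA T4 node pool on an existing GKE cluster.
  gcloud container node-pools create video-processing-pool \
    --cluster=retail-analytics \
    --zone=asia-southeast1-c \
    --machine-type=n1-standard-8 \
    --accelerator=type=nvidia-tesla-t4,count=1 \
    --enable-autoscaling --min-nodes=0 --max-nodes=10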
Perfecting the in-store customer journey

The Google for Startups Cloud Program has helped us to rapidly build a comprehensive retail analytics platform that is used by thousands of stores around the world. We continue to tap the deep technical knowledge of the dedicated Google for Startups Success Team, who work closely with us to roll out new features and services. We also use Google Cloud credits to affordably explore additional solutions to manage and analyze the terabytes of videos, images, and data generated by our customers.

Our customers are seeing incredible success with Palexy. For example, a major sporting goods retailer in Southeast Asia increased sales 59% after rearranging store shelves, redesigning window displays, and retraining staff. Point of sale (POS) data combined with video analysis also helped a fashion chain boost customer interaction rates 38% and raise conversion rates 24%.

Worldwide demand for Palexy continues to grow at an impressive pace. As we expand our team, we look forward to launching Palexy in new markets and empowering retailers to perfect in-store shopping experiences. If you want to learn more about how Google Cloud can help your startup, visit our page here to get more information about our program, and sign up for our communications to get a look at our community activities, digital events, special offers, and more.
Source: Google Cloud Platform

How Google Cloud can help secure your software supply chain

With the recent announcement of the Assured Open Source Software service, Google Cloud can help customers secure their open source software by providing them with the same open source packages that Google uses. By getting security assurances from using these open source packages, Google Cloud customers can enhance their security posture and build their own software using the same tools that we use, such as Cloud Build, Artifact Registry, and Container/Artifact Analysis. Here’s how Assured OSS can be incorporated into your software supply chain to provide additional software security assurances during the software development and delivery process.

Building security into your software supply chain

Out of the gate, the software development process begins with assurances from Google Cloud, as developers are able to use open-source software packages from the Assured OSS service through their integrated development environment (IDE). When developers commit their code to their Git code repository, Cloud Build is triggered to build their application in the same way Assured OSS packages are built. This includes Cloud Build automatically generating, signing, and storing the build provenance, which can provide up to SLSA level 2 assurance. As part of the build pipeline, the built artifacts are stored in Artifact Registry and automatically scanned for vulnerabilities, similar to how Assured OSS packages are scanned. Vulnerability scanning can be further enhanced using Kritis Signer policies that define acceptable vulnerability criteria, which can be validated by the build pipeline.

It’s important that only vetted applications be permitted into runtime environments like Google Kubernetes Engine (GKE) and Cloud Run. Google Cloud provides the Binary Authorization policy framework for defining and enforcing requirements on applications before they are admitted into these runtimes. Trust is accumulated in the form of attestations, which can be based on a broad range of factors including the use of blessed tools and repositories, vulnerability scanning requirements, or even manual processes such as code review and QA testing.

Once the application has been successfully built and stored with passing vulnerability scans and trust-establishing attestations, it’s ready to be deployed. Google Cloud Deploy can help streamline the continuous delivery process to GKE, with built-in delivery metrics and security and auditing capabilities. Rollouts to GKE can be configured with approval gates to ensure that the appropriate stakeholders or systems have approved application deployments to target environments.

When the application is deployed to the runtime, Binary Authorization is used to ensure that only applications that have previously been signed by Cloud Build, or have otherwise successfully collected the required attestations throughout the supply chain, are permitted to run.
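As a small illustration of that admission-control step, a Binary Authorization policy can be switched from its default allow-all rule to requiring an attestation. The project and attestor names below are placeholders, not values taken from this post (built-by-cloud-build is the attestor Cloud Build uses for its own provenance attestations):

  # Hypothetical sketch: require an attestation before workloads are admitted to the runtime.
  gcloud container binauthz policy export > /tmp/policy.yaml

  # Edit /tmp/policy.yaml so the default admission rule reads roughly:
  #   defaultAdmissionRule:
  #     evaluationMode: REQUIRE_ATTESTATION
  #     enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  #     requireAttestationsBy:
  #     - projects/my-project/attestors/built-by-cloud-build

  gcloud container binauthz policy import /tmp/policy.yaml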
This software supply chain allows you to build your applications in a similar manner to our Assured OSS packages and securely delivers them to a runtime with added assurances provided by Cloud Deploy and Binary Authorization. As a result, you’re able to validate the integrity of the application that you developed, built, and deployed, and have a greater level of confidence in the security of your running applications.

Take the next step

We are thrilled to provide you with a growing set of capabilities across our services to help secure your software supply chain. To get started, try out Cloud Build, Artifact Registry, Container/Artifact Analysis, Cloud Deploy, and Binary Authorization. To learn more about Assured OSS, please fill out this form.
Source: Google Cloud Platform

SUSECON 2022: Powering Business Critical Linux workloads on Azure

Since 2009, Microsoft and SUSE have partnered to provide Azure-optimized solutions for SUSE Linux Enterprise Server (SLES). SLES for SAP Applications is the leading platform for SAP solutions on Linux, with over 90 percent of SAP HANA deployments and 70 percent of SAP NetWeaver applications running on SUSE. Microsoft and SUSE jointly offer agility and flexibility for next-generation SAP landscapes powered by SAP HANA.

Microsoft is sponsoring SUSECON Digital 2022 to bring the latest technological advancements to customers and the open-source community at large. In keeping with SUSECON Digital 2022’s Future Forward theme, we’ll be shining light on the latest and greatest, unraveling the worlds of business-critical Linux, enterprise container management, and edge innovation. In the three-day event from June 7 to 9, Microsoft will participate in several activities such as keynotes, demo sessions, and virtual booths. When you need a break, there are wellness sessions, games, and opportunities to win exciting prizes.

The need for innovation has never been greater. To become more agile and accelerate innovation, organizations are embarking on journeys of digital transformation. These transformations require modernizing legacy infrastructure and applications, adopting cloud-native technologies, and pushing organizational boundaries beyond the data center and cloud, all the way to the edge.

Hiren Shah, Head of Products for SAP on Azure Core Platform at Microsoft, will join Markus Noga, GM of Linux at SUSE, at the Business Critical Linux keynote session to highlight how our partnership powers SAP workloads in critical business functions such as finance, supply chain, and procurement, and enhances the customer experience. Downtime caused by infrastructure outages can result in business disruption and lost revenue. A few areas where joint development is currently in progress include:

High availability with SUSE Pacemaker clusters for SAP HANA workloads
Automated resource migration when infrastructure fails
Automated deployment of SAP HANA workloads and operating system configuration, including high availability
Monitoring of high availability (HA) setups with Azure Monitor for SAP Solutions
Identifying common configuration errors through customer engagements
Live patching, balanced uptime, and security needs

In addition to the Cornerstone Keynote discussion, the Microsoft team will deliver six breakout sessions, covering topics such as the Azure high-performance computing software platform, SLES for SAP applications, Azure Hybrid Benefit, SQL Server, automotive software development, and more. We will focus on best practices for SQL Server on SLES-based Azure Virtual Machines. Many of our customers are now deploying SQL Server containers as part of their data estate modernization strategy; the sessions will cover how Rancher can be used to deploy SQL Server containers and manage production workloads.

Migrating mission-critical SAP workloads can be complex for enterprises. Microsoft’s open source SAP Deployment Automation Framework can help customers deploy infrastructure using Terraform (infrastructure as code) and install SAP using Ansible (configuration as code). SUSE has been a co-development partner with Microsoft in developing this open source framework. The framework enables the accelerated deployment of SAP and is aligned with reference architecture and best practices. We are excited to continue our partnership with SUSE as we explore synergies within SAP operations (and beyond), and we look forward to seeing an increasing number of customers and partners leverage our framework.
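The framework is open source; a rough sketch of taking a first look at it might be as simple as the following (the repository URL and folder names are assumptions based on the project’s public GitHub repo, not steps taken from this post):

  # Hypothetical sketch: fetch the SAP Deployment Automation Framework and inspect
  # the Terraform (infrastructure as code) and Ansible (configuration as code) content.
  git clone https://github.com/Azure/sap-automation.git
  cd sap-automation
  ls deploy   # folder layout is an assumption; browse for the Terraform and Ansible content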

Learn more about the latest updates

Join us at SUSECON Digital 2022 to learn more and bring your questions to our experts. Learn more about deploying secure, reliable, flexible hybrid cloud environments using SUSE solutions on Azure.
Explore how Azure Hybrid Benefit extends to SLES workloads on Azure, removing migration friction with integrated support provided by Microsoft.

Resources

Learn how Azure Monitor for SAP Solutions monitors products for customers.
Read more about SQL Server on Virtual Machines and how to migrate to the cloud.
Discover how to use SUSE SAP automation solution on Azure.

Source: Azure

Top 5 reasons to attend Azure Hybrid, Multicloud, and Edge Day

Infrastructure and app development are becoming more complex as organizations span a combination of on-premises, cloud, and edge environments. Such complexities arise when:

Organizations want to maximize their existing on-premises investments like traditional apps and datacenters.
Workloads can’t be moved to public clouds due to regulatory or data sovereignty requirements.
Low latency is required, especially for edge workloads.
Organizations need innovative ways to transform their data insights into new products and services.

Operating across disparate environments presents management and security complexities. But comprehensive hybrid solutions can not only address these complexities but also offer new opportunities for innovation. For example, organizations can innovate anywhere across hybrid, multicloud, and edge environments by bringing Azure security and cloud-native services to those environments with a solution like Azure Arc.

That’s why we’re excited to present Azure Hybrid, Multicloud, and Edge Day—your chance to see how to innovate anywhere with Azure Arc. Join us at this free digital event on Wednesday, June 15, 2022, from 9:00 AM‒10:30 AM Pacific Time.

Here are five reasons to attend Azure Hybrid, Multicloud, and Edge Day:

Hear real-world success stories, tips, and best practices from customers using Azure Arc. IT leaders from current customers will share how they use Azure Arc to enable IT, database, and developer teams to deliver value to their users faster, quickly mine business data for deeper insights, modernize existing on-premises apps, and easily keep environments and systems up to date.
Be among the first to hear Microsoft product experts present innovations, news, and announcements for Azure Arc. Get the latest updates on the most comprehensive portfolio of hybrid solutions available.
See hybrid solutions in action. Watch demos and technical deep dives—led by Microsoft engineers—on hybrid and multicloud solutions, including Azure Arc and Azure Stack HCI. You’ll also hear product leaders present demos on Azure Arc–enabled SQL Managed Instance, Business Critical—a service tier that just recently became generally available. Business Critical is built for mission-critical workloads that require the most demanding performance, high availability, and security.
Get answers to your questions. Use the live Q&A chat to ask your questions and get insights on your specific scenario from Microsoft product experts and engineers.
Discover new skill-building opportunities. Learn how you can expand your hybrid and multicloud skillset with the latest trainings and certifications from Microsoft, including the Windows Server Hybrid Administrator Associate certification.

And here’s a first look at one of the Azure customers sharing their perspective at this digital event: Greggs

A United Kingdom favorite for breakfast, lunch, and coffee on the go, Greggs has been modernizing their 80-year-old business through digital transformation. When they needed to consolidate the sprawl across their on-premises server estate and their virtual machines, their IT team turned to Azure Arc.

“One of the advantages of Arc was that we could use one strategy across both on-premises and off-premises architecture,” says Scott Clennell, Head of Infrastructure and Networks at Greggs. “We deployed Azure Arc on our on-premises architecture, then throughout the rest of the infrastructure very rapidly—a matter of a couple of weeks.”
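For readers curious what that rollout looks like at the level of a single machine, on-premises servers are connected to Azure Arc with the Connected Machine agent; a minimal sketch, with placeholder resource group, location, and IDs, looks roughly like this:

  # Hypothetical sketch: connect one on-premises server to Azure Arc after installing azcmagent.
  sudo azcmagent connect \
    --resource-group "arc-servers" \
    --location "westeurope" \
    --tenant-id "<tenant-id>" \
    --subscription-id "<subscription-id>"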

Not only has Azure Arc helped the IT team manage their digital estate better—it’s transformed their team culture. By uniting their entire IT team around Azure Arc, they can work better with their developers using common systems and collaboration tools.

Hear from Greggs and more featured customers at Azure Hybrid, Multicloud, and Edge Day. We hope you can attend!

Azure Hybrid, Multicloud, and Edge Day

June 15, 2022
9:00 AM‒10:30 AM Pacific Time

Delivered in partnership with Intel.

Source: Azure

Improve outbound connectivity with Azure Virtual Network NAT

For many customers, making outbound connections to the internet from their virtual networks is a fundamental requirement of their Azure solution architectures. Factors such as security, resiliency, and scalability are important to consider when designing how outbound connectivity will work for a given architecture. Luckily, Azure has just the solution for ensuring highly available and secure outbound connectivity to the internet: Virtual Network NAT. Virtual Network NAT, also known as NAT gateway, is a fully managed and highly resilient service that is easy to scale and specifically designed to handle large-scale and variable workloads.

NAT gateway provides outbound connectivity to the internet through its attachment to a subnet and public IP address. NAT stands for network address translation, and as the name implies, when NAT gateway is associated with a subnet, all of the private IPs of the subnet’s resources (such as virtual machines) are translated to NAT gateway’s public IP address. The NAT gateway public IP address then serves as the source IP address for the subnet’s resources. NAT gateway can be attached to a total of 16 IP addresses from any combination of public IP addresses and prefixes.
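A minimal sketch of that configuration with the Azure CLI might look like the following; the resource names and region are placeholders, not values from this post:

  # Hypothetical sketch: create a public IP, create a NAT gateway that uses it,
  # and associate the NAT gateway with a subnet.
  az network public-ip create --resource-group my-rg --name nat-ip --sku Standard --location eastus

  az network nat gateway create --resource-group my-rg --name my-nat-gateway \
    --public-ip-addresses nat-ip --idle-timeout 4 --location eastus

  az network vnet subnet update --resource-group my-rg --vnet-name my-vnet \
    --name workload-subnet --nat-gateway my-nat-gateway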

Figure 1: NAT gateway configuration with a subnet and a public IP address and prefix.

Customer is halted by connection timeouts while trying to make thousands of connections to the same destination endpoint

Customers in industries like finance, retail, or other scenarios that require leveraging large sets of data from the same source need a reliable and scalable method to connect to this data source.

In this blog, we’re going to walk through one such example that was made possible by leveraging NAT gateway.

Customer background

A customer collects a high volume of data to track, analyze, and ultimately make business decisions for one of their primary workloads. This data is collected over the internet from a service provider’s REST APIs, hosted in a data center they own. Because the data sets the customer is interested in may change daily, a recurring report can’t be relied on—they must request the data sets each day. Because of the volume of data, results are paginated and shared in chunks. This means that the customer must make tens of thousands of API requests for this one workload each day, typically taking from one to two hours. Each request correlates to its own separate HTTP connection, similar to their previous on-premises setup.

The starting architecture

In this scenario, the customer connects to REST APIs in the service provider’s on-premises network from their Azure virtual network. The service provider’s on-premises network sits behind a firewall. The customer started to notice that sometimes one or more virtual machines waited for long periods of time for responses from the REST API endpoint. These connections waiting for a response would eventually time out and result in connection failures.

Figure 2: The customer sends traffic from their virtual machine scale set (VMSS) in their Azure virtual network over the internet to an on-premises service provider’s data center server (REST API) that is fronted by a firewall.

The investigation

Upon deeper inspection with packet captures, it was found that the service provider’s firewall was silently dropping incoming connections from their Azure network. Since the customer’s architecture in Azure was specifically designed and scaled to handle the volume of connections going to the service provider’s REST APIs for collecting the data they required, this seemed puzzling. So, what exactly was causing the issue?

The customer, the service provider, and Microsoft support engineers collectively investigated why connections from the Azure network were being sporadically dropped, and made a key discovery. Only connections coming from a source port and IP address that had been used recently (within roughly 20 seconds) were dropped by the service provider’s firewall. This is because the service provider’s firewall enforces a 20-second cooldown period on new connections coming from the same source IP and port. Any connections using a new source port on the same public IP were not impacted by the firewall’s cooldown timer. From these findings, it was concluded that source network address translation (SNAT) ports from the customer’s Azure virtual network were being reused too quickly to make new connections to the service provider’s REST API. When ports were reused before the cooldown timer completed, the connection would time out and ultimately fail. The customer was then confronted with a question: how do we prevent ports from being reused too quickly when connecting to the service provider’s REST API? Since the firewall’s cooldown timer could not be changed, the customer had to work within its constraints.

NAT gateway to the rescue

Based on this data, NAT gateway was introduced into the customer’s setup in Azure as a proof of concept. With this one change, connection timeout issues became a thing of the past.

NAT gateway was able to resolve this customer’s outbound connectivity issue to the service provider’s REST APIs for two reasons. First, NAT gateway selects ports at random from a large inventory of ports. The source port selected to make a new connection has a high probability of being new and therefore will pass through the firewall without issue. This large inventory of ports available to NAT gateway is derived from the public IPs attached to it. Each public IP address attached to NAT gateway provides 64,512 SNAT ports to a subnet’s resources, and up to 16 public IP addresses can be attached to NAT gateway. That means a customer can have over 1 million SNAT ports (16 x 64,512 = 1,032,192) available to a subnet for making outbound connections. Second, source ports being reused by NAT gateway to connect to the service provider’s REST APIs are not impacted by the firewall’s 20-second cooldown timer, because NAT gateway places source ports on their own cooldown timer for at least as long as the firewall’s before they can be reused. See our public article on NAT gateway SNAT port reuse timers to learn more.
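Expanding that SNAT port inventory is a matter of attaching more public IP addresses to the NAT gateway. Here is a hedged sketch with placeholder names; note that the update command sets the full list of IPs, so existing ones must be included:

  # Hypothetical sketch: add a second public IP so the subnet gains another 64,512 SNAT ports.
  az network public-ip create --resource-group my-rg --name nat-ip-2 --sku Standard --location eastus

  az network nat gateway update --resource-group my-rg --name my-nat-gateway \
    --public-ip-addresses nat-ip nat-ip-2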

Stay tuned for our next blog where we’ll do a deep dive into how NAT gateway solves for SNAT port exhaustion through not only its SNAT port reuse behavior but also through how it dynamically allocates SNAT ports across a subnet’s resources.

Learn more

Through the customer scenario above, we saw how NAT gateway’s selection and reuse of SNAT ports makes it Azure’s recommended option for connecting outbound to the internet. Because its randomized port selection mitigates not only the risk of SNAT port exhaustion but also connection timeouts, NAT gateway ultimately serves as the best option when connecting outbound to the internet from your Azure network.

To learn more about NAT gateway, see Design virtual networks with NAT gateway.
Source: Azure