Star Trek: Third season of Lower Decks starts on August 25
The animated Star Trek series Lower Decks is entering its third round: the new season launches on August 25 with ten episodes. (Star Trek, Amazon)
Source: Golem
Unlike AWS with its in-house Graviton chips, Google is relying on Ampere's Arm server chips, just like the rest of the competition. (Google, Processor)
Source: Golem
In ice hockey's earlier days, National Hockey League (NHL) coaches made their most important decisions based on gut instinct. Today, experience and instinct are still vital, but NHL coaches now have another essential tool at their disposal: powerful data analytics. Before and after every game, coaches and even players meticulously pore over game data and review detailed statistics to improve performance and strategy. And while this is a win for the NHL, higher-end data analytics tools have typically been out of reach for youth hockey teams, largely because capturing game performance data on the ice is expensive, complicated, and time consuming.
We built Drive Hockey Analytics to democratize pro-level analytics and help young players develop their gameplay and build a higher hockey IQ. Coaches and parents can now easily and affordably track 3,000 data points per second from players, sticks, and pucks. Drive Hockey Analytics, which takes 15 minutes to set up at the rink after initial calibration, converts these raw data points into actionable statistics and insights to improve player performance in real time and boost post-game training.
Scaling a market-ready stick and puck tracking platform on Google Cloud
Drive Hockey Analytics began as an engineering project in the MAKE+ prototype lab of the British Columbia Institute of Technology (BCIT). We quickly realized that we couldn't transform Drive Hockey Analytics into a market-ready stick and puck tracking platform without shifting more resources to R&D. After meeting with the dedicated Google Startup Success Managers from the Google for Startups Cloud Program, and with their support, we decided to migrate from AWS to Google Cloud so our small team could reduce IT costs and accelerate time to market. Google Cloud solutions make everything easier to build, scale, and secure.
We immediately took advantage of Google Cloud's highly secure-by-design infrastructure to implement robust user authentication and institute strict privacy controls to comply with the Children's Online Privacy Protection Act (COPPA). In just days, we enabled coaches and players to access individual analytics dashboards and more securely share key statistics, such as speed, acceleration, agility and edgework, zone time, and positioning, with teammates and family.
We also separated performance and personal storage data on Google Cloud, encrypted containers with Google Kubernetes Engine (GKE), and wrote third-party applications and pipelines that autoscale with Spark on Google Cloud. These processes could have taken us weeks or even months if we had to manually design and integrate all these security capabilities on our own.
To build our interactive player analytics engine, we leveraged TensorFlow, BigQuery, and MongoDB Atlas on Google Cloud. With the simple and flexible architecture offered in Google Cloud, we quickly moved from concept to code, and from code to state-of-the-art predictive models. We now collect and analyze thousands of data points every second to identify key performance metrics, break out game intelligence, and deliver actionable recommendations. Coaches and players can leverage this data to increase team possession of the puck, optimize player positions, reduce shot attempts, and score more goals.
In the future, we plan to explore additional Google products and services such as Google Cloud Tensor Processing Units (TPUs), Google Cloud Endpoints for OpenAPI, and Google Ads.
These solutions will enable us to further expand our ML stack, leverage streaming data from wearables and cameras, and reach new markets.
Bringing pro-level sports analytics to youth hockey
The Startup Success team has been instrumental in helping us rapidly transform Drive Hockey Analytics from a university engineering project into a top-shelf player and puck tracking system. Their guidance and responsiveness are amazing, with a human touch that stands out compared to services from other technology providers. We especially want to highlight the Google Cloud research credits that help us affordably explore new solutions to address extremely large dataset challenges. Thanks to these credits, we successfully process thousands of data points in streams and batches, apply ML-driven logic, and run resource-efficient queries. Google Cloud research credits also give us access to dedicated startup experts, managed compute power, vast amounts of secure storage, and the potential to join the Google Cloud Marketplace.
Demand for Drive Hockey Analytics continues to grow, and we constantly evolve our platform based on input from youth teams and coaches. We're looking to go fully to market in 2023. With Drive Hockey Analytics, youth teams are putting on their mitts and taking control of the puck as they improve real-time player performance and help their team count more wins. We can't wait to see what we accomplish next as we continue transforming dusters into barnburners by democratizing advanced analytics that were once only available to pro sports teams.
If you want to learn more about how Google Cloud can help your startup, visit our page here to get more information about our program, and sign up for our communications to get a look at our community activities, digital events, special offers, and more.
Source: Google Cloud Platform
Organizations are driving the complete transformation of their business by inventing new ways to accomplish their objectives using the cloud: from making core processes more efficient, to improving how they reach and better serve their customers, to achieving insights through data that fuel innovation. Cloud infrastructure belongs at the center of every organization's transformation strategy. We see a vast landscape of opportunity to innovate in our cloud's core capabilities that will have long-standing impact on the speed and simplicity of building solutions on Google Cloud. From data management and machine learning to security and sustainability, we continue to invest deeply in infrastructure innovation that generates value from the foundation upward. We focus on three defining attributes of our infrastructure that help our customers accelerate through innovation:
Optimized: Customers want solutions that meet their specific needs. They want to build and run apps where they need them, tailored for popular workloads, industry solutions, and specific outcomes, whether that is high performance, cost savings, or a balance of both. Their workloads should just run better on Google Cloud.
Transformative: Transformation is more than "lifting and shifting" infrastructure to the cloud for cost savings and convenience. Transformative infrastructure integrates the best of Google's AI and ML capabilities to drive faster innovation, while meeting the most stringent security, sovereignty, and compliance needs.
Easy: As cloud platforms become more versatile, they can become very complex to adopt and operate. Reducing your operational burden is possible with an easy-to-use cloud platform. Our customers often tell us that Google Cloud makes complex tasks seem simple, and this is a product of intentional engineering.
Google's 20+ years of technology leadership are built on a culture of innovation and a focus on our customers. Here are some examples of the new innovations we are bringing in these areas.
Solutions that are optimized for what matters most to you
Let's start with optimizing for price-performance. Last year, we launched Tau VMs, optimized for cost-effective performance of scale-out workloads. Tau T2D leapfrogged every leading public cloud provider in both performance and total cost of ownership, delivering up to 42% better price-performance versus comparable VMs from any other leading cloud. Today, we are delighted to announce that we are offering more choice to customers with the addition of Arm-based machines to the Tau VM family. Powered by Ampere® Altra® Arm-based processors, T2A VMs deliver exceptional single-threaded performance at a compelling price, making them ideal for scale-out, cloud-native workloads. Developers now have the option of choosing the optimal architecture to test, develop, and run their workloads.
Cost optimization is a major goal for many of our customers. Spot VMs enable you to take advantage of our idle machine cycles at deep discounts, with a guaranteed 60% off and up to 91% savings off on-demand pricing. These are the perfect choice for batch jobs and fault-tolerant workloads in high performance computing, big data, and analytics. Customers told us that they would like to see less variability and more predictability in the pricing of Spot VMs. We have heard you loud and clear: our Spot VMs offer the least variability (once-per-month price changes) and more predictable pricing compared to other leading clouds.
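To make this concrete, here is a minimal, hedged sketch of requesting a Spot VM from the gcloud CLI; the instance name, zone, and machine type below are placeholders:

gcloud compute instances create my-spot-vm \
    --zone=us-central1-a \
    --machine-type=t2d-standard-4 \
    --provisioning-model=SPOT

The --provisioning-model=SPOT flag marks the instance as preemptible spare capacity, which is what unlocks the discounted pricing.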
Optimizing for global scale is critical to meet the high demands of today's consumers, especially when it comes to video streaming. Launched in May 2022, Media CDN is optimized to deliver immersive video streaming experiences at a global scale. Available in more than 1,300 cities, Media CDN leverages the same infrastructure that YouTube uses to deliver content to over 2 billion users around the world. Customers including U-NEXT and Stan have quickly rolled out Media CDN to deliver a modern, high-quality experience to their viewers.
Another emerging opportunity is the rise of distributed systems and distributed workers, and the ability to build and run apps wherever needed. With Google Distributed Cloud, we now extend Google Cloud infrastructure and services to different physical locations (or distributed environments), including on-premises or co-location data centers and a variety of edge environments. Anthos powers all Google Distributed Cloud offerings to deliver a common control plane for building, deploying, and running your modern containerized applications at scale, wherever you choose. For greater choice, we have designed Google Distributed Cloud as a portfolio of hardware, software, and services with multiple offerings to address the specific requirements of your workloads and use cases. You can choose from our Edge, Virtual, and Hosted offerings to meet the needs of your business.
Driving transformation through AI/ML and security
The pace of innovation in the field of machine learning continues to accelerate, and Google has been a long-time pioneer. From Search and YouTube to Play and Maps, ML has helped bring out the best that our products have to offer. We've made it a point to make the best of Google available to our customers, and JAX and Cloud TPU v4 are two great examples. JAX is a cutting-edge open source ML framework developed by Google researchers. It's designed to give ML practitioners more flexibility and allow them to more easily scale their models to the largest of scales. We recently made Cloud TPU v4 pods available to all our customers through our new ML hub. This cluster of Cloud TPU v4 pods offers 9 exaflops of peak aggregate performance and runs on 90% carbon-free energy, making it one of the fastest, most efficient, and most sustainable ML infrastructure hubs in the world. Cloud TPU v4 has enabled researchers to train a variety of sophisticated models, including natural language processing models and recommender models, to name a few. Customers are already seeing the benefits, including Cohere, who saw a 70% improvement in training times, and LG Research, who used Cloud TPU v4 to train their large multi-modal 300-billion-parameter model.
On the security front, increasing cybersecurity threats have every company rethinking its security posture. Our investments in our planet-scale network that is secure, performant, and reliable are matched by our lead in defining industry-wide frameworks and standards to help customers better secure their software supply chain. Last year, Google introduced SLSA (Supply-chain Levels for Software Artifacts), an end-to-end framework for ensuring the integrity of artifacts throughout the software supply chain. It is an open-source equivalent of many of the processes we have been implementing internally at Google. We challenge ourselves to enable security without complex configuration or performance degradation.
One example of this is our Confidential VMs, where data is stored in a trusted execution environment, outside of which it is impossible to view the data or the operations performed on it, even with a debugger. Another is Cloud Intrusion Detection System (Cloud IDS), which provides network threat detection built on ML-powered threat analysis that processes over 15 trillion transactions per day to identify new threats, with 4.3 million unique security updates made each day. With the highest possible rating of AAA from CyberRatings.org, Cloud IDS has proven efficacy in blocking virtually all evasions.
Developer-first ease of use
Making your transformation journey simpler, with easy-to-use tools to accelerate your innovation, is our priority. Today, we are introducing Batch in preview, a fully managed job scheduler that helps customers run thousands of batch jobs with just a single command (a brief sketch appears at the end of this post). It's easy to set up, and it supports throughput-oriented workloads, including those requiring MPI libraries. Jobs run on auto-scalable resources, giving you more time to work on the greatest areas of value. This improves the developer experience for executing HPC, AI/ML, and data processing workloads such as genomics sequencing, media rendering, financial risk modeling, and electronic design automation.
Continuing innovation for greater ease, we recently announced the availability of the new HPC Toolkit. This is an open source tool from Google Cloud that enables you to easily create repeatable, turnkey HPC clusters based on proven best practices, in minutes. It comes with several blueprints and broad support for third-party components such as the Slurm scheduler, Intel DAOS, and DDN Lustre storage.
System performance and awareness of what your infrastructure is doing are closely tied to security, but to do this well, it needs to be easy. We recently introduced Network Analyzer to help customers transform reactive workflows into proactive processes and reduce network and service downtime by automatically monitoring VPC network configurations. Network Analyzer is part of our Network Intelligence Center, providing a single console for Google Cloud network observability, monitoring, and troubleshooting.
This is just a sample of what we are doing in Google Cloud to provide infrastructure that gives customers the freedom to securely innovate and scale from on-premises, to edge, to cloud on an easy, transformative, and optimized platform. To learn more about how customers such as Broadcom and Snap are using Google Cloud's flexible infrastructure to solve their biggest challenges, be sure to watch our Infrastructure Spotlight event, aired today.
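As the hedged sketch promised above, submitting a Batch job from the gcloud CLI looks roughly like the following; the job name, location, and config file are placeholders, and the exact command surface may evolve while the service is in preview:

gcloud batch jobs submit my-batch-job \
    --location=us-central1 \
    --config=job-config.json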
Source: Google Cloud Platform
Organizations that are developing ever larger, scale-out applications will leave no stone unturned in their search for a compute platform that meets their needs. For some, that means looking to the Arm® architecture. Known for delivering excellent performance-per-watt efficiency, Arm-based chips are already ubiquitous in mobile devices and have proven themselves for supercomputing workloads. At Google Cloud, we're also excited about using Arm chips for the next generation of scale-out, cloud-native workloads.
Last year, we added Tau VMs to Compute Engine, offering a new family of VMs optimized for cost-effective performance for scale-out workloads. Today we are thrilled to announce the Preview release of our first VM family based on the Arm architecture, Tau T2A. Powered by Ampere® Altra® Arm-based processors, T2A VMs deliver exceptional single-threaded performance at a compelling price. Tau T2A VMs come in multiple predefined VM shapes, with up to 48 vCPUs per VM and 4GB of memory per vCPU. They offer up to 32 Gbps networking bandwidth and a wide range of network-attached storage options, making Tau T2A VMs suitable for scale-out workloads including web servers, containerized microservices, data-logging processing, media transcoding, and Java applications.
Google Cloud customers and developers now have the option of choosing an Arm-based Google Cloud VM to test, develop, and run their workloads on the optimal architecture for their workload. Several of our customers have had private preview access to T2A VMs for the last few months and have had a great experience with these new VMs. Below is what a few of them have to say about T2A VMs.
"Our drug discovery research at Harvard includes several compute-intensive workloads that run on SLURM using VirtualFlow1. The ability to run our workloads on tens of thousands of VMs in parallel is critical to optimize compute time. We ported our workload to the new T2A VM family from Google and were up and running with minimal effort. The improved price-performance of the T2A will help us screen more compounds and therefore discover more promising drug candidates." – Christoph Gorgulla, Research Associate, Harvard University
"In recent years, we have come to rely on Arm-based servers to power our engineering activity at lower cost and higher performance compared to legacy environments. The introduction of the Arm Neoverse N1-based T2A instance allows us to diversify our use of cloud compute on Arm-based hardware and leverage Google Compute Engine to build the exact virtual machine types we need, with the convenience of Google Kubernetes Engine for containerized workloads." – Mark Galbraith, Vice President, Productivity Engineering, Arm
Ampere Computing has been a key partner for Google Cloud in delivering this VM. "Ampere® Altra® Cloud Native Processors were designed from the ground up to meet the demands of modern cloud applications," said Jeff Wittich, Chief Product Officer, Ampere Computing. "Our close collaboration with Google Cloud has resulted in the launch of the new price-performance optimized Tau T2A instances, which enable demanding scale-out applications to be deployed rapidly and efficiently."
Integration with Google Cloud services
Google Cloud is ramping up its support for Arm. T2A VMs support the most popular Linux operating systems, such as RHEL, CentOS, Ubuntu, and Rocky Linux. In addition, T2A VMs also support Container-Optimized OS to bring up Docker containers quickly, efficiently, and securely.
Further, developers building applications on Google Cloud can already use several Google Cloud services with T2A VMs, with more coming later this year:
Google Kubernetes Engine – Google Kubernetes Engine (GKE) is the leading platform for organizations looking for advanced container orchestration. Starting today, GKE customers can run their containerized workloads using the Arm architecture on T2A. Arm nodes come packed with key GKE features, including the ability to run in GKE Autopilot mode for a hands-off experience. Read more about running your Arm workloads with GKE here.
Batch – Our newly launched Batch service supports T2A. As of today, users can run batch jobs on T2A instances to optimize the cost of running their workloads.
Dataflow – Dataflow is a fully managed streaming analytics service that minimizes latency, processing time, and cost through autoscaling and batch processing. You can now use T2A VMs with your Dataflow workloads.
Extensive ISV partner ecosystem
While Arm chips are relative newcomers to data center workloads, there's already a robust ecosystem of ISV support for Tau T2A VMs. In fact, Ampere lists more than 100 applications, databases, cloud-native software packages, and programming languages that are already running on Ampere-based T2A VMs, with more being added all the time. Further, ISV partners that have validated their solutions on T2A VMs have been impressed by the ease with which they were able to port their software.
"Momento's serverless cache enables developers to accelerate database and application performance at scale. Over the past few months, we have become intimately familiar with Google Cloud's new T2A VMs. We were pleasantly surprised by the ease of portability to Arm instances from day one. The maturity of the T2A platform gives us the confidence to start using these VMs in production. Innovations like T2A VMs in Google Cloud help us continuously innovate on behalf of our customers." – Khawaja Shams, CEO, Momento. Learn more about Momento's T2A experience.
"SchedMD's Slurm open-source workload manager is designed specifically to satisfy the demanding needs of compute-intensive workloads. We are thrilled with the introduction of the T2A VMs on Compute Engine. The introduction of T2A will give our customers more choice of virtual machines for their demanding workload management needs using Slurm." – Nick Ihli, Director of Cloud and Solutions Engineering, SchedMD
"At Rescale, we help our customers deliver innovations faster with high performance computing built for the cloud. We are excited to now offer T2A VMs to our customers, with compelling price-performance to further drive engineering and scientific breakthroughs. With Arm-based VMs on Google Cloud, we are able to offer our customers a larger portfolio of solutions for computational discovery." – Joris Poort, CEO, Rescale
"Canonical Ubuntu is a popular choice for developers seeking a third-party server operating system running on Google Cloud, and we are very happy to provide Ubuntu as the guest OS for users of Compute Engine on Google Cloud's new Arm-based VMs, which support our most recent long-term supported versions.
Once migrated, users will find a completely familiar environment with all the packages and libraries they know and rely on to manage their workloads." – Alexander Gallagher, VP of Cloud Sales at Canonical
To help you get started, we're providing customers, ISVs, and ecosystem partners access to T2A VMs at no charge for a trial period to help jumpstart development on Ampere Arm-based processors. When Tau T2A reaches General Availability later this year, we'll continue to offer a generous trial program that offers up to 8 vCPUs and 32 GB of RAM at no cost.
Pricing and availability
Tau T2A VMs are price-performance optimized for your cloud-native applications. A 32-vCPU VM with 128 GB of RAM will be priced at $1.232 per hour for on-demand usage in us-central1. T2A VMs are currently in preview in several Google Cloud regions: us-central1 (Iowa – zones A, B, F), europe-west4 (Netherlands – zones A, B, C), and asia-southeast1 (Singapore – zones B, C), with General Availability to follow in the coming months. We look forward to working with you as you explore using Ampere Arm-based T2A VMs for your next scale-out workload in the cloud.
To learn more about Tau T2A VMs or other Compute Engine VM options, check out our machine types and pricing pages. To get started, go to the Google Cloud Console and select T2A for your VMs.
1. https://www.nature.com/articles/s41586-020-2117-z
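For those who prefer the command line to the console, here is a minimal, hedged sketch of creating a T2A VM; the instance name, zone, and Arm64 guest image are placeholder choices, and any supported Arm64 image can be substituted:

gcloud compute instances create my-arm-vm \
    --zone=us-central1-a \
    --machine-type=t2a-standard-4 \
    --image-family=ubuntu-2204-lts-arm64 \
    --image-project=ubuntu-os-cloud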
Source: Google Cloud Platform
At Google Kubernetes Engine (GKE), we obsess over customer success. One major way we continue to meet the evolving demands of our customers is by driving innovation in the underlying compute infrastructure. We are excited to now give our customers the ability to run their containerized workloads using the Arm® architecture!
Earlier today, we announced Google Cloud's virtual machines (VMs) based on the Arm architecture on Compute Engine. Called Tau T2A, these VMs are the newest addition to the Tau VM family, which offers VMs optimized for cost-effective performance for scale-out workloads. We are also thrilled to announce that you can run your containerized workloads on the Arm architecture using GKE. Arm nodes come packed with the key GKE features you love on the x86 architecture, including the ability to run in GKE Autopilot mode for a hands-off experience, or on GKE Standard clusters where you manage your own node pools. See 'Key GKE features' below for more details.
"The new Arm-based T2A virtual machines (VMs) supported on Google Kubernetes Engine (GKE) are providing cloud customers with the higher-performance and energy-efficient options required to run their modern containerized workloads. The Arm engineering team has collaborated on Kubernetes CI/CD enablement, and we look forward to seeing the ease of use and ecosystem support that comes with Arm support on GKE." – Bhumik Patel, Director of Software Ecosystem Development, Infrastructure Line of Business, Arm
Starting today, Google Cloud customers and developers can run their Arm workloads on GKE in Preview1 by selecting a T2A machine shape during cluster or node pool creation, either through gcloud or the Google Cloud console (a brief gcloud sketch appears at the end of this post). Check out our tutorial video to get started!
Some of our customers who had early access to T2A VMs highlighted the ease of working with their Arm workloads on GKE.
"Arcules offers cloud-based video surveillance as a service for multi-site customers that's easy to use, scalable, and reliable – all within an open platform and supported by customer service that truly cares. We are excited to run our workloads using Arm-based T2A VMs with Google Kubernetes Engine (GKE). We were thoroughly impressed by how easily we could provision Arm nodes on a GKE cluster, independently and alongside x86-based nodes. We believe that this multi-processor architecture will help us reduce costs while providing a better experience for our customers." – Benjamin Rowe, Cloud and Security Architect, Arcules
Key GKE features supported with Arm-based VMs
While the T2A is Google Cloud's first VM based on the Arm architecture, we've ensured that it comes with support for some of the most critical GKE features, with more on the way.
Arm Pods on GKE Autopilot – Arm workloads can be easily deployed on Autopilot with GKE version 1.24.1-gke.1400 or later in supported regions1 by specifying both the scale-out compute class (which also enters Preview today) and the Arm architecture using node selectors or node affinity. See the docs for an example Arm workload deployment on Autopilot.
Ease of use in creating GKE nodes – You can provision Arm nodes with GKE version 1.24 or later by using the Container-Optimized OS (COS) with containerd node image and selecting the T2A machine series. In other words, GKE automatically provisions the correct node image to be compatible with your choice of x86 or Arm machine series.
Multi-architecture clusters – GKE clusters support scheduling workloads on multiple compute (x86 and Arm) architectures.
A single cluster can have only x86 nodes, only Arm nodes, or a combination of both. You can even run the same workloads on both architectures in order to evaluate the optimal architecture for your workloads.
Networking and security features – Arm nodes support the latest GKE networking features, such as GKE Dataplane V2 and creating and enforcing a GKE network policy. GKE's security features, such as workload identity and shielded nodes, are also supported on Arm nodes.
Scalability features – When running your Arm workloads, you can use GKE's best-in-class scalability features such as cluster autoscaler (CA), node auto-provisioning (NAP), and horizontal and vertical pod autoscaling (HPA/VPA).
Support for Spot VMs – GKE supports T2A Spot VMs out of the box to help save costs on fault-tolerant workloads.
Enhanced developer tools
We've updated many popular Google Cloud developer tools to let you create containerized workloads that run on GKE nodes with both Arm and x86 architectures, simplifying the transition to developing for Arm or multi-architecture GKE clusters. When using Cloud Code IDE extensions or Skaffold on the command line, you can build Arm containers locally using Dockerfiles, Jib, or Ko, then iteratively run and debug your applications on GKE. With Cloud Code and Skaffold, building locally for GKE works automatically regardless of whether you're developing on an x86- or Arm-based machine. Whether you build Arm or multi-architecture images, Artifact Registry can be used to securely store and manage your build artifacts before deploying them. If you develop on Arm-based local workstations, you can use Minikube to emulate GKE clusters with Arm nodes locally while taking advantage of simplified authentication with Google Cloud using the gcp-auth addon. Finally, Google Cloud Deploy makes it easy to set up continuous delivery to Arm and multi-architecture GKE clusters, just like it does with x86 GKE clusters. Updating a pipeline for these Arm-inclusive clusters is as simple as pointing your Google Cloud Deploy pipeline to an image registry with the appropriate architecture image.
A robust DevOps, security, and observability ecosystem
We've also partnered with leading CI/CD, observability, and security ISVs to ensure that our partner solutions and tooling are compatible with Arm workloads on GKE. You can use the following partner solutions to run your Arm workloads on GKE straight out of the box.
Datadog provides comprehensive visibility into all your containerized apps running on GKE by collecting metrics, logs, and traces to help surface performance issues and provide context when troubleshooting. Starting today, you can use Datadog when running your Arm workloads on GKE. Learn more.
Dynatrace uses its software intelligence platform to track the availability, health, and utilization of applications running on GKE, thereby helping surface anomalies and determine their root causes. You can now use these features of Dynatrace with GKE Arm nodes. Learn more.
Palo Alto Networks' Prisma Cloud Daemonset Defenders enforce security policies for your cloud workloads, while Prisma Cloud Radar displays a comprehensive visualization of your GKE clusters as well as their containers and nodes, so you can easily identify risks and investigate incidents. Use Prisma Cloud Daemonset Defenders with GKE Arm nodes for enhanced cloud workload security.
Learn more.
Splunk Observability Cloud provides developers and operators with deep visibility into the composition, state, and ongoing issues within a cluster. You can now use Splunk Observability Cloud when running your Arm workloads on GKE. Learn more.
Agones is an open source platform built on top of Kubernetes that helps you deploy, host, scale, and orchestrate dedicated game servers for large-scale multiplayer games. Through a combination of efforts from the community and Google Cloud, Agones now supports the Arm architecture, starting with the 1.24.0 release of Agones. Learn more.
Try out GKE Arm today!
To help you make the most of your experience with GKE Arm nodes, we are providing guides to help you learn more about Arm workloads on GKE, create clusters and node pools with Arm nodes, build multi-arch images for Arm workloads, and prepare an Arm workload for deployment to your GKE cluster. To get started with running Arm workloads on GKE, check out the tutorial video!
1. T2A VMs are currently in preview in several Google Cloud regions: us-central1 (Iowa – zones A, B, F), europe-west4 (Netherlands – zones A, B, C), and asia-southeast1 (Singapore – zones B, C).
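As the gcloud sketch promised earlier: adding Arm nodes to an existing cluster is, as a hedged approximation, just a matter of selecting a T2A machine type when creating a node pool; the cluster name, pool name, and zone below are placeholders:

gcloud container node-pools create arm-pool \
    --cluster=my-cluster \
    --zone=us-central1-a \
    --machine-type=t2a-standard-4

GKE then provisions an Arm-compatible node image automatically, and workloads can target the new nodes with the kubernetes.io/arch: arm64 node selector.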
Source: Google Cloud Platform
As CentOS 7 reaches end of life, many enterprises are considering their options for an enterprise-grade, downstream Linux distribution on which to run their production applications. Rocky Linux has emerged as a strong alternative that, like CentOS, is 100% compatible with Red Hat Enterprise Linux. In April 2022, we announced a customer support partnership with CIQ, the official support and services partner and sponsor of Rocky Linux, as the first step in providing a best-in-class, enterprise-grade supported experience for Rocky Linux on Google Cloud.
Today we're excited to announce the general availability of Rocky Linux Optimized for Google Cloud. We developed this collection of Compute Engine virtual machine images in close collaboration with CIQ so that you get optimal performance when using Rocky Linux on Compute Engine to run your CentOS workloads. These new images contain customized variants of the Rocky Linux kernel and modules that optimize networking performance on Compute Engine infrastructure, while retaining bug-for-bug compatibility with community Rocky Linux and Red Hat Enterprise Linux. The high-bandwidth networking enabled by these customizations will benefit virtually any workload, and is especially valuable for clustered workloads such as HPC (see this page for more details on configuring a VM with high bandwidth).
Going forward, we'll collaborate with CIQ to publish both the community and Optimized for Google Cloud editions of Rocky Linux for every major release, and both sets of images will receive the latest kernel and security updates provided by CIQ and the Rocky Linux community. And of course, we'll offer support with CIQ for both of these images, per our partnership.
Rocky Linux Optimized for Google Cloud lets you take advantage of everything Compute Engine has to offer, including day-one support for our latest VM families, GPUs, and high-bandwidth networking. And for customers building for a multi-cloud deployment environment, the community Rocky images have you covered.
Starting today, Rocky Linux 8 Optimized for Google Cloud is available for all x86-based Compute Engine VM families (and soon for the new Arm-based Tau T2A), with version 9 soon to follow. Give it a try and let us know what you think.
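As a hedged sketch of trying it out, launching a VM from the optimized image family should look roughly like the following; the instance name and zone are placeholders, and the image family and project names reflect our understanding of the published Rocky Linux images:

gcloud compute instances create rocky-vm \
    --zone=us-central1-a \
    --image-family=rocky-linux-8-optimized-gcp \
    --image-project=rocky-linux-cloud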
Source: Google Cloud Platform
With the pandemic mostly behind us, several large economies have reopened in some shape or form, despite the uneven supply of goods and services and higher-than-usual energy costs. Higher energy costs and the resulting increase in the cost of doing business have led to a tighter economic outlook. Coupled with long lead times for required parts and continued remote work, datacenter management is harder and costlier than it has been. However, maintaining and growing any business requires additional information technology (IT) resources. Thus, there is an increased need for IT solutions to maintain business continuity and sustain innovation. Hyperscalers such as Microsoft's Azure fill this need and are less affected by these constraints due to economies of scale. Further, the cloud consumption model allows customers to quickly scale resources up or down to support agile businesses. This is why public cloud spend continues to accelerate, and the top cloud initiatives for all organizations are migrating more workloads, optimizing existing use, and modernizing through platform as a service (PaaS) or software as a service (SaaS)1.
Customer requirements
Customers need to stay competitive, on both the technical and business fronts, to ensure continued success. Technical competency requires an agile and innovative IT platform with data analytics that provide insights to help differentiate from the competition. Ideally, such an innovative platform would also be available at a lower cost. Modernizing existing IT infrastructure, applications, and data to PaaS/SaaS models in the cloud delivers on all these requirements, leading to a higher return on investment (ROI) for the customer.
The higher efficiency and lower cost that come with adopting modern cloud-native architectures, such as PaaS and SaaS, also lead to greater flexibility. This sets the stage for customers to realize greater value as they progress from IaaS to PaaS and on to SaaS models. Please download our analyst report for details on the options and value of application modernization in Azure.
Microsoft’s commitment to modernization
This week at Microsoft Inspire, we are highlighting our commitment to modernization with integrated, at-scale modernization of ASP.NET applications to Azure App Service. Also in preview is Azure Migrate's support for discovery and assessment of SQL Server running in Microsoft Hyper-V environments, physical environments, and IaaS services of other public clouds. Please see our tech community blog for more details on this and other Azure Migrate features available for Linux and Windows workloads.
Enabling deeper integration with our ISV partners
Azure Migrate’s extensible framework is ideal for deeper integration of first-party features to drive automation, while also leveraging third-party tools. Here is a brief view of partner capabilities that can be added to this flexible framework:
Over the years, enterprises have built and expanded custom applications, which require modernization to better support fast-changing business needs. See how Microsoft and CAST partner by combining Azure Migrate and software intelligence produced by CAST technology to automate migration and modernization under the Azure Migration and Modernization Program (AMMP).
Operability of your cloud infrastructure and workloads is key to cloud adoption success, and Azure landing zones provide prescriptive guidance to set a well-architected foundation for your Azure infrastructure. In partnership with HashiCorp and our Terraform Azure community, we now have a reference implementation for deploying and managing Azure resources at enterprise scale.
Learn more
Attend this Microsoft Inspire on-demand session to learn more about cloud migration and modernization. Check out this FastTrack link for moving to Azure efficiently and get best practice guidance from the Azure migration and modernization center (AMMC). AMMP is now one comprehensive program for all migration and modernization needs of our customers. Learn more and join AMMP today.
Sources:
1. Trends in Cloud Computing: 2022 State of the Cloud Report, Flexera.com.
Source: Azure
The financial services industry is constantly evolving to meet customer and regulatory demands, and it faces a variety of challenges spanning people, processes, and technology. Financial institutions (FIs) need to continuously accelerate technology and innovation while maintaining scale, quality, speed, and safety. Simultaneously, they need to handle evolving regulatory frameworks, manage risk, digitally transform, process financial transaction volumes, and accelerate cost reduction and restructuring efforts.
Murex is a leading global software provider of trading, risk management, processing operations, and post-trade solutions for capital markets. FIs around the world deploy Murex’s MX.3 platform to better manage risk, accelerate transformation, and simplify compliance while driving revenue growth.
Murex MX.3 on Azure
Murex MX.3 has been certified for Microsoft Azure since version 3.1.35. We have been collaborating with Murex and global strategic partners like Accenture and DXC to provide Murex customers with a simple way to create and scale MX.3 infrastructure and achieve agility in business transformation. With the recent version 3.1.48, SQL Server is supported, and customers can now benefit from the performance, scalability, resilience, and cost savings facilitated by SQL Server. With the SQL Server IaaS Extension, Murex customers can run SQL Server virtual machines (VMs) in Azure with PaaS capabilities for Windows OS (with the automated patching setting disabled in order to prevent the installation of a cumulative update that may not yet be supported by MX.3).
Architecture
Murex customers can now refer to the architecture to implement the MX.3 application on Azure. Azure provides a secure, reliable, and efficient platform, significantly reducing the infrastructure cost needed to operate MX.3 while delivering scalability and high performance. Customers running MX.3 on Azure can take advantage of multilayered security provided by Microsoft across physical data centers, infrastructure, and operations in Azure. They can benefit from the Compliance Program that helps accelerate cloud adoption with proactive compliance assurance for highly critical and regulated workloads. Customers can maximize their existing on-premises investments using an effective hybrid approach. Azure provides a holistic, seamless, and more secure approach to innovation across customers' on-premises, multicloud, and edge environments.
The architecture is designed to provide high availability and disaster recovery. Murex customers can achieve threat intelligence and traffic control using Azure Firewall, cost optimization using Reserved Instances and VM scale sets, and high storage throughput using Azure NetApp Files Ultra Storage.
“With the deployment of large scale—originally specialized platform-based—Murex workloads, Azure NetApp Files has proven to deliver the ideal Azure landing zone for storage-performance intensive, mission-critical enterprise applications and to live up to its promise to Migrate the Un-migratable," says Geert van Teylingen, Azure NetApp Files Principal Product Manager from NetApp.
Customers running Murex on Azure
Customers around the world are migrating the Murex platform from on-premises to Azure.
ABN AMRO has moved their MX.3 trading and treasury front-to-back-to-risk platform to Azure, achieving flexibility, agility, and improved time to market. ABN AMRO’s journey to Azure progressed from proof of concept to production, with the Murex MX.3 platform now entirely operational on Azure.
“The key focus for us was always to make sure that we could automate most processes while preserving its operational excellence and key features,” says Kees van Duin, IT Integrator at ABN AMRO.
“Thanks to Microsoft, we were able to preserve nearly 90 percent of our original design and move our platform to the cloud, while in-production, as efficiently as possible. We couldn’t be happier with the result,” he continues.
For Pavilion Energy, Upskills helped drive implementation for Murex Trading in Azure, helping reduce the risk of errors, increase the volume of trading activities, and optimize the management of their Murex MX.3 platform environments.
“We have been working on the Murex technology for over 10 years. Implementing Murex Trading Platform fully into Azure has proven to be the right decision to reduce the risk of delivery, optimize the environments management, and provide sustainable solutions and support to Pavilion Energy,” says Thong Tran, Chief Executive Officer (CEO) of Upskills.
Strategic partners helping accelerate Murex workloads
Murex customers can modernize MX.3 workloads, reduce time-to-market and operational costs, and accelerate delivery by leveraging accelerators, scripts, and blueprints from our partners, Accenture and DXC.
Accenture and Microsoft have decades of experience partnering with each other and building joint solutions that help customers achieve their goals. Leveraging our strategic alliance to better serve our customers, Accenture has designed and created specific accelerators, tools, and methodologies for MX.3 on Azure that could help organizations develop richer DevOps and become more agile while controlling costs.
Luxoft, a DXC Technology company (with Microsoft as a global strategic partner for more than 30 years and Murex as a top-tier alliance partner for more than 13 years), helps modernize solutions to connect people, data, and processes with tangible business results. DXC has developed execution frameworks that adopt market best practices to accelerate cloud migration of MX.3 to Azure and minimize its risks.
Keeping pace with the changing regulatory and compliance constraints, financial innovation, computation complexity, and cyber threats is essential for FIs. FIs around the world are relying on Murex MX.3 to accelerate transformation and drive growth and innovation while complying with complex regulations. Customers are using Azure to enhance business agility and operation efficiency, reduce risk and total cost of ownership, and achieve scalability and robustness.
Additional resources
Microsoft and Murex team to help FIs move to Azure
Murex MX.3 architecture
ABN AMRO digital transformation journey with Murex
Source: Azure
Containers optimize our daily development work. They’re standardized, so that we can easily switch between development environments — either migrating to testing or reusing container images for production workloads.
However, a challenge arises when you need more than one container. For example, you may develop a web frontend connected to a database backend with both running inside containers. While possible, this approach risks negating some (or all) of that container magic, since we must also consider storage interaction, network interaction, and port configurations. Those added complexities are tricky to navigate.
How Docker Compose Can Help
Docker Compose streamlines many development workloads based around multi-container implementations. One such example is a WordPress website that’s protected with an NGINX reverse proxy, and requires a MySQL database backend.
Alternatively, consider an eCommerce platform with a complex microservices architecture. Each service runs inside its own container — from the product catalog, to the shopping cart, to payment processing, and, finally, product shipping. These services rely on the same database backend container, using a Redis container for caching and performance.
Maintaining a functional eCommerce platform means running several container instances. This doesn’t fully address the additional challenges of scalability or reliable performance.
While Docker Compose lets us create our own solutions, building the necessary Dockerfile scripts and YAML files can take some time. To simplify these processes, Docker introduced the open source Awesome Compose library in March 2020. Developers can now access pre-built samples to kickstart their Docker Compose projects.
What does that look like in practice? Let’s first take a more detailed look at Docker Compose. Next, we’ll explore step-by-step how to spin up a new development project using Awesome Compose.
Having some practical knowledge of Docker concepts and base commands is helpful while following along. However, this isn’t required! If you’d like to brush up or become familiarized with Docker, check out our orientation page and our CLI reference page.
How Docker Compose Works
Docker Compose is based on a compose.yaml file. This file specifies the platform’s building blocks — typically referencing active ports and the necessary, standalone Docker container images.
The diagram below represents snippets of a compose.yaml file for a WordPress site with a MySQL database, a WordPress frontend, and an NGINX reverse proxy:
We’re using three separate Docker images in this example: MySQL, WordPress, and NGINX. Each of these three containers has its own characteristics, such as network ports and volumes.
mysql:
  image: mysql:8.0.28
  container_name: demomysql
  networks:
    - network
wordpress:
  depends_on:
    - mysql
  image: wordpress:5.9.1-fpm-alpine
  container_name: demowordpress
  networks:
    - network
nginx:
  depends_on:
    - wordpress
  image: nginx:1.21.4-alpine
  container_name: nginx
  ports:
    - 80:80
  volumes:
    - wordpress:/var/www/html
Originally, you’d have to use the docker run command to start each individual container. However, this introduces hiccups while managing interactions across each container related to network and storage. It’s much more efficient to consolidate all necessary objects into a docker compose scenario.
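For comparison, here's a simplified, hedged sketch of what that docker run approach might look like for the three containers above; it omits the environment variables each image requires, and the wp-net network name is a hypothetical choice that lets the containers reach each other:

docker network create wp-net

docker run -d --name demomysql --network wp-net mysql:8.0.28
docker run -d --name demowordpress --network wp-net wordpress:5.9.1-fpm-alpine
docker run -d --name nginx --network wp-net -p 80:80 -v wordpress:/var/www/html nginx:1.21.4-alpine

Each container must be started (and later stopped) individually, and the network and volume wiring lives in your shell history instead of a versioned file, which is exactly the bookkeeping Docker Compose eliminates.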
To help developers deploy baseline scenarios faster, Docker provides a GitHub repository with several environments, available for you to reuse, called Docker Awesome Compose. Let’s explore how to run these on your own machine.
How to Use Docker Compose
Getting Started
First, you'll need to download and install Docker Desktop (for macOS, Windows, or Linux). Note that all example outputs in this article come from a Windows Docker host.
You can verify that Docker is installed by running a simple docker run hello-world command:
C:>docker run hello-world
This should produce the following output, indicating that things are working correctly:
You’ll also need to install Docker Compose on your machine. Similarly, you can verify this installation by running a basic docker compose command, which triggers a corresponding response:
C:>docker compose
Next, either locally download or clone the Awesome Compose GitHub repository. If you have Git running locally, simply enter the following command:
git clone https://github.com/docker/awesome-compose.git
If you’re not running Git, you can download the Awesome Compose repository as a ZIP file. You’ll then extract it within its own folder.
Adjusting Your Awesome Compose Code
After downloading Awesome Compose, jump into the appropriate subfolder and spin up your sample environment. For this example, we’ll use WordPress with MariaDB. You’ll then want to access your wordpress-mysql subfolder.
Next, open your compose.yaml file within your favorite editor and inspect its contents. Make the following changes in your provided YAML file:
Update line 9: volumes: - mariadb:/var/lib/mysql
Provide a complex password for the following variables:
MYSQL_ROOT_PASSWORD (line 12)
MYSQL_PASSWORD (line 15)
WORDPRESS_DB_PASSWORD (line 27)
Update line 30: volumes: mariadb (to reflect the name used in line 9 for this volume)
While this example has mariadb enabled, you can switch to a mysql example by commenting out image: mariadb:10.7 and uncommenting #image: mysql:8.0.27.
Your updated file should look like this:
services:
  db:
    # We use a mariadb image which supports both amd64 & arm64 architecture
    image: mariadb:10.7
    # If you really want to use MySQL, uncomment the following line
    #image: mysql:8.0.27
    #command: '--default-authentication-plugin=mysql_native_password'
    volumes:
      - mariadb:/var/lib/mysql
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=P@55W.RD123
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=P@55W.RD123
    expose:
      - 3306
      - 33060
  wordpress:
    image: wordpress:latest
    ports:
      - 80:80
    restart: always
    environment:
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_PASSWORD=P@55W.RD123
      - WORDPRESS_DB_NAME=wordpress
volumes:
  mariadb:
Save these file changes and close your editor.
Running Docker Compose
Starting up Docker Compose is easy. To begin, ensure you’re in the wordpress-mysql folder and run the following from the Command Prompt:
docker compose up -d
This command kicks off the startup process. It downloads and soon runs your various container images from Docker Hub. Now, enter the following Docker command to confirm your containers are running as intended:
docker compose ps
This command should show all running containers and their active ports:
Verify that your WordPress app is active by navigating to http://localhost:80 in your browser — which should display the WordPress welcome page.
If you complete the required fields, it’ll redirect you to the WordPress dashboard, where you can start using WordPress. This experience is identical to running on a server or hosting environment.
Once testing is complete (or you’ve finished your daily development work), you can shut down your environment by entering the docker compose down command.
Reusing Your Environment
If you want to continue developing in this environment later, simply re-enter docker compose up -d. This brings the development setup back up, with all of the previous information still in the MySQL database. It takes just a few seconds.
However, what if you want to reuse the same environment with a fresh database?
To bring down the environment and remove the volume — which we defined within compose.yaml — run the following command:
docker compose down -v
Now, if you restart your environment with docker compose up, Docker Compose will summon a new WordPress instance. WordPress will have you configure your settings again, including the WordPress user, password, and website name:
While Awesome Compose sample projects work out of the box, always start with the README.md instructions file. You’ll typically need to update your sample YAML file with some environmental specifics — such as a password, username, or chosen database name. If you skip this step, the runtime won’t start correctly.
Awesome Compose Simplifies Multi-Container Management
Agile developers always need access to various application development-and-testing environments. Containers have been immensely helpful in providing this. However, more complex microservices architectures — which rely on containers running in tandem — are still quite challenging. Luckily, Docker Compose makes these management processes far more approachable.
Awesome Compose is Docker’s open-source library of sample workloads that empowers developers to quickly start using Docker Compose. The extensive library includes popular industry workloads such as ASP.NET, WordPress, and React web frontends. These can connect to MySQL, MariaDB, or MongoDB backends.
You can spin up samples from the Awesome Compose library in minutes. This lets you quickly deploy new environments locally or virtually. Our example also highlighted how easy it is to customize your Docker Compose YAML files and get started.
Now that you understand the basics of Awesome Compose, check out our other samples and explore how Docker Compose can streamline your next development project.
Source: https://blog.docker.com/feed/