Black Widow: Marvel film to launch on Disney+ in parallel with its theatrical release
Black Widow can be watched on Disney+ for an additional fee starting in July 2021. (Disney+, Disney)
Source: Golem
Before coming to Red Hat, I spent nearly a decade as a Systems Administrator. After all that time, I’m still continually discovering tools that would make life as a SysAdmin much easier. One of these utilities is the redhat-support-tool. In this post we’ll walk you through using the tool in some real-world scenarios.
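As a rough sketch (not taken from the post itself), here is how the tool can be invoked from a shell; the search string and case number below are made-up examples:

# Launch the tool's interactive shell
$ redhat-support-tool

# Or run a single command non-interactively, for example searching the knowledge base
$ redhat-support-tool search "kernel panic after yum update"

# Review an existing support case (the case number is a placeholder)
$ redhat-support-tool getcase 01234567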
Source: CloudForms
Red Hat is proud to be a founding strategic member of the Adoptium Working Group as AdoptOpenJDK transitions to its new home at the Eclipse Foundation.
Source: CloudForms
The Fedora Project announces the availability of Fedora Linux 34 Beta, the latest version of the Fedora Linux operating system.
Source: CloudForms
As people limit activities outside their homes in response to COVID-19, many organizations are looking to the cloud to create a new generation of engaging digital experiences that customers can access right from their personal screens. This is especially important for the automotive industry, where prospective buyers and fans are used to interacting with vehicles in person at auto shows and the dealership. That's why we've teamed up with Unreal Engine, the open and advanced real-time 3D creation game engine, and NVIDIA, inventor of the GPU, to launch new virtual showroom experiences for automakers. Taking advantage of the NVIDIA RTX platform on Google Cloud, these showrooms provide interactive 3D experiences, photorealistic materials and environments, and up to 4K cloud streaming on mobile and connected devices. The showroom solution runs on the NVIDIA T4 Tensor Core GPU on a Google Cloud instance with a Quadro Virtual Workstation, enabling the latest advancements in computer graphics.

Today, in collaboration with MHP, the Porsche IT consulting firm, and MONKEYWAY, a real-time 3D streaming solution provider, you can see our first virtual showroom, the Pagani Immersive Experience Platform, created for the Italian luxury hypercar manufacturer Pagani Automobili—with many more to come.

Each virtual showroom offers granular personalization options, and real-time ray tracing, or light ray simulation, sets all-new visual benchmarks for this medium. Within the virtual showroom, viewers have a highly customizable experience, with the ability to select dozens of different interior and exterior design features—from paint color to wheel anodization to interior finish, and more. Viewers can also explore cars through an expansive cinematic 3D experience, powered by NVIDIA RTX real-time ray-tracing technology in Unreal Engine. Each showroom can support thousands of concurrent users worldwide, thanks to the scalability of Google Cloud.

Specifically within the Pagani Immersive Experience Platform, viewers can customize the hypercar's unique details, including a wide selection of tinted exposed carbon fibre options, stripe patterns, wheels, brake calipers, interior leather and stitching, instrument cluster dials, and the luggage set. Users can animate car features, including opening doors, rolling down windows, and removing the hardtop. In addition to changing time settings from day to night, there are four unique physical environments, including salt flats, a museum, a coastal road, and the exclusive showroom at the Pagani Automobili headquarters. And once you've configured your car to perfection, take it for a virtual drive on a real race track.

"The Pagani virtual showroom is a glimpse at the future of online retail environments," said Marc Petit, VP, General Manager, Unreal Engine, Epic Games. "The use of Unreal Engine pixel streaming and ray tracing in the cloud delivers an experience that is photorealistic, interactive, and personalized—and can reach customers with the highest level of quality on any type of device."

"These showrooms will be a game-changer for how consumers interact with their favorite products online, especially now during a pandemic," said Sean Young, Director of Global Business Development, Manufacturing at NVIDIA. "Automakers can offer more personalized previews while potential buyers no longer have to wait in lines or worry about large crowds at physical showrooms. Most importantly, customers can interact with and customize a virtual car model at their leisure, any time of the day, and from any location."

"We're always looking for new ways to engage and delight our clients, and today we are very proud to be able to extend this opportunity to our largest fan base, offering them a new insight into the Pagani products," said Carlo Stola, Customer Relations Manager at Pagani. "Working with MHP, Google Cloud, Unreal Engine, and NVIDIA has enabled us to provide an even more sophisticated, yet incredibly simple, customization experience, and to show off our models' state-of-the-art features."

Although we've started with the automotive industry, our goal is to expand these showroom experiences into other industries as well, such as retail, hospitality, and more. In the meantime, check out the virtual showroom here. Automakers interested in launching a virtual showroom can get started by engaging with the Google Cloud sales team.
Source: Google Cloud Platform
As you shift to a cloud-first strategy, you have to manage the complexity of connecting resources across on-prem locations and the cloud to ensure access to all workloads from anywhere. This often means network teams have to configure and manage multiple networks while delivering consistent access, policies, and services across global cloud regions. To offer networking services everywhere, you need a simple programmatic model that seamlessly spans the cloud, on-prem data centers, and branch locations.

Today, we are announcing the preview of Network Connectivity Center, which delivers planet-scale network management to simplify complex networks for your on-prem and cloud connectivity needs. Network Connectivity Center provides a single management experience to easily create, connect, and manage heterogeneous on-prem and cloud networks leveraging Google's global infrastructure. Network Connectivity Center serves as a vantage point to seamlessly connect VPNs, partner and dedicated interconnects, as well as third-party routers and software-defined WANs, helping you optimize connectivity, reduce operational burden, and lower costs—wherever your applications or users may be.

"As an industry leader, a fast & reliable network is essential to maintain our productivity across a large number of our globally distributed work centers," said Miguel Mejia, WAN Engineering Lead, Colgate-Palmolive. "We understand Google Cloud's Network Connectivity Center (NCC) could help us achieve broader access to Google's global network and also enable us to connect our remote site users & applications in a consistent manner."

Network Connectivity Center offers an easy way to create, expand, and manage on-prem and cloud networks all in one place. It delivers the following:

1. Single connectivity model: Network Connectivity Center offers the unique ability to easily connect and manage VPNs, interconnects, and SD-WANs so that users can access workloads seamlessly. Network Connectivity Center resources run on Google's global infrastructure to deliver high performance, reliability, and security, so enterprises can leverage the same benefits.

2. Flexible cloud connectivity: Network Connectivity Center delivers a unified connectivity experience by allowing enterprises to use Google's global infrastructure, leveraging new or existing partner and dedicated interconnects, Cloud VPN connections, and third-party routers/SD-WAN to transfer data reliably across on-premises sites and cloud resources.

3. VPN-based multicloud connectivity: Network Connectivity Center unlocks VPN-based cloud connectivity directly and via a set of partners, giving enterprises the flexibility to create, connect, and consume resources spanning multiple clouds. These resources are reliably connected using Google's infrastructure and can be operated directly via Network Connectivity Center or select partner solutions.

4. SD-WAN integration/third-party routers: Network Connectivity Center can be used as the default landing point when integrating SD-WAN and other routing solutions with Google's infrastructure. When doing so, enterprises get a simple and reliable way to consume connectivity on demand while extending the benefits of their SD-WAN and routing solutions to Google Cloud.

5. Real-time visibility for your global network: Network Connectivity Center offers a single pane of glass for connecting your VPNs, partner and dedicated interconnects, and on-prem networks. It also pairs seamlessly with Network Intelligence Center for end-to-end visibility so you can monitor, visualize, and troubleshoot the network. With Network Intelligence Center, you can monitor real-time performance and network health, view traffic flows, and verify connectivity intent.

Cisco partnership
In April 2020, Cisco and Google Cloud announced Cisco SD-WAN Cloud Hub with Google Cloud to help our customers create an end-to-end network that enables secure and on-demand connectivity. Our expanded partnership with Cisco brings the best of both Cisco and Google Cloud technologies together in a turnkey networking solution. Cisco and Google Cloud's joint solution leverages Cisco SD-WAN Cloud Hub and Google Cloud Network Connectivity Center to connect branch sites and on-prem data centers to the cloud using Google's high-performance global infrastructure and Cisco SD-WAN's vManage. Via Cisco Cloud OnRamp, users can also extend Cisco SD-WAN's fabric to Google Cloud through an automated process. With tighter integrations between Cisco and Google Cloud, the availability of this solution brings a new set of capabilities that dramatically simplifies complex heterogeneous networks, protects mission-critical applications, and minimizes operational burden and costs.

"Google Cloud and Cisco continue driving innovation for our joint customers to enable secure and automated SD-WAN access from applications and services running on Google Cloud Platform," said JL Valente, vice president, product management, for Cisco Enterprise Routing, SD-WAN and Cloud Networking. "Our latest integration not only extends the Cisco SD-WAN fabric to Google Cloud to automate provisioning of site-to-cloud connectivity effortlessly, but also gives customers the choice of using Google Cloud to provide a highly reliable, high-performance global cloud network for site-to-site connectivity that can be deployed in minutes."

We are excited to simplify the management of on-prem and cloud networks with Network Connectivity Center. Google Cloud will be at Cisco Live! Visit our showcase to learn more about Network Connectivity Center.
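As a minimal, hedged sketch (not from the announcement), creating a Network Connectivity Center hub from the command line could look roughly like this; the hub name is a placeholder, and during the preview these commands may sit under the alpha or beta gcloud surface:

# Create a Network Connectivity Center hub (my-hub is a placeholder name)
$ gcloud network-connectivity hubs create my-hub \
    --description="Hub connecting on-prem sites and cloud workloads"

# Inspect the hub after attaching spokes (VPN tunnels, interconnects, SD-WAN routers)
$ gcloud network-connectivity hubs describe my-hub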
Source: Google Cloud Platform
If you need to run an embarrassingly parallel batch processing workload, it can be tricky to decide how many instances to create in each zone while accounting for available resources, quota limits, and your reservations. We are excited to announce a new method of obtaining Compute Engine instances for batch processing that accounts for the availability of resources in the zones of a region. Now available in preview for regional managed instance groups, you can do this simply by specifying the ANY value in the API.

The capacity-aware deployment method is particularly useful if you need to easily create many instances with a special configuration, such as virtual machines (VMs) with a specific CPU platform or GPU model, preemptible VMs, or instances with a large number of cores or a large memory size. Now, when deploying instances to run embarrassingly parallel batch processing, such as financial modeling or rendering, you no longer have to figure out which zones support the required hardware and how many instances to create in each zone of a region to accommodate the requested capacity.

Assuming that any distribution of instances across zones works for your batch processing job, and that the workload doesn't require resilience against zone-level failure, you can now delegate the job of obtaining the requested capacity to a regional managed instance group. A regional MIG with the new distribution shape ANY automatically deploys instances to zones where resources are available to fulfill your request, accounting for your quota limits. This works both when you create a group and when you increase it in size.

If you use reservations to ensure that resources are available for your computation, you should specify reservation affinity in the group's instance template. A regional MIG with distribution shape ANY utilizes the specified reservations efficiently by prioritizing consumption of unused reserved capacity before provisioning additional resources. In short, a regional MIG with distribution shape ANY automatically deploys instances to zones where capacity is available, takes quotas into account, and prioritizes consumption of a specified reservation.

Depending on the availability of the requested resources, a regional MIG with ANY distribution might deploy all instances to a single zone or spread the instances across multiple zones. The distribution shape ANY is not suitable for highly available serving workloads such as frontend web services, because a zone-level failure could result in all or most of the instances becoming unavailable if they happen to be deployed to the zone that failed.

Getting started
To configure the new distribution shape ANY when creating a regional MIG, look under the "Target distribution shape" setting on the Create instance group screen in the Google Cloud Console. You can also set the distribution shape to ANY for an existing regional MIG, for example by running a gcloud command like the one sketched below.
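The following is a minimal sketch rather than the exact command from the announcement; my-batch-mig and us-central1 are placeholder values, and the flag spelling may vary between gcloud releases:

# Switch an existing regional MIG to the ANY distribution shape
$ gcloud compute instance-groups managed update my-batch-mig \
    --region=us-central1 \
    --target-distribution-shape=ANY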
Summing it up
Obtaining capacity to run an embarrassingly parallel batch processing workload is easier with a regional MIG's new distribution shape ANY. When deciding how many instances to create in each zone, the regional MIG accounts for the availability of resources in each zone, accounts for your quota limits, and prioritizes consumption of specified reservations.

Visit the regional MIG documentation to learn more about creating instances using the new distribution shape ANY. Continue your learning at the Cloud Technical Series digital event, March 23-26, and go deeper into VM migration, application modernization, GKE, data analytics, AI/ML, and more. Register here.
Source: Google Cloud Platform
If you have VMware workloads and you want to modernize your applications to take advantage of cloud services, increase agility, and reduce total cost of ownership, then Google Cloud VMware Engine is the service for you! It is a managed VMware service with bare metal infrastructure that runs the VMware software stack on Google Cloud—fully dedicated and physically isolated from other customers. In this blog post, I'll take you through Google Cloud VMware Engine, its benefits, features, and use cases.

Benefits of Google Cloud VMware Engine
Operational continuity – Google offers native access to VMware platforms. The service is sold, delivered, and supported by Google; no other companies are involved. The architecture is compatible with your existing applications, as well as your operations, security, backup, disaster recovery, audit, and compliance tools and processes.
No retraining – Your teams can use their existing skills and knowledge.
Infrastructure agility – The service is delivered as a Google Cloud service, and infrastructure scales on demand in minutes.
Security – Access to the environment through Google Cloud provides built-in DDoS protection and security monitoring.
Policy compatibility – You can continue to use VMware tools and security procedures, audit practices, and compliance certifications.
Infrastructure monitoring – You get reliability with fully redundant and dedicated 100 Gbps networking, providing up to 99.99% availability to meet the needs of your VMware stack. There is also infrastructure monitoring, so failed hardware gets replaced automatically.
Hybrid platform – The service enables high-speed, low-latency access to other Google Cloud services such as BigQuery, AI Platform, Cloud Storage, and more.
Low cost – Because the service is engineered for automation, operational efficiency, and scale, it is also cost effective!

How does Google Cloud VMware Engine work?
Google Cloud VMware Engine makes it easy to migrate or extend your VMware environment to Google Cloud. Here is how it works: you can easily migrate your on-premises VMware instances to Google Cloud, using the included HCX licenses, via a Cloud VPN or interconnect. The service comprises VMware vCenter, the virtual machines, ESXi hosts, storage, and networking on bare metal! You can easily connect from the service to other Google Cloud services such as Cloud SQL, BigQuery, Memorystore, and so on. You can access the service UI, billing, and identity and access management from the Google Cloud console, as well as connect to third-party disaster recovery and storage services such as Zerto and Veeam.

Google Cloud VMware Engine use cases
Retire or migrate data centers – Scale data center capacity in the cloud and stop managing hardware refreshes. Reduce risk and cost by migrating to the cloud while still using familiar VMware tools and skills. In the cloud, use Google Cloud services to modernize your applications at your pace.
Expand on demand – Scale capacity to meet unanticipated needs, such as new development environments or seasonal capacity bursts, and keep it only as long as you need it. Reduce your up-front investment, accelerate speed of provisioning, and reduce complexity by using the same architecture and policies across both on-premises and the cloud.
Disaster recovery in Google Cloud – High-bandwidth connections let you quickly upload and download data to recover from incidents.
Virtual desktops in Google Cloud – Create virtual desktops (VDI) in Google Cloud for remote access to data, apps, and desktops. Low-latency networks give you fast response times, similar to those of a desktop app.
Power high-performance applications and databases – In Google Cloud you have a hyper-converged architecture designed to run your most demanding VMware workloads, such as Oracle, Microsoft SQL Server, middleware systems, and high-performance NoSQL databases.
Unify DevOps across VMware and Google Cloud – Optimize VMware administration by using Google Cloud services that can be applied across all your workloads, without having to expand your data center or re-architect your applications. You can centralize identities, access control policies, logging, and monitoring for VMware applications on Google Cloud.

Conclusion
So there you have it: Google Cloud VMware Engine, its use cases, benefits, and how it works. If this has piqued your interest, check out the Google Cloud VMware Engine documentation and demo for more details. Here is a video on Google Cloud VMware Engine: What is Google Cloud VMware Engine? #GCPSketchnote

For more #GCPSketchnote, follow the GitHub repo, and for similar cloud content follow me on Twitter @pvergadia and keep an eye on thecloudgirl.dev.
Source: Google Cloud Platform
We are excited to announce the latest feature for Docker Pro and Team users: our new Advanced Image Management Dashboard, available on Docker Hub. The new dashboard gives you a new level of access to all of the content you have stored in Docker Hub, providing more fine-grained control over removing old content and exploring old versions of pushed images.
Historically in Docker Hub we have had visibility into the latest version of a tag that a user has pushed, but what has been very hard to see, or even understand, is what happened to all of the older versions that were pushed before it. When you push an image to Docker Hub you are pushing a manifest, a list of all of the layers of your image, and the layers themselves.
When you update an existing tag, only the new layers are pushed, along with the new manifest which references these layers. This new manifest is given the tag you specify when you push, such as bengotch/simplewhale:latest. But this does mean that all of those old manifests which point at the previous layers that made up your image become untagged on Hub. They are still there; there has just been no easy way to see them or to manage that content. You can in fact still use and reference them using the digest of the manifest, if you know it. You can think of this a bit like the commit history (the old digests) of a particular branch (your tag) of your repo (your image repo!).
This means you can have hundreds of old versions of images which your systems can still be pulling by hash rather than by tag, and you may be unaware which old versions are still in use. On top of this, the only way until now to remove these old versions was to delete the entire repo and start again!
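For illustration only, listing local digests and pulling an older, untagged manifest by its digest looks like this; the sha256 value below is a made-up placeholder:

# List local images for the repo along with their manifest digests
$ docker images --digests bengotch/simplewhale

# Pull a specific old version by its manifest digest instead of a tag
$ docker pull bengotch/simplewhale@sha256:1a2b3c4d5e6f...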
With the release of the image management dashboard we have provided a new GUI with all of this information available to you, including whether those currently untagged old manifests are still 'active' (have been pulled in the last month) or whether they are inactive. This, combined with the new bulk delete for these objects and for current tags, gives you a more powerful tool for batch managing your content in Docker Hub.
To get started you will find a new banner on your repos page if you have inactive images:
This will tell you how many images you have, tagged or untagged, which have not been pushed or pulled in the last month. By clicking view you can go through to the new Advanced Image Management Dashboard to check out all of your content. From there you can see what the tags of certain manifests used to be, and use the multi-selector option to bulk delete them.
For a full product tour check out our overview video of the feature below.
We hope that you are excited about this first step in providing greater insight into your content on Docker Hub. If you want to get started exploring your content, all users can see how many inactive images they have, and Pro and Team users can see which tags those images used to be associated with and what their hashes are, and can start removing them today. To find out more about becoming a Pro or Team user, check out this page.
The post Advanced Image Management in Docker Hub appeared first on Docker Blog.
Source: https://blog.docker.com/feed/
AWS Fargate platform version 1.4.0 is now the LATEST version. All new Amazon Elastic Container Service (Amazon ECS) tasks or ECS services that use the Fargate launch type and have the platformVersion parameter set to LATEST, or not specified, will run on platform version 1.4.0. The new version offers features such as support for Amazon Elastic File System and Amazon ECS Exec.
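As a hedged illustration (not part of the announcement; the cluster and service names are placeholders), redeploying an existing Fargate service on the new platform version from the AWS CLI looks roughly like this:

# Redeploy an existing Fargate service on platform version 1.4.0
$ aws ecs update-service \
    --cluster my-cluster \
    --service my-service \
    --platform-version 1.4.0 \
    --force-new-deployment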
Source: aws.amazon.com