GR Supra: Toyota's sports car made of Lego actually drives
The full-size vehicle is built from almost 480,000 Lego bricks. The Lego Toyota GR Supra is not the first car of its kind, however. (Lego, online advertising)
Source: Golem
The Matrix Resurrections: That will be the title of the new Matrix film with Keanu Reeves, due in 2021. Warner Bros. also showed the first scenes from the film. (Matrix, Warner Bros)
Source: Golem
Ten years after its founding, Jolla now focuses on software rather than hardware – a strategy that seems to be paying off. (Jolla, software development)
Source: Golem
A refresh still in 2021: Nvidia is reportedly planning a GeForce RTX 3090 Super with the GA102 chip, faster memory, and a higher power budget. (Nvidia Ampere, graphics hardware)
Source: Golem
Kanye West is selling his upcoming album Donda bundled with novel audio hardware. Fans will supposedly be able to create remixes of his songs with it. (Audio, Internet)
Source: Golem
In July, Red Hat brought together a group of security experts, partners, and industry peers to discuss some of the hybrid cloud security problems organizations face and solutions to tackle those challenges. Those sessions were recorded and are now available for free on-demand viewing.
Source: CloudForms
Security is at the top of mind for our customers, and understanding the language and practices around security is vital for teams delivering applications and managing infrastructure. Understanding how Red Hat reports and evaluates security vulnerabilities — as well as the tools Red Hat uses to communicate and address vulnerabilities — goes a long way towards protecting your IT environment.
Source: CloudForms
Cybersecurity has become national security as parties both foreign and domestic increasingly try to hack into government information systems. It is no wonder then that U.S. federal requirements for information security have also become the gold standard for cybersecurity in financial services, telecommunications, healthcare and other regulated markets. In cloud computing, chief among these requirements …
Source: Mirantis
One of the biggest challenges when serving machine learning models is delivering predictions in near real time. Whether you're a retailer generating recommendations for users shopping on your site or a food service company estimating delivery time, being able to serve results with low latency is crucial. That's why we're excited to announce Private Endpoints on Vertex AI, a new feature in Vertex Predictions. Through VPC Peering, you can set up a private connection to talk to your endpoint without your data ever traversing the public internet, resulting in increased security and lower latency for online predictions.

Configuring VPC Network Peering

Before you make use of a Private Endpoint, you'll first need to create connections between your VPC (Virtual Private Cloud) network and Vertex AI. A VPC network is a global resource that consists of regional virtual subnetworks, known as subnets, in data centers, all connected by a global network. You can think of a VPC network the same way you'd think of a physical network, except that it's virtualized within GCP. If you're new to cloud networking and would like to learn more, check out this introductory video on VPCs.

With VPC Network Peering, you can connect internal IP addresses across two VPC networks, regardless of whether they belong to the same project or the same organization. As a result, all traffic stays within Google's network.

Deploying Models with Vertex Predictions

Vertex Predictions is a serverless way to serve machine learning models. You can host your model in the cloud and make predictions through a REST API. If your use case requires online predictions, you'll need to deploy your model to an endpoint. Deploying a model to an endpoint associates physical resources with the model so it can serve predictions with low latency. When deploying a model to an endpoint, you can specify details such as the machine type and parameters for autoscaling.
Additionally, you now have the option to create a Private Endpoint. Because your data never traverses the public internet, Private Endpoints offer security benefits in addition to reducing the time it takes to serve a prediction once a request is received. The overhead introduced by Private Endpoints is minimal, achieving performance nearly identical to DIY serving on GKE or GCE. There is also no payload size limit for models deployed on a private endpoint.

Creating a Private Endpoint on Vertex AI is simple:

1. In the Models section of the Cloud console, select the model resource you want to deploy.
2. Next, select DEPLOY TO ENDPOINT.
3. In the window on the right-hand side of the console, navigate to the Access section and select Private. You'll need to add the full name of the VPC network with which your deployment should be peered.

Note that many other managed services on GCP support VPC peering, such as Vertex Training, Cloud SQL, and Firestore. Endpoints is the latest to join that list.

What's Next?

Now you know the basics of VPC Peering and how to use Private Endpoints on Vertex AI. If you want to learn more about configuring VPCs, check out this overview guide. And if you're interested in learning more about how to use Vertex AI to support your ML workflow, check out this introductory video. Now it's time for you to deploy your own ML model to a Private Endpoint for super speedy predictions!
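The peering setup described above can also be sketched from the command line. This is a minimal sketch, not the post's own instructions: the project ID, network name, region, and display name below are hypothetical placeholders, and the `--network` flag on endpoint creation is an assumption you should verify against current gcloud documentation before relying on it.

```shell
# Hypothetical project and VPC names -- substitute your own.
PROJECT="my-project"
NETWORK="my-vpc"

# 1. Reserve an internal IP range that the peering connection will use.
gcloud compute addresses create vertex-peering-range \
  --global --purpose=VPC_PEERING --prefix-length=16 \
  --network="$NETWORK" --project="$PROJECT"

# 2. Peer your VPC with Google's service network so traffic stays private.
gcloud services vpc-peerings connect \
  --service=servicenetworking.googleapis.com \
  --network="$NETWORK" --ranges=vertex-peering-range \
  --project="$PROJECT"

# 3. Create the endpoint bound to the peered network (flag assumed; the
#    console flow in the steps above achieves the same result).
gcloud ai endpoints create --region=us-central1 \
  --display-name=my-private-endpoint \
  --network="projects/$PROJECT/global/networks/$NETWORK"
```

These commands require an authenticated gcloud SDK and appropriate IAM permissions on the project; they are shown as CLI fragments rather than a runnable script.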
Source: Google Cloud Platform
Detection and remediation of security vulnerabilities before they reach deployment is critical in a cloud-native world. This makes scanning for vulnerabilities early and often an important part of continuous integration and delivery (CI/CD) processes. The earlier a problem is detected, the fewer downstream issues will occur. The process of checking for vulnerabilities earlier in development is called "shifting left". In fact, building security into software development also speeds up software delivery and performance. Thanks to shifting left, research from DevOps Research and Assessment (DORA) shows high-performing teams spend 50 percent less time remediating security issues than low-performing teams.

To help companies accomplish a leftward shift in their security, Google Cloud recently launched On-Demand Scanning into general availability. This new feature checks for vulnerabilities both in locally stored container images and in images stored within GCP registries. With On-Demand Scanning, vulnerabilities can be surfaced as soon as an image is built, well before the image is pushed to a registry. This early visibility makes it possible to automate decisions and determine whether a container image should be promoted for broad use. Thus, vulnerable images surfaced within a CI pipeline can be fixed before delivery. Additionally, developers can use On-Demand Scanning as part of their local workflows via a simple gcloud command. You can learn more about this, and about how to build trust in your software delivery pipeline, by checking out our recent secure software supply chain event.

Previously, we wrote about the benefits of Google Cloud's vulnerability scanning in the software supply chain, right from build to deploy. Those key benefits still apply, and they are strengthened by the addition of On-Demand Scanning. For instance, you can continue to monitor images stored in Artifact Registry (via automated scanning) in addition to using On-Demand Scanning at build time.
By using On-Demand Scanning at this earlier stage, vulnerabilities can be detected before an image is stored. This way you can reduce the number of vulnerable images pushed and ensure any newly discovered vulnerabilities are caught well before deployment. The data sources for vulnerabilities come directly from the industry-standard distros (e.g. Debian, RHEL, Ubuntu) and the National Vulnerability Database (NVD). Aggregating these sources allows you to see results that include the CVSS score assigned by NVD and the severity assigned by the distro. Once you've identified a potential vulnerability, you can make decisions based on your own security policies and needs.

Results returned by On-Demand Scanning are formatted to the open-source Grafeas standard and can be parsed in the same way as vulnerability scanning results in Artifact Registry. Thus, any existing tooling that consumes the Grafeas format (including Artifact Registry and Container Registry) can be used with On-Demand Scanning. To get started today, all you need to do is enable the On-Demand Scanning API and connect it to your container. For guidance, take a look at our quickstart guide to run On-Demand Scanning on any local machine, or try the tutorial that describes how to use On-Demand Scanning with Cloud Build.
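The "simple gcloud command" workflow the post mentions can be sketched as follows. This is a hedged sketch, not the post's own tutorial: the image name is a hypothetical placeholder, and the exact output fields of the scan commands should be checked against current gcloud documentation for your SDK version.

```shell
# Hypothetical image reference for a locally built container.
IMAGE="us-docker.pkg.dev/my-project/my-repo/my-app:latest"

# Kick off an on-demand scan of the local image and capture the
# returned scan resource name (add --remote to scan a registry image).
SCAN=$(gcloud artifacts docker images scan "$IMAGE" \
  --format='value(response.scan)')

# List findings (Grafeas-formatted) and gate the CI step: fail the
# build if any vulnerability is reported as CRITICAL.
if gcloud artifacts docker images list-vulnerabilities "$SCAN" \
     --format='value(vulnerability.effectiveSeverity)' | grep -q CRITICAL
then
  echo "Critical vulnerability found; blocking image promotion." >&2
  exit 1
fi
```

A gate like this is typically wired in as a build step between `docker build` and `docker push`, which is exactly the "before the image is pushed to a registry" point the article describes.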
Source: Google Cloud Platform