Amazon SageMaker Data Wrangler now enables model training with Amazon SageMaker Autopilot

Starting today, you can invoke SageMaker Autopilot from SageMaker Data Wrangler to automatically train, tune, and build machine learning models. SageMaker Data Wrangler reduces the time it takes to aggregate and prepare data for machine learning (ML) from weeks to minutes. SageMaker Autopilot automatically builds, trains, and tunes the best ML models based on your data, while giving you full control and visibility. Previously, customers used Data Wrangler to prepare data for ML and Autopilot to train ML models independently of each other. With this unified experience, you can now prepare your data in SageMaker Data Wrangler and easily export it to SageMaker Autopilot for model training. With just a few clicks, you can automatically build, train, and tune ML models, letting you apply modern engineering techniques, train high-quality models, and draw insights from your data faster.
Source: aws.amazon.com

Amazon Aurora supports PostgreSQL versions 13.7, 12.11, 11.16, and 10.21, as well as updated extensions

Following the open-source community's announcement of updates to the PostgreSQL database, we have updated Amazon Aurora PostgreSQL-Compatible Edition to support PostgreSQL 13.7, 12.11, 11.16, and 10.21. These releases contain bug fixes and improvements from the PostgreSQL community. Refer to the Aurora version policy to decide how often to upgrade and how to plan your upgrade process.
Source: aws.amazon.com

Amazon SageMaker Experiments now supports common chart types for visualizing model training results

SageMaker Experiments now supports fine-grained metrics and charts that help you better understand the results of training jobs run on SageMaker. Amazon SageMaker Experiments is a capability of Amazon SageMaker that lets you organize, track, compare, and evaluate machine learning (ML) experiments. With this launch, you can now view precision-recall (PR) curves, receiver operating characteristic (ROC) curves, and confusion matrices. You can use these charts to understand false positives/negatives and the trade-offs between performance and accuracy of models trained on SageMaker. You can also compare multiple training runs more easily and find the best model for your use case.
Source: aws.amazon.com

Introducing new commitments on the processing of service data for our cloud customers

At Google, we engage regularly with customers, regulators, policymakers, and other stakeholders to provide transparency into our operations, policies, and practices and to further strengthen our commitment to privacy compliance. One such engagement is our ongoing work with the Dutch government regarding its Data Protection Impact Assessment (DPIA) of Google Workspace and Workspace for Education. As a result of that engagement, today Google is announcing our intention to offer new contractual privacy commitments for service data[1] that align with the commitments we offer for customer data.[2] Once those new commitments become generally available, we will process service data as a processor under customers’ instructions, with the exception of limited processing[3] that we will continue to undertake as a controller. We will provide further details as we implement these updates – planned for Google Workspace, Google Workspace for Education, and Google Cloud[4] services – beginning in 2023 and in successive phases through 2024.
In parallel, Google is working to develop a version of Chrome OS (including the Chrome browser running on managed Chrome OS devices) for which Google will offer similar processor commitments. In line with our goal of giving customers greater transparency and control over their data, we’re aiming to provide this updated version of Chrome OS, once it’s complete, to our enterprise and education customers around the world. We recognise that privacy compliance plays a crucial role in earning and maintaining your trust, and we will continue to work diligently to help make compliance easier for your business as you use our cloud services. To learn more about our approach to privacy compliance, please visit our Privacy Resource Center.
1. Service data is defined in the Google Cloud Privacy Notice as the personal information Google collects or generates during the provision and administration of the Cloud Services, excluding any Customer Data and Partner Data.
2. Customer data means data submitted, stored, sent, or received via the services by the customer or end users, as further described in the applicable data processing terms.
3. For example: billing and account management; capacity planning and forecast modeling; and detecting, preventing, and responding to security risks and technical issues.
4. Formerly known as Google Cloud Platform.
Source: Google Cloud Platform

Google Workspace, GKE help startup CAST AI grow faster and optimize cloud costs

In many ways, serial entrepreneur Gil Laurent and his technology startups have grown alongside Google Workspace and Google Cloud. When he was CEO and co-founder of Ukraine-based Viewdle — a machine learning and computer vision startup that was acquired by Google in 2012 — the organization relied on Google Workspace for many of its collaboration needs, trading the complexity of email attachments and file versions for the cloud-synced availability of documents in Google Drive. A similar story played out a few years later when he co-founded Zenedge — a cybersecurity company focused on the edge of the network — which was acquired by Oracle in 2018. Zenedge still used a handful of other services to round out meetings and collaboration, but Google Workspace was the foundation. In 2019, when co-founding his latest venture — cloud cost management startup CAST AI — Laurent saw that he didn’t have to pay for additional services, as Google Workspace’s product suite included everything needed to connect his teams and workstreams. From onboarding new employees and getting them connected to their corporate email, to real-time collaboration and video conferencing, Google Workspace offered everything. “As a young startup, there was only one place to start—Google Workspace,” recalled Laurent, who now serves as the company’s chief product officer. “We did not even consider anything else.” Google Workspace is only one part of CAST AI’s Google product adoption, however. “Our whole business runs on GKE on Google Cloud,” Laurent said. The company was up and running on GKE (Google Kubernetes Engine) almost immediately after rolling out Google Workspace, and Laurent recalls a smooth transition. “It was very natural for everyone.” CAST AI is an end-to-end Kubernetes automation and management platform that helps businesses optimize their cloud costs by 63% on average. 
With an approach built on container orchestration, a product like GKE was necessary to efficiently run the company’s workloads and services. Laurent explained that at Zenedge, the company struggled to understand how to control its cloud costs as it experienced growth: “We started out spending thousands per month with 10 engineers, which seemed right. But three years later, after continued growth, we were spending millions. We didn’t understand why. The bill could be 100 pages long.” When founding CAST AI, Laurent addressed this frustration head on, using containers to ensure their customers’ cloud resources weren’t going unused at such high rates. “Containers can be moved around, so you can optimize deployment to make them busy most of the time while eliminating waste,” Laurent said. “We knew we had to include automation. You can tell someone that they’re using 1,000 VMs and that 50 could be used better or more efficiently if moved to a different instance type — but in DevOps, who does this? The opportunities for optimization change daily and people are afraid of breaking things. We knew we had to find a way to offer not just observability but automated management.” Choosing GKE was “easy because Google invented Kubernetes, and GKE is the state of the art, with its implementation of the full Kubernetes API, autoscaling, multi-cluster support, and other features that set the trend.” Laurent added that the company also took advantage of the Google for Startups Cloud Program to scale up its business by tapping into extended benefits like tailored mentorship and coverage for their Google Cloud usage for two years. Many startups adopt Google Workspace to connect and engage in real-time with their teams, but quickly learn that leveraging other Google offerings — such as cloud solutions and the Google for Startups Cloud Program — can be very helpful to further their startup’s growth. 
For CAST AI, the combination of GKE on Google Cloud and Google Workspace proved especially valuable because the company was founded in late 2019, just months before the global pandemic began. The CAST AI team needed sophisticated cloud services to build their product, in addition to collaboration and productivity tools that could accommodate remote workers in different countries. “The idea that you can work in any place at any time without tradeoffs, whether you’re in Madrid or Miami — that helps a lot,” Laurent said. “Without GKE and Google Workspace, I am not sure we could have achieved all that we have so far.” To learn more about how Google Workspace and Google Cloud help startups like CAST AI accelerate their journey — from connecting and collaborating to building and innovating — visit our startups solutions pages for Google Workspace and Google Cloud.
Source: Google Cloud Platform

How to Rapidly Build Multi-Architecture Images with Buildx

Successfully running your container images on a variety of CPU architectures can be tricky. For example, you might want to build your IoT application — running on an arm64 device like the Raspberry Pi — from a specific base image. However, Docker images typically support amd64 architectures by default. This scenario calls for a container image that supports multiple architectures, which we’ve highlighted in the past.
Multi-architecture (multi-arch) images typically contain variants for different architectures and OSes. These images may also support CPU architectures like arm32v5+, arm64v8, s390x, and others. The magic of multi-arch images is that Docker automatically grabs the variant matching your OS and CPU pairing.
While a regular container image has a manifest, a multi-architecture image has a manifest list. The list combines the manifests that show information about each variant’s size, architecture, and operating system.
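You can see a manifest list for yourself by inspecting a public multi-arch image. As a quick illustration (assuming Docker is installed; the golang image on Docker Hub is multi-arch):

```shell
# List the per-architecture manifests behind a multi-arch tag.
# Each entry reports the variant's OS, architecture, and digest.
docker manifest inspect golang:1.17-alpine
```

The output is a JSON manifest list whose `manifests` array contains one entry per supported OS/architecture pair.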
Multi-architecture images are beneficial when you want to run your container locally on your x86-64 Linux machine, and remotely atop AWS Elastic Compute Cloud (EC2) Graviton2 CPUs. Additionally, it’s possible to build language-specific, multi-arch images — as we’ve done with Rust.
Follow along as we learn about each component behind multi-arch image builds, then quickly create our image using Buildx and Docker Desktop.
Building Multi-Architecture Images with Buildx and Docker Desktop
You can build a multi-arch image by creating the individual images for each architecture, pushing them to Docker Hub, and entering docker manifest to combine them within a tagged manifest list. You can then push the manifest list to Docker Hub. This method is valid in some situations, but it can become tedious and relatively time-consuming.
 
Note: However, you should only use the docker manifest command in testing — not production. This command is experimental. We’re continually tweaking functionality and any associated UX while making docker manifest production ready.
 
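Sketched as shell commands, the manual method looks roughly like this (the image name and tags are placeholders for illustration; each per-architecture build must run on a matching host or with cross-compilation set up):

```shell
# Build and push one image per architecture.
docker build -t your_docker_username/app:amd64 .
docker push your_docker_username/app:amd64
docker build -t your_docker_username/app:arm64 .
docker push your_docker_username/app:arm64

# Combine the per-arch images into one tagged manifest list...
docker manifest create your_docker_username/app:latest \
  your_docker_username/app:amd64 \
  your_docker_username/app:arm64

# ...and push the manifest list to Docker Hub.
docker manifest push your_docker_username/app:latest
```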
However, two tools make it much easier to create multi-architecture builds: Docker Desktop and Docker Buildx. Docker Buildx enables you to complete every multi-architecture build step with one command via Docker Desktop.
Before diving into the nitty gritty, let’s briefly examine some core Docker technologies.
Dockerfiles
The Dockerfile is a text file containing all necessary instructions needed to assemble and deploy a container image with Docker. We’ll summarize the most common types of instructions, while our documentation contains information about others:

The FROM instruction headlines each Dockerfile, initializing the build stage and setting a base image which can receive subsequent instructions.
RUN defines important executables and forms additional image layers as a result. RUN also has a shell form for running commands.
WORKDIR sets a working directory for any following instructions. While you can explicitly set this, Docker will automatically assign a directory in its absence.
COPY, as it sounds, copies new files from a specified source and adds them into your container’s filesystem at a given relative path.
CMD comes in three forms, letting you define executables, parameters, or shell commands. Each Dockerfile should have only one CMD; if multiple exist, only the last one takes effect.

 
Dockerfiles facilitate automated, multi-layer image builds based on your unique configurations. They’re relatively easy to create, and can grow to support images that require complex instructions. Dockerfiles are crucial inputs for image builds.
Buildx
Buildx leverages the docker build command to build images from a Dockerfile and sets of files located at a specified PATH or URL. Buildx comes packaged within Docker Desktop, and is a CLI plugin at its core. We consider it a plugin because it extends this base command with complete support for BuildKit’s feature set.
We offer Buildx as a CLI command called docker buildx, which you can use with Docker Desktop. In Linux environments, the buildx command also works with the build command on the terminal. Check out our Docker Buildx documentation to learn more.
BuildKit Engine
BuildKit is one core component within our Moby Project framework, which is also open source. It’s an efficient build system that improves upon the original Docker Engine. For example, BuildKit lets you connect with remote repositories like Docker Hub, and offers better performance via caching. You don’t have to rebuild every image layer after making changes.
While building a multi-arch image, BuildKit detects your specified architectures and triggers Docker Desktop to build and simulate those architectures. The docker buildx command helps you tap into BuildKit.
Docker Desktop
Docker Desktop is an application — built atop Docker Engine — that bundles together the Docker CLI, Docker Compose, Kubernetes, and related tools. You can use it to build, share, and manage containerized applications. Through the baked-in Docker Dashboard UI, Docker Desktop lets you tackle tasks with quick button clicks instead of manually entering intricate commands (though this is still possible).
Docker Desktop’s QEMU emulation support lets you build and simulate multiple architectures in a single environment. It also enables building and testing on your macOS, Windows, and Linux machines.
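Docker Desktop ships with this QEMU support preconfigured. On a plain Linux host running only Docker Engine, one common way to register the emulators is via the binfmt helper image referenced in the Buildx documentation (a sketch, assuming you're comfortable running a privileged container):

```shell
# Register QEMU handlers for foreign architectures via binfmt_misc.
docker run --privileged --rm tonistiigi/binfmt --install all
```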
Now that you have working knowledge of each component, let’s hop into our walkthrough.
Prerequisites
Our tutorial requires the following:

The correct Go binary for your OS, which you can download here
The latest version of Docker Desktop
A basic understanding of how Docker works. You can follow our getting started guide to familiarize yourself with Docker Desktop.

 
Building a Sample Go Application
Let’s begin by building a basic Go application which prints text to your terminal. First, create a new folder called multi_arch_sample and move to it:
mkdir multi_arch_sample && cd multi_arch_sample
Second, run the following command to track code changes in the application dependencies:
go mod init multi_arch_sample
Your terminal will output a similar response to the following:

go: creating new go.mod: module multi_arch_sample
go: to add module requirements and sums:
go mod tidy

 
Third, create a new main.go file and add the following code to it:

package main

import (
	"fmt"
	"net/http"
)

func readyToLearn(w http.ResponseWriter, req *http.Request) {
	w.Write([]byte("<h1>Ready to learn!</h1>"))
	fmt.Println("Server running…")
}

func main() {
	http.HandleFunc("/", readyToLearn)
	http.ListenAndServe(":8000", nil)
}

 
This code creates the function readyToLearn, which serves “Ready to learn!” at the 127.0.0.1:8000 web address and prints the phrase Server running… to the terminal whenever the page is requested.
Next, enter the go run main.go command in the terminal to run your application code. Visiting 127.0.0.1:8000 in a browser will then produce the Ready to learn! response.
Since your app is ready, you can prepare a Dockerfile to handle the multi-architecture deployment of your Go application.
Creating a Dockerfile for Multi-arch Deployments
Create a new file in the working directory and name it Dockerfile. Next, open that file and add in the following lines:

# syntax=docker/dockerfile:1

# specify the base image to be used for the application
FROM golang:1.17-alpine

# create the working directory in the image
WORKDIR /app

# copy Go modules and dependencies to image
COPY go.mod ./

# download Go modules and dependencies
RUN go mod download

# copy all the Go files ending with .go extension
COPY *.go ./

# compile application
RUN go build -o /multi_arch_sample

# network port at runtime
EXPOSE 8000

# execute when the container starts
CMD [ "/multi_arch_sample" ]

 
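One optional refinement: the Dockerfile above compiles the application inside each target platform, which is slow under emulation. BuildKit exposes automatic build arguments such as BUILDPLATFORM, TARGETOS, and TARGETARCH, so you can cross-compile on the native build platform instead. A sketch of that variant (same app and assumptions as above):

```dockerfile
# syntax=docker/dockerfile:1

# run the build stage natively on the build platform
FROM --platform=$BUILDPLATFORM golang:1.17-alpine AS build
ARG TARGETOS
ARG TARGETARCH
WORKDIR /app
COPY go.mod ./
RUN go mod download
COPY *.go ./

# cross-compile for the requested target platform
RUN GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /multi_arch_sample

# copy only the compiled binary into a small runtime image
FROM alpine
COPY --from=build /multi_arch_sample /multi_arch_sample
EXPOSE 8000
CMD [ "/multi_arch_sample" ]
```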
Building with Buildx
Next, you’ll need to build your multi-arch image. This image is compatible with both the amd64 and arm64 server architectures. Since you’re using Buildx, BuildKit is also enabled by default. You won’t have to switch on this setting or enter any extra commands to leverage its functionality.
The builder builds and provisions a container. It also packages the container for reuse. Additionally, Buildx supports multiple builder instances — which is pretty handy for creating scoped, isolated, and switchable environments for your image builds.
Enter the following command to create a new builder, which we’ll call mybuilder:
docker buildx create --name mybuilder --use --bootstrap
You should get a terminal response that says mybuilder. You can also view a list of builders using the docker buildx ls command. You can even inspect a new builder by entering docker buildx inspect <name>.
Triggering the Build
Now, you’ll jumpstart your multi-architecture build with the single docker buildx command shown below:
docker buildx build --push \
  --platform linux/amd64,linux/arm64 \
  --tag your_docker_username/multi_arch_sample:buildx-latest .
 
This does several things:

Combines the build command to start a build
Shares the image with Docker Hub using the push operation
Uses the --platform flag to specify the target architectures you want to build for. BuildKit then assembles the image manifest for those architectures
Uses the --tag flag to name the image multi_arch_sample

 
Once your build is finished, your terminal will display the following:
[+] Building 123.0s (23/23) FINISHED
 
Next, navigate to Docker Desktop and go to Images > REMOTE REPOSITORIES. You’ll see your newly created image in the Dashboard!
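To confirm the pushed image really contains both variants, you can also inspect its manifest list from the terminal (using the tag from the build step):

```shell
# Show the platforms bundled under the multi-arch tag.
docker buildx imagetools inspect your_docker_username/multi_arch_sample:buildx-latest
```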
Conclusion
Congratulations! You’ve successfully explored multi-architecture builds, step by step. You’ve seen how Docker Desktop, Buildx, BuildKit, and other tooling enable you to create and deploy multi-architecture images. While we’ve used a sample Go web application, you can apply these processes to other images and applications.
To tackle your own projects, learn how to get started with Docker to build more multi-architecture images with Docker Desktop and Buildx. We’ve also outlined how to create a custom registry configuration using Buildx.
Source: https://blog.docker.com/feed/