Introducing Google Cloud Backup and DR

Backup is a fundamental aspect of application protection. As such, a seamlessly integrated, centralized backup service is vital for ensuring resilience and recoverability for data generated by Google Cloud services or on-premises infrastructure. Whether the need to restore data is triggered by user error, malicious activity, or some other cause, the ability to execute reliable, fast recovery from backups is a critical aspect of a resilient infrastructure. A comprehensive backup capability should have the following characteristics: 1) centralized backup management across workloads, 2) efficient use of storage to minimize costs, and 3) minimal recovery times. To effectively address these requirements, backup service providers must deliver efficiency at the workload level while also supporting a diverse spectrum of customer environments, applications, and use cases. Consequently, implementing a truly effective, user-friendly backup experience is no small feat. And that’s why, today, we’re excited to announce the availability of Google Cloud Backup and DR, enabling centralized backup management directly from the Google Cloud console.

Helping you maximize backup value

At Google Cloud we have a unique opportunity to solve backup challenges in ways that fully maximize the value you achieve. By building a product with our customers firmly in mind, we’ve made sure that Google Cloud Backup and DR makes it easy to set up, manage, and restore backups. As an example, we placed a high priority on delivering an intuitive, centralized backup management experience. With Google Cloud Backup and DR, administrators can effectively manage backups spanning multiple workloads. Admins can generate application- and crash-consistent backups for VMs on Compute Engine, VMware Engine, or on-premises VMware; databases (such as SAP, MySQL, and SQL Server); and file systems. Having a holistic view of your backups across multiple workloads means you spend less time on management and can be sure you have consistency and completeness in your data protection coverage.

[Image: Google Cloud Backup and DR dashboard]

Even better, Google Cloud Backup and DR stores backup data in its original, application-readable format. As a result, backup data for many workloads can be made available directly from long-term backup storage (e.g., leveraging cost-effective Cloud Storage), with no need for time-consuming data movement or translation. This accelerates recovery of critical files and supports rapid resumption of critical business operations.

Making sure you minimize backup TCO

Similarly, we took care to help you minimize the total cost of ownership (TCO) of your backups. With this objective in mind, we designed Google Cloud Backup and DR to use space-efficient “incremental forever” storage technology so that you pay only for what you truly need. With “incremental forever” backup, after Google Cloud Backup and DR takes an initial backup, subsequent backups store only the data that has changed relative to the prior backup. This allows backups to be captured more quickly and reduces the network bandwidth required to transmit the associated data. It also minimizes the amount of storage consumed by the backups, which reduces your storage costs. In addition, there is flexibility built in to allow you to strike your desired balance between storage cost and data retention time. For example, when choosing to store backups on Google Cloud Storage, you can select an appropriate Cloud Storage class in alignment with your needs.
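As a purely illustrative sketch, here is one way you might create a Cloud Storage bucket with a cost-optimized storage class using gsutil; the bucket name, location, and Nearline class are placeholder assumptions, and the exact steps for attaching a bucket to Backup and DR depend on your configuration:

# Hypothetical example: create a Nearline bucket intended for long-term backup storage
gsutil mb -c nearline -l us-central1 gs://example-backup-bucket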
Start reaping the benefits

The introduction of Google Cloud Backup and DR reflects our broader commitment to make cloud infrastructure easier to manage, faster, and less expensive, while also helping you build a more resilient business. By centralizing backup administration and applying cutting-edge storage and data management technologies, we’ve eliminated much of the complexity, time, and cost traditionally associated with enterprise data protection.

But don’t take our word for it. See for yourself in the Google Cloud console. Take advantage of $300 in free Google Cloud credits, give Google Cloud Backup and DR a try starting in late September 2022, and enjoy the benefits of cloud-integrated backup and recovery.
Source: Google Cloud Platform

How to Use the Alpine Docker Official Image

With its container-friendly design, the Alpine Docker Official Image (DOI) helps developers build and deploy lightweight, cross-platform applications. It’s based on Alpine Linux, which debuted in 2005, making it one of today’s newest major Linux distros.

While some developers express security concerns when using relatively newer images, Alpine has earned a solid reputation. Developers favor Alpine for the following reasons:  

- It has a smaller footprint, and therefore a smaller attack surface (even evading 2014’s ShellShock Bash exploit!).
- It takes up less disk space.
- It offers a strong base for customization.
- It’s built with simplicity in mind.

In fact, the Alpine DOI is one of our most popular container images on Docker Hub. To help you get started, we’ll discuss this image in greater detail and how to use the Alpine Docker Official Image with your next project. Plus, we’ll explore using Alpine to grab the slimmest image possible. Let’s dive in!

In this tutorial:

- What is the Alpine Docker Official Image?
- When to use Alpine
- How to run Alpine in Docker
- Use a quick pull command
- Build your Dockerfile
- Grabbing the slimmest possible image
- Get up and running with Alpine today

What is the Alpine Docker Official Image?

The Alpine DOI is a building block for Alpine Linux Docker containers. It’s an executable software package that tells Docker and your application how to behave. The image includes source code, libraries, tools, and other core dependencies that your application needs. These components help Alpine Linux function while enabling developer-centric features. 

The Alpine Docker Official Image differs from other Linux-based images in a few ways. First, Alpine is based on the musl libc implementation of the C standard library — and uses BusyBox instead of GNU coreutils. While GNU packages many Linux-friendly programs together, BusyBox bundles a smaller number of core functions within one executable. 

While our Ubuntu and Debian images leverage glibc and coreutils, Alpine’s musl and BusyBox alternatives are comparatively lightweight and resource-friendly, containing fewer extensions and less bloat.

As a result, Alpine appeals to developers who don’t need uncompromising compatibility or functionality from their image. Our Alpine DOI is also user-friendly and straightforward since there are fewer moving parts.

Alpine Linux performs well on resource-limited devices, which is fitting for developing simple applications or spinning up servers. Your containers will consume less RAM and less storage space. 

The Alpine Docker Official Image also offers the following features:

- The robust apk package manager
- A rapid, consistent development-and-release cycle vs. other Linux distributions
- Multiple supported tags and architectures, like amd64, arm/v6+, arm64, and ppc64le

Multi-arch support lets you run Alpine on desktops, mobile devices, rack-mounted servers, Raspberry Pis, and even newer M-series Macs. Overall, Alpine pairs well with a wide variety of embedded systems. 
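To get a quick feel for the apk package manager mentioned above, you can start a throwaway Alpine container and install a package interactively; the curl package here is just an illustration:

# Start an interactive, disposable Alpine container
docker run -it --rm alpine:latest sh

# Inside the container: refresh the package index, then install and check a package
/ # apk update
/ # apk add curl
/ # curl --version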

These are only some of the advantages to using the Alpine DOI. Next, we’ll cover how to harness the image for your application. 

When to use Alpine

You may be interested in using Alpine, but find yourself asking, “When should I use it?” Containerized Alpine shines in some key areas: 

- Creating servers
- Router-based networking
- Development/testing environments

While there are some other uses for Alpine, most projects will fall under these categories. Overall, our Alpine container image excels in situations where space savings and security are critical.

How to run Alpine in Docker

Before getting started, download Docker Desktop and then install it. Docker Desktop is built upon Docker Engine and bundles together the Docker CLI, Docker Compose, and other core components. Launching Docker Desktop also lets you use Docker CLI commands (which we’ll get into later). Finally, the included Docker Dashboard will help you visually manage your images and containers. 

After completing these steps, you’re ready to Dockerize Alpine!

Note: For Linux users, Docker will still work perfectly fine if you have it installed externally on a server, or through your distro’s package manager. However, Docker Desktop for Linux does save time and effort by bundling all necessary components together — while aiding productivity through its user-friendly GUI. 

Use a quick pull command

You’ll first need to pull the Alpine Docker Official Image before using it for your project. The fastest method involves running docker pull alpine from your terminal. This grabs the alpine:latest image (the most current available version) from Docker Hub and downloads it locally to your machine:
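Here’s what that looks like, along with representative output; the digest and status lines will vary with the current alpine:latest release:

$ docker pull alpine
Using default tag: latest
latest: Pulling from library/alpine
Digest: sha256:<varies by release>
Status: Downloaded newer image for alpine:latest
docker.io/library/alpine:latest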

Your terminal output should show when your pull is complete — and which alpine version you’ve downloaded. You can also confirm this within Docker Desktop. Navigate to the Images tab from the left sidebar, and a list of downloaded images will populate on the right. You’ll see your alpine image, its tag, and its minuscule (yes, you saw that right) 5.29 MB size.

Other Linux distro images like Ubuntu, Debian, and Fedora are many, many times larger than Alpine.

That’s a quick introduction to using the Alpine Official Image alongside Docker Desktop. But it’s important to remember that every Alpine DOI version originates from a Dockerfile. This plain-text file contains instructions that tell Docker how to build an image layer by layer. Check out the Alpine Linux GitHub repository for more Dockerfile examples. 

Next up, we’ll cover the significance of these Dockerfiles to Alpine Linux, some CLI-based workflows, and other key information.

Build your Dockerfile

Because Alpine is a standard base for container images, we recommend building on top of it within a Dockerfile. To create this file, specify your preferred alpine image tag and add your build instructions. Our example starts from alpine:3.14 and sets up the mysql client as the container’s entrypoint:

FROM alpine:3.14
RUN apk add --no-cache mysql-client
ENTRYPOINT ["mysql"]

In this case, we’re starting from a slim base image and adding our mysql-client using Alpine’s standard package manager. Overall, this lets us run commands against our MySQL database from within our application. 
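As a quick sketch of how you might build and use this image (the image name, database host, and user below are hypothetical placeholders):

# Build the image from the Dockerfile above
docker build -t alpine-mysql-client .

# Run the client against a database; arguments are passed to the mysql entrypoint
docker run -it --rm alpine-mysql-client -h db.example.com -u appuser -p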

This is just one of the many ways to get your Alpine DOI up and running. In particular, Alpine is well-suited to server builds. To see this in action, check out Kathleen Juell’s presentation on serving static content with Docker Compose, Next.js, and NGINX. Navigate to timestamp 7:07 within the embedded video. 

The Alpine Official Image has a close relationship with other technologies (something that other images lack). Many of our Docker Official Images support -alpine tags. For instance, our earlier example of serving static content leverages the node:16-alpine image as a builder. 

This relationship makes Alpine and multi-stage builds an ideal pairing. Since the primary goal of a multi-stage build is to reduce your final image size, we recommend starting with one of the slimmest Docker Official Images.
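As an illustrative sketch (the Node.js build commands and directory names here are assumptions about a typical static-site project, not a prescribed setup), a multi-stage Dockerfile might compile an app in node:16-alpine and copy only the build output into a slim runtime image:

# Stage 1: build the application in a Node-based Alpine image
FROM node:16-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: serve the static output from a slim NGINX Alpine image
FROM nginx:1.23.1-alpine
COPY --from=builder /app/build /usr/share/nginx/html

Only the final stage ends up in the image you ship, so the Node toolchain and source tree never reach production.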

Grabbing the slimmest possible image

Pulling an -alpine version of a given image typically yields the slimmest result. You can do this using our earlier docker pull [image] command. Or you can create a Dockerfile and specify this image version — while leaving room for customization with added instructions. 
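For instance, a version-pinned pull might look like this (using one of the tags from the table below):

docker pull python:3.9.13-alpine

The Dockerfile equivalent simply starts from that same tag, e.g. FROM python:3.9.13-alpine.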

In either case, here are some results using a few of our most popular images. You can see how image sizes change with these tags:

Image tag        Image size    image:[version number]-alpine size
python:3.9.13    867.66 MB     46.71 MB
node:18.8.0      939.71 MB     164.38 MB
nginx:1.23.1     134.51 MB     22.13 MB

The non-alpine sizes above reflect the :latest images at the time of writing, since :latest is the default tag Docker grabs from Docker Hub. As shown above with Python, pulling the -alpine image version reduces its footprint by nearly 95%!

From here, the build process (when working from a Dockerfile) becomes much faster. Applications based on slimmer images spin up quicker. You’ll also notice that docker pull and various docker run commands execute more swiftly with -alpine images.

However, remember that you’ll likely have to use this tag with a specified version number for your parent image. Running docker pull python-alpine or docker pull python:latest-alpine won’t work. Docker will alert you that the image isn’t found, the repo doesn’t exist, the command is invalid, or login information is required. This applies to any image. 

Get up and running with Alpine today

The Alpine Docker Official Image shines thanks to its simplicity and small size. It’s a fantastic base image — perhaps the most popular amongst Docker users — and offers plenty of room for customization. Alpine is arguably the most user-friendly, containerized Linux distro. We’ve tackled how to use the Alpine Official Image, and showed you how to get the most from it. 

Want to use Alpine for your next application or server? Pull the Alpine Official Image today to jumpstart your build process. You can also learn more about supported tags on Docker Hub. 

Additional resources

- Browse the official Alpine Wiki.
- Learn some Alpine fundamentals via the Alpine newbie Wiki page.
- Read similar articles about Docker Images.
- Download and install the latest version of Docker Desktop.
Source: https://blog.docker.com/feed/

A new sign-in experience is now generally available for Amazon QuickSight

Amazon QuickSight offers its users a new sign-in flow that aligns the sign-in experience with existing sign-in patterns across AWS applications. The QuickSight sign-in now happens in three steps: 1) on the first page, you enter your QuickSight account name; 2) on the second page, you enter your user name; 3) the third page varies depending on your sign-in configuration: native QuickSight or Active Directory user, AWS root user, or IAM user. This change does not affect users who sign in with single sign-on (SSO).
Source: aws.amazon.com

Amazon QuickSight introduces a new user interface for dataset management

Amazon QuickSight is introducing a new user interface for dataset management. Previously, the dataset management experience was a modal pop-up dialog with limited space, in which all functionality was displayed within a small modal window. The new dataset management interface replaces the previous modal pop-up dialog with a full-page experience. This makes the individual dataset management categories, including Summary, Refresh, Permissions, and Usage, easier to view. This update also lays the groundwork for future improvements and features. Learn more here.
Source: aws.amazon.com

Amazon Personalize increases the number of events considered by filters to make recommendations even more relevant

Amazon Personalize has expanded the capabilities of its filters by raising limits and giving you control over the number of interactions each filter considers. Amazon Personalize filters improve the relevance of recommendations by removing products users have already purchased, videos they have already watched, or other digital content they have already consumed in their recent interactions. Repeated recommendations can be frustrating for users, causing them to engage less and potentially costing you revenue. Amazon Personalize now lets you increase the number of interactions considered by filters to better capture users’ historical activity, particularly for use cases where customers have a high volume of interactions. Filters now consider up to 100 interactions per user and event type.
Source: aws.amazon.com