Ad: Google Pixel 7 and Watch with a 320-euro discount at Media Markt
The Google Pixel 7 is radically cheap in various bundles at Media Markt. The set with the Google Pixel Watch is particularly interesting. (Google Pixel, Google)
Source: Golem
The New York Times already considered its rights infringed by ChatGPT. Now the US newspaper is reportedly working on a lawsuit against OpenAI. (AI, Software)
Source: Golem
In the rapidly evolving era of machine learning (ML) and artificial intelligence (AI), TensorFlow has emerged as a leading framework for developing and implementing sophisticated models. With the introduction of TensorFlow.js, those capabilities now extend to JavaScript developers.
TensorFlow.js is a JavaScript machine learning library that lets developers create ML models and run them directly in the browser or in Node.js apps, bringing TensorFlow into everyday web development. A particularly useful aspect of TensorFlow.js is its ability to use pretrained models, which opens up a wide range of possibilities for developers.
In this article, we will explore pretrained models in TensorFlow.js, show how to run them with Docker, and delve into their potential applications and benefits.
Understanding pretrained models
Pretrained models are a powerful tool for developers because they allow you to use ML models without having to train them yourself. This approach saves significant time and effort, and the resulting models are often more accurate than ones you could train from scratch.
A pretrained model is an ML model that has already been trained on a large volume of data. Because these models have learned complex patterns and representations, they are highly effective and precise at the specific tasks they were built for. By using pretrained models, developers avoid the considerable time and computing resources required to train a model from scratch.
Types of pretrained models available
There is a wide range of potential applications for pretrained models in TensorFlow.js.
For example, developers could use them to:
Build image classification models that can identify objects in images (a minimal sketch follows this list).
Build natural language processing (NLP) models that can understand and respond to text.
Build speech recognition models that can transcribe audio into text.
Build recommendation systems that can suggest products or services to users.
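For the image-classification case, here is a minimal sketch using the published @tensorflow-models/mobilenet package. The element ID photo is an assumption for illustration only and is not part of the demo described later in this article:

import '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';

async function classifyPhoto() {
  // Grab an <img> element already on the page (hypothetical id).
  const img = document.getElementById('photo');
  // Download the pretrained MobileNet weights.
  const model = await mobilenet.load();
  // Returns the top classes with probabilities, e.g. [{className, probability}, ...].
  const predictions = await model.classify(img);
  console.log(predictions);
}

classifyPhoto();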
TensorFlow.js and pretrained models
Developers can easily include pretrained models in their web applications using TensorFlow.js. With TensorFlow.js, you can benefit from robust machine learning algorithms without needing to be an expert in model deployment or training. The library offers a wide variety of pretrained models, including those for audio analysis, image recognition, natural language processing, and more (Figure 1).
Figure 1: Available pretrained model types for TensorFlow.
How does it work?
TensorFlow.js can load models saved in the TensorFlow SavedModel or Keras model formats directly. Once a model has been loaded, developers can use its features by invoking the methods exposed by the model API. Figure 2 shows the steps involved in training, distribution, and deployment.
Figure 2: TensorFlow.js model API for a pretrained image classification model.
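As a minimal sketch of those two loading paths, the TensorFlow.js API exposes tf.loadLayersModel for models converted from the Keras format and tf.loadGraphModel for models converted from a SavedModel. The URLs and the dummy input shape below are placeholders, not real endpoints:

import * as tf from '@tensorflow/tfjs';

async function loadModels() {
  // Keras-format model converted to a TensorFlow.js layers model.
  const layersModel = await tf.loadLayersModel('https://example.com/keras-model/model.json');
  layersModel.summary();

  // SavedModel converted to a TensorFlow.js graph model.
  const graphModel = await tf.loadGraphModel('https://example.com/saved-model/model.json');

  // Invoke the methods exposed by the model API, e.g. predict().
  const input = tf.zeros([1, 224, 224, 3]); // dummy input; the shape depends on the model
  const output = graphModel.predict(input);
  output.print();
}

loadModels();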
Training
The training section shows the steps involved in training a machine learning model. The first step is to collect data. This data is then preprocessed, which means that it is cleaned and prepared for training. The data is then fed to a machine learning algorithm, which trains the model.
Preprocess data: This is the process of cleaning and preparing data for training. This includes tasks such as removing noise, correcting errors, and normalizing the data.
TF Hub: TF Hub is a repository of pretrained ML models. These models can be used to speed up the training process or to improve the accuracy of a model.
tf.keras: tf.keras is a high-level API for building and training machine learning models. It is built on top of TensorFlow, which is a low-level library for machine learning.
Estimator: Estimators are a high-level TensorFlow API that offers a simplified way to build and train ML models.
Distribution
Distribution is the process of making a machine learning model available to users, either by packaging it in a format that can be easily shared or by deploying it to a production environment.
The distribution section shows the steps involved in distributing a machine learning model. The first step is to package the model, that is, convert it into a format that can be easily shared. The packaged model is then distributed to users, who can use it to make predictions.
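In practice, packaging a TensorFlow-trained model for the web usually means converting it with the tensorflowjs_converter tool from the tensorflowjs Python package; the file paths below are placeholders:

# Convert a Keras HDF5 model into the TensorFlow.js web format
pip install tensorflowjs
tensorflowjs_converter --input_format=keras ./model.h5 ./web_model

# Or convert a TensorFlow SavedModel
tensorflowjs_converter --input_format=tf_saved_model ./saved_model ./web_model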
Deployment
The deployment section shows the steps involved in deploying a machine learning model. The first step is to choose a framework, a set of tools that makes it easier to build and deploy ML models. The model is then converted into a format the framework can use and deployed to a production environment, where it serves predictions.
Benefits of using pretrained models
There are several pretrained models available in TensorFlow.js that can be utilized immediately in any project and offer the following notable advantages:
Savings in time and resources: Building an ML model from scratch might take a lot of time and resources. Developers can skip this phase and use a model that has already learned from lengthy training by using pretrained models. The time and resources needed to implement machine learning solutions are significantly decreased as a result.
State-of-the-art performance: Pretrained models are typically trained on huge datasets and refined by specialists, producing models that give state-of-the-art performance across a range of applications. Developers can benefit from these models’ high accuracy and reliability by incorporating them into TensorFlow.js, even if they lack a deep understanding of machine learning.
Accessibility: TensorFlow.js puts pretrained models within reach of web developers, allowing them to quickly and easily integrate cutting-edge machine learning capabilities into their projects. This accessibility creates new opportunities for developing innovative web-based solutions that make use of machine learning's capabilities.
Transfer learning: Pretrained models can also serve as the foundation for transfer learning. Using a smaller, domain-specific dataset, developers can further train a pretrained model. Transfer learning enables models to adapt quickly to particular use cases, making this method very helpful when data is scarce (see the sketch below).
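As a rough sketch of that workflow in TensorFlow.js, a pretrained MobileNet can be truncated and used as a fixed feature extractor for a small trainable head. The hosted model URL and the layer name follow the public TensorFlow.js examples; they are assumptions here, not part of this article's demo:

import * as tf from '@tensorflow/tfjs';

async function buildTransferModel(numClasses) {
  const mobilenet = await tf.loadLayersModel(
      'https://storage.googleapis.com/tfjs-models/tfjs/mobilenet_v1_0.25_224/model.json');

  // Use an intermediate activation as a fixed feature extractor.
  const layer = mobilenet.getLayer('conv_pw_13_relu');
  const featureExtractor = tf.model({inputs: mobilenet.inputs, outputs: layer.output});

  // Small trainable head for the new, domain-specific classes.
  const head = tf.sequential({
    layers: [
      tf.layers.flatten({inputShape: featureExtractor.outputs[0].shape.slice(1)}),
      tf.layers.dense({units: 64, activation: 'relu'}),
      tf.layers.dense({units: numClasses, activation: 'softmax'}),
    ],
  });
  head.compile({optimizer: 'adam', loss: 'categoricalCrossentropy'});

  // At training time: const features = featureExtractor.predict(images);
  // then head.fit(features, labels, {...}) on the small domain-specific dataset.
  return {featureExtractor, head};
}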
Why is containerizing TensorFlow.js important?
Containerizing TensorFlow.js brings several important benefits to the development and deployment process of machine learning applications. Here are five key reasons why containerizing TensorFlow.js is important:
Docker provides a consistent and portable environment for running applications. By containerizing TensorFlow.js, you can package the application, its dependencies, and runtime environment into a self-contained unit. This approach allows you to deploy the containerized TensorFlow.js application across different environments, such as development machines, staging servers, and production clusters, with minimal configuration or compatibility issues.
Docker simplifies the management of dependencies for TensorFlow.js. By encapsulating all the required libraries, packages, and configurations within the container, you can avoid conflicts with other system dependencies and ensure that the application has access to the specific versions of libraries it needs. This containerization eliminates the need for manual installation and configuration of dependencies on different systems, making the deployment process more streamlined and reliable.
Docker ensures the reproducibility of your TensorFlow.js application’s environment. By defining the exact dependencies, libraries, and configurations within the container, you can guarantee that the application will run consistently across different deployments.
Docker enables seamless scalability of the TensorFlow.js application. With containers, you can easily replicate and distribute instances of the application across multiple nodes or servers, allowing you to handle high volumes of user requests.
Docker provides isolation between the application and the host system and between different containers running on the same host. This isolation ensures that the application’s dependencies and runtime environment do not interfere with the host system or other applications. It also allows for easy management of dependencies and versioning, preventing conflicts and ensuring a clean and isolated environment in which the application can operate.
Building a fully functional ML face-detection demo app
By combining the power of TensorFlow.js and Docker, developers can create a fully functional machine learning (ML) face-detection demo app. Once the app is deployed, the TensorFlow.js model can recognize faces in real time using the camera. With a minor code change, developers can instead build an app that lets users upload images or videos for detection.
In this tutorial, you’ll learn how to build a fully functional face-detection demo app using TensorFlow.js and Docker. Figure 3 shows the file system architecture for this setup. Let’s get started.
Prerequisite
The following key component is essential to complete this walkthrough:
Docker Desktop
Figure 3: File system architecture for Docker Compose development setup.
Deploying an ML face-detection app is a simple process involving the following steps:
Clone the repository.
Set up the required configuration files.
Initialize TensorFlow.js.
Train and run the model.
Bring up the face-detection app.
We’ll explain each of these steps below.
Quick demo
If you're in a hurry, you can bring up the complete app by running the following command:
docker run -p 1234:1234 harshmanvar/face-detection-tensorjs:slim-v1
Then open http://localhost:1234 in your browser.
Figure 4: URL opened in a browser.
Getting started
Cloning the project
To get started, clone the repository by running the following command:
git clone https://github.com/dockersamples/face-detection-tensorjs
We are utilizing the MediaPipe Face Detector demo for this demonstration. You first create a detector by choosing one of the models from SupportedModels, including MediaPipeFaceDetector.
For example:
const model = faceDetection.SupportedModels.MediaPipeFaceDetector;
const detectorConfig = {
runtime: 'mediapipe', // or 'tfjs'
};
const detector = await faceDetection.createDetector(model, detectorConfig);
Then you can use the detector to detect faces:
const faces = await detector.estimateFaces(image);
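The objects returned by estimateFaces carry a bounding box and facial keypoints. A minimal sketch of consuming them might look like this, assuming ctx is a 2D canvas context sized to the input image:

const faces = await detector.estimateFaces(image, {flipHorizontal: false});

for (const face of faces) {
  // Bounding box around the detected face.
  const {xMin, yMin, width, height} = face.box;
  ctx.strokeRect(xMin, yMin, width, height);

  // Keypoints such as eyes, nose, mouth, and ears.
  for (const keypoint of face.keypoints) {
    ctx.fillRect(keypoint.x - 2, keypoint.y - 2, 4, 4);
  }
}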
File: index.html:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width,initial-scale=1,maximum-scale=1.0, user-scalable=no">
<style>
body {
margin: 0;
}
#stats {
position: relative;
width: 100%;
height: 80px;
}
#main {
position: relative;
margin: 0;
}
#canvas-wrapper {
position: relative;
}
</style>
</head>
<body>
<div id="stats"></div>
<div id="main">
<div class="container">
<div class="canvas-wrapper">
<canvas id="output"></canvas>
<video id="video" playsinline style="
-webkit-transform: scaleX(-1);
transform: scaleX(-1);
visibility: hidden;
width: auto;
height: auto;
">
</video>
</div>
</div>
</div>
</body>
<script src="https://cdnjs.cloudflare.com/ajax/libs/dat-gui/0.7.6/dat.gui.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/stats.js/r16/Stats.min.js"></script>
<script src="src/index.js"></script>
</html>
The web application’s principal entry point is the index.html file. It includes the video element needed to display the real-time video stream from the user’s webcam and the basic HTML page structure. The relevant JavaScript scripts for the facial detection capabilities are also imported.
File: src/index.js:
import '@tensorflow/tfjs-backend-webgl';
import '@tensorflow/tfjs-backend-webgpu';
import * as tfjsWasm from '@tensorflow/tfjs-backend-wasm';
tfjsWasm.setWasmPaths(
`https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-backend-wasm@${
tfjsWasm.version_wasm}/dist/`);
import * as faceDetection from '@tensorflow-models/face-detection';
import {Camera} from './camera';
import {setupDatGui} from './option_panel';
import {STATE, createDetector} from './shared/params';
import {setupStats} from './shared/stats_panel';
import {setBackendAndEnvFlags} from './shared/util';
let detector, camera, stats;
let startInferenceTime, numInferences = 0;
let inferenceTimeSum = 0, lastPanelUpdate = 0;
let rafId;
async function checkGuiUpdate() {
if (STATE.isTargetFPSChanged || STATE.isSizeOptionChanged) {
camera = await Camera.setupCamera(STATE.camera);
STATE.isTargetFPSChanged = false;
STATE.isSizeOptionChanged = false;
}
if (STATE.isModelChanged || STATE.isFlagChanged || STATE.isBackendChanged) {
STATE.isModelChanged = true;
window.cancelAnimationFrame(rafId);
if (detector != null) {
detector.dispose();
}
if (STATE.isFlagChanged || STATE.isBackendChanged) {
await setBackendAndEnvFlags(STATE.flags, STATE.backend);
}
try {
detector = await createDetector(STATE.model);
} catch (error) {
detector = null;
alert(error);
}
STATE.isFlagChanged = false;
STATE.isBackendChanged = false;
STATE.isModelChanged = false;
}
}
function beginEstimateFaceStats() {
startInferenceTime = (performance || Date).now();
}
function endEstimateFaceStats() {
const endInferenceTime = (performance || Date).now();
inferenceTimeSum += endInferenceTime - startInferenceTime;
++numInferences;
const panelUpdateMilliseconds = 1000;
if (endInferenceTime - lastPanelUpdate >= panelUpdateMilliseconds) {
const averageInferenceTime = inferenceTimeSum / numInferences;
inferenceTimeSum = 0;
numInferences = 0;
stats.customFpsPanel.update(
1000.0 / averageInferenceTime, 120);
lastPanelUpdate = endInferenceTime;
}
}
async function renderResult() {
if (camera.video.readyState < 2) {
await new Promise((resolve) => {
camera.video.onloadeddata = () => {
resolve(video);
};
});
}
let faces = null;
if (detector != null) {
beginEstimateFaceStats();
try {
faces =
await detector.estimateFaces(camera.video, {flipHorizontal: false});
} catch (error) {
detector.dispose();
detector = null;
alert(error);
}
endEstimateFaceStats();
}
camera.drawCtx();
if (faces && faces.length > 0 && !STATE.isModelChanged) {
camera.drawResults(
faces, STATE.modelConfig.boundingBox, STATE.modelConfig.keypoints);
}
}
async function renderPrediction() {
await checkGuiUpdate();
if (!STATE.isModelChanged) {
await renderResult();
}
rafId = requestAnimationFrame(renderPrediction);
};
async function app() {
const urlParams = new URLSearchParams(window.location.search);
await setupDatGui(urlParams);
stats = setupStats();
camera = await Camera.setupCamera(STATE.camera);
await setBackendAndEnvFlags(STATE.flags, STATE.backend);
detector = await createDetector();
renderPrediction();
};
app();
The index.js file contains the face-detection logic. It loads TensorFlow.js and performs real-time face detection on the video stream using the pretrained face-detection model. The file manages camera access, processes the video frames, and draws bounding boxes around the faces recognized in the video feed.
File: src/camera.js:
import {VIDEO_SIZE} from './shared/params';
import {drawResults, isMobile} from './shared/util';
export class Camera {
constructor() {
this.video = document.getElementById('video');
this.canvas = document.getElementById('output');
this.ctx = this.canvas.getContext('2d');
}
static async setupCamera(cameraParam) {
if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
throw new Error(
'Browser API navigator.mediaDevices.getUserMedia not available');
}
const {targetFPS, sizeOption} = cameraParam;
const $size = VIDEO_SIZE[sizeOption];
const videoConfig = {
'audio': false,
'video': {
facingMode: 'user',
width: isMobile() ? VIDEO_SIZE['360 X 270'].width : $size.width,
height: isMobile() ? VIDEO_SIZE['360 X 270'].height : $size.height,
frameRate: {
ideal: targetFPS,
},
},
};
const stream = await navigator.mediaDevices.getUserMedia(videoConfig);
const camera = new Camera();
camera.video.srcObject = stream;
await new Promise((resolve) => {
camera.video.onloadedmetadata = () => {
resolve(video);
};
});
camera.video.play();
const videoWidth = camera.video.videoWidth;
const videoHeight = camera.video.videoHeight;
// Must set below two lines, otherwise video element doesn’t show.
camera.video.width = videoWidth;
camera.video.height = videoHeight;
camera.canvas.width = videoWidth;
camera.canvas.height = videoHeight;
const canvasContainer = document.querySelector('.canvas-wrapper');
canvasContainer.style = `width: ${videoWidth}px; height: ${videoHeight}px`;
camera.ctx.translate(camera.video.videoWidth, 0);
camera.ctx.scale(-1, 1);
return camera;
}
drawCtx() {
this.ctx.drawImage(
this.video, 0, 0, this.video.videoWidth, this.video.videoHeight);
}
drawResults(faces, boundingBox, keypoints) {
drawResults(this.ctx, faces, boundingBox, keypoints);
}
}
The configuration for the camera’s width, audio, and other setup-related items is managed in camera.js.
File: .babelrc:
The .babelrc file is used to configure Babel, a JavaScript compiler, specifying presets and plugins that define the transformations to be applied during code transpilation.
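The repository's actual .babelrc is not reproduced here. Purely as an illustration, a browser-targeted TensorFlow.js project might use a configuration along these lines (hypothetical, not the file from this repo):

{
  "presets": [
    ["@babel/preset-env", { "targets": { "esmodules": true } }]
  ]
}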
File: src/shared:
shared % tree
.
├── option_panel.js
├── params.js
├── stats_panel.js
└── util.js
1 directory, 4 files
The files in the src/shared folder provide the shared parameters, checks, and utility functions needed to access the camera and configure the demo.
Defining services using a Compose file
Here’s how our services appear within a Docker Compose file:
services:
  tensorjs:
    #build: .
    image: harshmanvar/face-detection-tensorjs:v2
    ports:
      - 1234:1234
    volumes:
      - ./:/app
      - /app/node_modules
    command: watch
Your sample application has the following parts:
The tensorjs service is based on the harshmanvar/face-detection-tensorjs:v2 image.
This image contains the necessary dependencies and code to run a face detection system using TensorFlow.js.
It exposes port 1234 so you can reach the TensorFlow.js app from the host.
The volume ./:/app sets up a volume mount, linking the current directory (represented by ./) on the host machine to the /app directory within the container. This allows you to share files and code between your host machine and the container.
The command: watch entry specifies the command to run within the container. It starts the watch script (yarn watch), so the face-detection app is rebuilt continuously as source files change.
Building the image
It’s time to build the development image and install the dependencies to launch the face-detection model.
docker build -t tensor-development:v1 .
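The docker build command expects a Dockerfile in the cloned repository (the commented-out build: . line in the Compose file points at the same build context); that file is not shown in this article. As a rough, hypothetical sketch, a development image for a yarn-based TensorFlow.js project like this one could look as follows:

# Hypothetical development Dockerfile sketch, not the one shipped in the repository
FROM node:18-alpine
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
COPY . .
EXPOSE 1234
CMD ["yarn", "watch"]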
Running the container
docker run -p 1234:1234 -v $(pwd):/app -v /app/node_modules tensor-development:v1 watch
Bringing up the container services
You can launch the application by running the following command:
docker compose up -d
Then, use the docker compose ps command to confirm that your stack is running correctly. Your terminal will produce the following output:
docker compose ps
NAME IMAGE COMMAND SERVICE STATUS PORTS
tensorflow tensorjs:v2 "yarn watch" tensorjs Up 48 seconds 0.0.0.0:1234->1234/tcp
Viewing the containers via Docker Dashboard
You can also leverage the Docker Dashboard to view your container's ID and easily access or manage your application container (Figure 5).
Figure 5: Viewing containers in the Docker Dashboard.
Conclusion
Well done! You have learned how to use a pretrained machine learning model with JavaScript in a web application, thanks to TensorFlow.js. In this article, we have demonstrated how Docker Compose lets you quickly create and deploy a fully functional ML face-detection demo app, using just one YAML file.
With this newfound expertise, you can now take this guide as a foundation to build even more sophisticated applications with just a few additional steps. The possibilities are endless, and your ML journey has just begun!
Learn more
Get the latest release of Docker Desktop.
Vote on what’s next! Check out our public roadmap.
Have questions? The Docker community is here to help.
New to Docker? Get started.
Source: https://blog.docker.com/feed/
If you missed our webinar “Docker Scout: Live Demo, Insights, and Q&A” — or if you want to watch it again — it’s available on-demand. The audience had more questions than we had time to answer, so we’ve included additional Q&A below.
Many developers — and their employers — are concerned with securing their software supply chain. But what does that mean? Senior Developer Relations Manager Michael Irwin uses a coffee analogy (even though he doesn’t drink coffee himself!). To brew the best cup of coffee, you need many things: clean water, high-quality beans, and good equipment. For the beans and the water, you want assurances that they meet your standards. You might look for beans that have independent certification of their provenance and processing to make sure they are produced sustainably and ethically, for example.
The same concepts apply to producing software. You want to start with trusted content. Using images from Docker Official Images, Docker Verified Publishers, and Docker-Sponsored Open Source lets you know you’re building on a reliable, up-to-date foundation. From those images and your layered software libraries, Docker can build a software bill of materials (SBOM) that you can present to your customers to show exactly what went into making your application. And with Docker Scout, you can automatically check for known vulnerabilities, which helps you find and fix security issues before they reach your customers.
During the webinar, Senior Principal Software Engineer Christian Dupuis demonstrated Docker Scout. He highlighted how Docker Scout uses the SBOM and provenance attestations produced by BuildKit, and he showed Docker Scout ranking vulnerabilities by severity. Docker Scout doesn't stop at showing vulnerabilities: it tells you where a vulnerability is introduced into the image and provides suggestions for remediation.
The audience asked great questions during the live Q&A. Since we weren’t able to answer them all during the webinar, we want to take a moment to address them now.
Webinar Q&A
What source does Docker Scout use to determine the CVEs?
Docker Scout gets vulnerability data from approximately 20 advisory sources. This includes Linux distributions and code repository platforms like Debian, Ubuntu, GitHub, GitLab, and other trustworthy providers of advisory metadata.
We constantly cross-reference the SBOM information stored in the Docker Scout system-of-record with advisory data. New vulnerability information is immediately reflected on Docker Desktop, in the Docker Scout CLI, and on scout.docker.com.
How much does Docker Scout cost?
Docker Scout has several different price tiers. You can start for free with up to 3 image repositories; if you need more, we also offer paid plans. The Docker Scout product page has a full comparison to help you pick the right option.
How do I add Docker Scout to my CI pipeline?
The documentation on Docker Scout has a dedicated section on CI integrations.
How can I contribute?
There are several ways you can engage with the product team behind Docker Scout and influence the roadmap:
For feedback and issues about Docker Scout in the CLI, CI, Docker Desktop or scout.docker.com, open issues at https://github.com/docker/scout-cli.
Learn more about the Docker Scout Design Partner Program.
What platforms are supported?
Docker Scout works on all supported operating systems. You can use Docker Scout in Docker Desktop version 4.17 or later, or log in to scout.docker.com to see information across all of your Docker Hub images. Make sure you keep your Docker Desktop version up to date; we're adding new features and capabilities in every release.
We also provide a Docker Scout CLI plugin. You can find instructions in the scout-cli GitHub repository.
How do I export a list of vulnerabilities?
You can use the Docker Scout CLI to export vulnerabilities into a SARIF file for further processing. You can read more about this in the Docker Engine documentation.
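As an example, with a recent version of the Docker Scout CLI the export looks roughly like the following; the image name is a placeholder and flag spellings can change between releases, so check docker scout cves --help:

docker scout cves --format sarif --output scout-report.sarif myorg/myimage:latest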
How does Docker Scout help if I’m already using scanning tools?
Docker Scout builds upon a system of record for the entire software development life cycle, so you can integrate it with other tools you use in your software delivery process. Talk to us to learn more.
Get started with Docker Scout
Developers want speed, security, and choice. Docker Scout helps improve developer efficiency and software security by detecting known vulnerabilities early. While it offers remediation suggestions, developers still have the choice in determining the best approach to addressing vulnerabilities. Get started today to see how Docker Scout helps you secure your software supply chain.
Learn more
Watch our webinar “Docker Scout: Live Demo, Insights, and Q&A”.
Get the latest release of Docker Desktop.
Vote on what’s next! Check out our public roadmap.
Have questions? The Docker community is here to help.
New to Docker? Get started.
Source: https://blog.docker.com/feed/
Docker is committed to delivering the most efficient and high-performing container development toolset in the market, so we continue to advance our technology to exceed customer expectations. With the latest version 4.22 release, Docker Desktop has undergone significant optimizations, making it streamlined, lightweight, and faster than ever. Not only does this new version offer enhanced performance, but it also contributes to a more environmentally friendly approach, saving energy and reducing resource consumption on local machines:
Network speeds from 3.5 Gbit/sec to 19 Gbit/sec — a 443% improvement
Filesystem improvements that yield 60% faster builds
Enhanced active memory usage from 4GB to 2GB — a 2x improvement
Resource Saver mode — automatically reduces CPU and memory utilization by 10x
Discover how our commitment to delivering exceptional experiences for our developer community and customers shines through in the latest updates to Docker Desktop.
4.19: Networking stack — turbocharging container connectivity
The Docker Desktop 4.19 release brought a substantial boost to Docker Desktop’s networking stack, the technology used by containers to access the internet. This upgrade significantly enhances networking performance, which is particularly beneficial for tasks like docker builds, which often involve downloading and installing numerous packages.
Benchmark tests using iperf3 on a first-generation M1 Mac Mini demonstrated remarkable progress. The previous unoptimized network stack managed around 3.5 Gbit/sec, whereas the current default networking stack in 4.19+ achieves an impressive 19 Gbit/sec on the same machine. This optimization translates to faster build times and smoother container operations.
4.21: Optimized CPU, memory, and filesharing performance
Docker Desktop 4.21 introduced the first version of what is now a game-changing feature, Resource Saver, which automatically reduces CPU usage. This intelligent mode detects when Docker Desktop is not running containers and automatically reduces CPU consumption, ensuring that developers can keep the application running in the background without compromising battery life or dealing with noisy laptop fans. Across all Docker Desktop users on 4.21, this innovative feature has saved up to 38,500 CPU hours every day, making it a true productivity booster.
Furthermore, Docker Desktop 4.21 significantly enhanced its active memory usage, slashing it from approximately 4GB to around 2GB — a remarkable 2x advancement. This empowers developers to seamlessly juggle multiple applications alongside Docker Desktop, resulting in an elevated and smoother user experience.
Additionally, Docker Desktop now utilizes VirtioFS on macOS 12.5+ to deliver substantial performance gains when sharing files with containers through docker run -v. Notably, the time needed for a clean (non-incremental) build of redis/redis checked out on the host has been reduced by more than half over recent releases, resulting in ~60% faster builds and further solidifying Docker Desktop's reputation as an indispensable development tool.
4.22: Heightened efficiency — dramatically reducing memory utilization when idle
Now, with the release of Docker Desktop 4.22, we’re excited to announce that Docker Desktop’s newest performance enhancement feature, Resource Saver, supports automatic low memory mode for Mac, Windows, and Linux. This addition detects when Docker Desktop is not running containers and dramatically reduces its memory footprint by 10x, freeing up valuable resources on developers’ machines for other tasks and minimizing the risk of lag when navigating across different applications. Memory allocation can now be quick and efficient, resulting in a seamless and performant development experience.
But don’t just take it from us. In “What is Resource Saver Mode in Docker Desktop and what problem does it solve?” Ajeet Raina explains how the new Resource Saver feature optimizes efficiency, enhances performance, and simplifies the development workflow.
Conclusion
Docker Desktop continues to evolve. The latest enhancements in version 4.22, combined with the resource-saving features introduced in 4.21 and 4.19, have made Docker Desktop a lighter, faster, and more environmentally friendly solution for developers.
By optimizing resource usage and maximizing performance, Docker Desktop enables developers to build and release applications faster while being conscious of their environmental impact. As Docker continues to innovate and fine-tune its offerings, developers can expect even greater strides toward a more efficient and productive development experience.
Download or update to the newest version of Docker Desktop today to start saving time and take advantage of these new advancements.
Learn more
Get the latest release of Docker Desktop.
Vote on what’s next! Check out our public roadmap.
Have questions? The Docker community is here to help.
New to Docker? Get started.
Source: https://blog.docker.com/feed/
Why carry around several power adapters? At Amazon, an Oraimo charger with four USB ports and 120 W of output is incredibly cheap. (Technology/Hardware, Amazon)
Source: Golem
Debian, the second-oldest Linux distribution still in active development, has turned 30. Time for congratulations and a brief look back. (Debian, FreeBSD)
Source: Golem
The price tag comes under fire from clamping-brick sets: both the Tiger tank and the T-34 from Mould King are available at a discount. (Lego, Technology/Hardware)
Source: Golem
Versatile coffee enjoyment for every moment: this Philips coffee machine is now available at Amazon for an unbeatable price. (Household appliances, Technology/Hardware)
Source: Golem
In August 2023, 1&1 announced a national roaming agreement with Vodafone. Its previous partner O2 faces a hit to its cash flow. (O2, Long Term Evolution)
Source: Golem