Spaceflight: NASA considers a commercial route to Mars
Will private spaceflight companies soon be flying to Mars? NASA has published a new request for proposals. A report by Patrick Klapetz (NASA, spaceflight)
Source: Golem
Shim makes Secure Boot usable on Linux systems. A vulnerability in the application allows attackers to take control. (Security vulnerability, Ubuntu)
Source: Golem
Docker Compose's simplicity — just run compose up — has been an integral part of developer workflows for a decade, with the first commit occurring in 2013, back when it was called Plum. Although the feature set has grown dramatically in that time, maintaining that experience has always been integral to the spirit of Compose.
In this post, we’ll walk through how to manage microservice sprawl with Docker Compose by importing subprojects from other Git repos.
Maintaining simplicity
Now, perhaps more than ever, that simplicity is key. The complexity of modern software development is undeniable regardless of whether you’re using microservices or a monolith, deploying to the cloud or on-prem, or writing in JavaScript or C.
Compose has not kept up with this “development sprawl” and is even sometimes an obstacle when working on larger, more complex projects. Maintaining Compose to accurately represent your increasingly complex application can require its own expertise, often resulting in out-of-date configuration in YAML or complex makefile tasks.
As an open source project, Compose serves everyone from home lab enthusiasts to transcontinental corporations, which is no small feat, and our commitment to maintaining Compose’s signature simplicity for all users hasn’t changed.
The increased flexibility afforded by Compose watch and include means your project no longer needs to be one-size-fits-all. Now, it’s possible to split your project across Git repos and import services as needed, customizing their configuration in the process.
Application architecture
Let’s take a look at a hypothetical application architecture. To begin, the application is split across two Git repos:
backend — Backend in Python/Flask
frontend — Single-page app (SPA) frontend in JavaScript/Node.js
While working on the frontend, developers skip Docker and Compose entirely, launching npm start directly on their laptops and proxying API requests to a shared staging server (as opposed to running the backend locally). Meanwhile, backend developers and CI (for integration tests) share a Compose file and rely on command-line tools like cURL to manually test functionality locally.
We’d like a flexible configuration that enables each group of developers to use their optimal workflow (e.g., leveraging hot reload for the frontend) while also allowing reuse to share project configuration between repos. At first, this seems like an impossible situation to resolve.
Frontend
We can start by adding a compose.yaml file to frontend:
services:
  frontend:
    pull_policy: build
    build:
      context: .
    environment:
      BACKEND_HOST: ${BACKEND_HOST:-https://staging.example.com}
    ports:
      - 8000:8000
Note: If you’re wondering what the Dockerfile looks like, take a look at this samples page for an up-to-date example of best practices generated by docker init.
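For reference, a minimal frontend Dockerfile might look something like the sketch below; the base image, working directory, and start command are assumptions for illustration, and docker init will generate a more complete, best-practice version.

# Minimal sketch of a Node.js Dockerfile (image tag and commands are assumptions)
FROM node:20-alpine
WORKDIR /app
# Install dependencies in their own layer so they are cached between builds
COPY package*.json ./
RUN npm install
# Copy the application source; compose watch keeps src/ in sync afterwards
COPY . .
EXPOSE 8000
CMD ["npm", "start"]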
This is a great start! Running docker compose up will now build the Node.js frontend and make it accessible at http://localhost:8000/.
The BACKEND_HOST environment variable can be used to control where upstream API requests are proxied to and defaults to our shared staging instance.
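Because Compose interpolates ${BACKEND_HOST} from the shell environment, a developer can point the frontend at a different backend without editing the file; the URL below is just a placeholder:

# Override the default staging backend for this run only (placeholder URL)
BACKEND_HOST=https://staging-eu.example.com docker compose up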
Unfortunately, we’ve lost the great developer experience afforded by hot module reload (HMR) because everything is inside the container. By adding a develop.watch section, we can preserve that:
services:
  frontend:
    pull_policy: build
    build:
      context: .
    environment:
      BACKEND_HOST: ${BACKEND_HOST:-https://staging.example.com}
    ports:
      - 8000:8000
    develop:
      watch:
        - path: package.json
          action: rebuild
        - path: src/
          target: /app/src
          action: sync
Now, while working on the frontend, developers continue to benefit from the rapid iteration cycles due to HMR. Whenever a file is modified locally in the src/ directory, it’s synchronized into the container at /app/src.
If the package.json file is modified, the entire container is rebuilt, so the RUN npm install step in the Dockerfile is re-executed and installs the latest dependencies. Best of all, the only change to the workflow is running docker compose watch instead of npm start.
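In practice, the day-to-day frontend loop now looks roughly like this:

# Build (if needed), start the frontend, and watch src/ and package.json for changes
docker compose watch

# When finished, stop and remove the containers
docker compose down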
Backend
Now, let’s set up a Compose file in backend:
services:
  backend:
    pull_policy: build
    build:
      context: .
    ports:
      - 1234:8080
    develop:
      watch:
        - path: requirements.txt
          action: rebuild
        - path: ./
          target: /app/
          action: sync

include:
  - path: git@github.com:myorg/frontend.git
    env_file: frontend.env

frontend.env:

BACKEND_HOST=http://backend:8080
Much of this looks very similar to the frontend compose.yaml.
When files in the project directory change locally, they’re synchronized to /app inside the container, so the Flask dev server can handle hot reload. If the requirements.txt is changed, the entire container is rebuilt, so that the RUN pip install step in the Dockerfile will be re-executed and install the latest dependencies.
However, we’ve also added an include section that references the frontend project by its Git repository. The custom env_file points to a local path (in the backend repo), which sets BACKEND_HOST so that the frontend service container will proxy API requests to the backend service container instead of the default.
Note: Remote includes are an experimental feature. You’ll need to set COMPOSE_EXPERIMENTAL_GIT_REMOTE=1 in your environment to use Git references.
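Putting it together, a backend developer could bring up the combined stack from the backend repo roughly as follows:

# Opt in to the experimental Git-remote include support
export COMPOSE_EXPERIMENTAL_GIT_REMOTE=1

# Clone the frontend repo, merge its services into the project, and start everything
docker compose up --build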
With this configuration, developers can now run the full stack while keeping the frontend and backend Compose projects independent and even in different Git repositories.
As developers, we’re used to sharing code library dependencies, and the include keyword brings this same reusability and convenience to your Compose development configurations.
What’s next?
There are still some rough edges. For example, the remote project is cloned to a temporary directory, which makes it impractical to use with watch mode when imported, as the files are not available for editing. Enabling bigger and more complex software projects to use Compose for flexible, personal environments is something we’re continuing to improve upon.
If you’re a Docker customer using Compose across microservices or repositories, we’d love to hear how we can better support you. Get in touch!
Learn more
Get the latest release of Docker Desktop.
Vote on what’s next! Check out our public roadmap.
Have questions? The Docker community is here to help.
New to Docker? Get started.
Source: https://blog.docker.com/feed/
In May 2023, Docker announced the beta release of docker init, a new command-line interface (CLI) tool in Docker Desktop designed to streamline the Docker setup process for various types of applications and help users containerize their existing projects. We’re now excited to announce the general availability of docker init, with support for multiple languages and stacks, making it simpler than ever to containerize your applications.
What is docker init?
Initially released in beta in Docker Desktop 4.18, docker init has undergone several enhancements since. It is a command-line utility that initializes Docker resources within a project: it automatically generates Dockerfiles, Compose files, and .dockerignore files based on the nature of the project, significantly reducing the setup time and complexity associated with Docker configurations.
The initial beta release of init came with support only for Go and generic projects. The latest version, available in Docker Desktop 4.27, supports Go, Python, Node.js, Rust, ASP.NET, PHP, and Java.
How to use docker init
Using docker init is straightforward and involves a few simple steps. Start by navigating to your project directory where you want the Docker assets to be initialized. In the terminal, execute the docker init command. This command initiates the tool and prepares it to analyze your project (Figure 1).
Figure 1: Docker init will suggest the best template for the application.
docker init will scan your project and ask you to confirm and choose the template that best suits your application. Once you select the template, docker init asks you for some project-specific information, automatically generating the necessary Docker resources for your project (Figure 2).
Figure 2: Once a template is applied, you’ll be ready to run your application with Compose.
This step includes creating a Dockerfile and a Compose file tailored to the language and framework of your choice, as well as other relevant files. The last step is to run docker compose up to start your newly containerized project.
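A session might look roughly like the following; the exact prompts and suggested defaults vary by project type and Docker Desktop version, so treat this transcript as an approximation:

$ docker init

Welcome to the Docker Init CLI!

This utility will walk you through creating the following files:
  - .dockerignore
  - Dockerfile
  - compose.yaml

? What application platform does your project use? Python
? What version of Python do you want to use? 3.11
? What port do you want your app to listen on? 8080
? What is the command you use to run your app? python3 app.py

$ docker compose up --build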
Why use docker init?
The docker init tool simplifies the process of dockerization, making it accessible even to those new to Docker. It eliminates the need to manually write Dockerfiles and other configuration files from scratch, saving time and reducing the potential for errors. With its template-based approach, docker init ensures that the Docker setup is optimized for the specific type of application you are working on and that your project will follow the industry’s best practices.
Conclusion
The general availability of docker init offers an efficient and user-friendly way to integrate Docker into your projects. Whether you’re a seasoned Docker user or new to containerization, docker init is set to enhance your development workflow.
Learn more
New to Docker? Start by downloading Docker Desktop.
Watch the following video tutorials for further insight on leveraging docker init:
Docker init — Docker short tutorial
Docker init with Python
Docker init with Rust
Build an AI App with FastAPI and Docker — Coding Tutorial with Tips
Let’s Discover: Docker init command
Docker init support for .NET applications
Source: https://blog.docker.com/feed/
We are happy to announce that Mutagen’s file-sharing technology, acquired by Docker, has been seamlessly integrated into Docker Desktop, and the synchronized file shares feature is available now in Docker Desktop. This enhancement brings fast and flexible host-to-VM file sharing, offering a performance boost for developers dealing with extensive codebases.
Synchronized file shares overcome the limitations of traditional bind mounts, providing native file system performance, so developers can enjoy 2-10x faster file operation speeds. Simply log in to Docker Desktop with your subscription account (Docker Pro, Teams, or Business) to experience this new time-saving feature.
Improving the developer experience
Synchronized file shares transform the backend development experience, saving developers time that traditional file-sharing systems would otherwise consume. Synchronized file sharing is ideal for developers who:
Manage large repositories or monorepos with more than 100,000 files, totaling significant storage.
Utilize virtual file systems (such as VirtioFS, gRPC FUSE, or osxfs) and face scalability issues with their workflows.
Encounter performance limitations and want a seamless file-sharing solution without worrying about ownership conflicts.
To get started, go to Settings and navigate to the File sharing tab within the Resources section (Figure 1). You can learn more about the functionality and how to use it in our documentation.
Figure 1: File sharing — shares have been created and are available for use in containers.
How Docker solves the problem
Using synchronized file system caches to improve bind mount performance isn’t a new idea, but this functionality has never been available to developers as an ergonomic first-party solution. With Docker’s acquisition of Mutagen, we’re now in a position to offer an easy-to-use and transparent mechanism with potentially order-of-magnitude improvements to developer workflows.
Bind mounts are the mechanism that Linux uses to make files (like code, scripts, and images) available to containers. They’re what you get when you specify a host path to the -v/--volume flag in docker run or docker create commands (or a host path under volumes: in Compose). If folders are bind-mounted in read/write mode (the default), they also allow containers to write back to the host file system, which is great for getting files (like build products) out of containers.
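For example, the following bind-mounts the host's current directory into a container at /app in read/write mode (the image and paths are placeholders):

# Bind-mount the current host directory and list its contents from inside the container
docker run --rm -v "$(pwd):/app" -w /app node:20-alpine ls /app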
When using containers natively on Linux, for example with Docker Engine, this functionality is enabled by the Linux kernel and comes with no performance impact. When using a cross-platform solution like Docker Desktop, the necessity of virtualization means that an additional file-sharing mechanism between the host system and the Linux VM is required to enable bind mounts.
Historically, Docker has used a number of virtual file system solutions to enable this host/VM file sharing, with different solutions available based on the host platform. The most recent of these mechanisms, VirtioFS, provides an excellent out-of-the-box file-sharing solution for most developers and projects, and we’re continuing to invest in further performance improvements. These virtual file systems operate by running a file server on the host, providing files on demand via FUSE-backed file systems within the VM.
Although virtual file systems work great for most cases, there are projects where additional performance is required. In cases where a project contains many thousands (or even millions) of files totaling hundreds of megabytes or gigabytes, the demanding system calls used by development tools can lead to extremely slow behavior.
Your project might fall into this category even if it contains only a single file — look at the staggering tree of dependencies that modern frameworks bring into your node_modules directory, for example. Modern developer tools like compilers, dynamic language runtimes, and package managers love to traverse file systems, issuing thousands or millions of readdir(), stat(), and open()/read()/write()/close() calls. With virtual file systems, each of these system calls has to be sent across the host/VM boundary (in addition to incurring the standard round trips between kernel space and user space within the Linux VM when using the FUSE stack).
Using synchronized file shares
This is where synchronized file shares come into play. With synchronized file shares, developers can create ext4-backed caches of host file system locations inside the Docker Desktop VM. This means all those expensive file system calls are now handled directly by the Linux kernel on a native file system. These caches are kept in sync with the host file system using the Mutagen file synchronization engine, so the files are propagated bidirectionally with ultra-low latency. For most developers, there should be no perceptible difference in the file-sharing experience, other than improved performance!
So what’s the trade-off? Well, you’ll pay to store the files twice (the originals on the host and the cache inside the VM). Given the relatively low cost of disk space, compared with the high cost of developer time, this trade-off is usually a no-brainer.
To keep you in control of what gets synced, we’ve made synchronized file shares a granular, opt-in experience (we don’t want to sync your entire hard drive by default). We’ve worked hard to make this step as easy as possible — select Create share in the File sharing settings pane and choose the location you want.
The opt-in nature of synchronized file shares also makes it easy to adopt either gradually or selectively — there’s no need to impose changes on your entire team. Any bind mount that can’t be provided by synchronized file shares’ caches will fall back to your default virtual file-sharing mechanism, meaning there’s no change to your existing workflows. Team members can opt-in to synchronized file shares as necessary, using the functionality as a strategic optimization for specific parts of a codebase.
Conclusion
We’re excited about this latest time-saving feature and what it means to you — freeing up time, increasing productivity, and enabling a focus on innovation. Docker Desktop continues investing in modernizing the developer experience, and synchronized file shares is the latest enhancement.
Learn more
Read the synchronized file shares documentation.
Read the Docker Desktop release notes.
Get the latest release of Docker Desktop.
New to Docker? Get started.
Have questions? The Docker community is here to help.
Source: https://blog.docker.com/feed/
Amazon SageMaker automatic model tuning now provides an API for deleting tuning jobs programmatically. This lets you remove tuning jobs that you no longer want returned by the ListHyperParameterTuningJobs API, reuse tuning job names, or streamline your tuning job history.
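As a sketch, deleting a tuning job from the AWS CLI looks roughly like this (the job name is a placeholder; the same operation is available through the SDKs):

# Delete a completed or stopped hyperparameter tuning job by name (placeholder name)
aws sagemaker delete-hyper-parameter-tuning-job \
    --hyper-parameter-tuning-job-name my-old-tuning-job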
Source: aws.amazon.com
Amazon Elastic Compute Cloud (Amazon EC2) C7gd, M7gd, and R7gd instances with up to 3.8 TB of local NVMe-based SSD block-level storage are now available in the Europe (Spain) Region.
Source: aws.amazon.com
Some Vision Pro owners are being asked for their PIN, which then does not work. Some suspect a bug. The problem can only be resolved by going through Apple. (Vision Pro, Apple)
Source: Golem
Savings drive at Amazon: Sneakers from Adidas and Vans are currently heavily discounted, by more than 40 percent depending on model and size. (Technology/Hardware, Amazon)
Source: Golem
Louder and with better background-noise suppression: a beta firmware update optimizes the speaker of the PlayStation 5 gamepad. (Playstation 5, Sony)
Source: Golem