Electric Ford: Mustang Cobra Jet with 1,400 hp for extreme sprints
Electric cars can achieve dream-like acceleration figures – a Ford Mustang now wants to move right to the front of the pack. (Electric car, Technology)
Source: Golem
Sulfur hexafluoride is used as an insulating medium in many technical installations. The problem: the gas fuels the greenhouse effect twenty thousand times more strongly than carbon dioxide. The search for alternatives is therefore running at full speed. A report by Jan Oliver Löfken (Environmental protection, Siemens)
Source: Golem
Communication differs between Valve's teams: the Counter-Strike team speaks up and offers reassurance, while the Team Fortress 2 team stays silent. (CS:GO, Server)
Source: Golem
Disney apparently wants to make Star Wars the flagship of its video streaming subscription Disney+. (Star Wars, Disney)
Source: Golem
Anyone who buys a pet coffin on Ebay in order to build a vampire coffin for a custom Big Jim figure apparently has a very particular hobby. A personal account by Achim Sawall (BigJim, Games)
Source: Golem
Introduction
In today’s fast-paced development world, CTOs, dev managers, and product managers demand quicker turnarounds for features and defect fixes. “No problem, boss,” you say. “We’ll just use containers.” And you would be right. But once you start digging in and looking at ways to get started with containers, well, quite frankly, it’s complex.
One of the biggest challenges is getting a toolset installed and set up so you can build images, run containers, and duplicate a production Kubernetes cluster locally. And then shipping containers to the cloud, well, that’s a whole ‘nother story.
Docker Desktop and Docker Hub are two of the foundational toolsets to get your images built and shipped to the cloud. In this two-part series, we’ll get Docker Desktop set up and installed, build some images and run them using Docker Compose. Then we’ll take a look at how we can ship those images to the cloud, set up automated builds, and deploy our code into production using Docker Hub.
Docker Desktop
Docker Desktop is the easiest way to get started with containers on your development machine. Docker Desktop comes with the Docker Engine, the Docker CLI, Docker Compose, and Kubernetes. With Docker Desktop there is no cloning of repos, running makefiles, or searching Stack Overflow to fix build and install errors. You just download the installer for your OS and double-click to get started. Let’s quickly walk through the process now.
Installing Docker Desktop
Docker Desktop is available for Mac and Windows. Navigate over to the Docker Desktop homepage and choose your OS.
Once the download has completed, double click on the image and follow the instructions to get Docker Desktop installed. For more information on installing for your specific operating system, click the link below.
Install Docker Desktop on Mac
Install Docker Desktop on Windows
Docker Desktop UI Overview
Once you’ve downloaded and installed Docker Desktop and the whale icon has become steady, you’re all set: Docker Desktop is running on your machine.
Dashboard
Now, let’s open the Docker Dashboard and take a look around.
Click on the Docker icon and choose “Dashboard” from the dropdown menu.
The following window should open:
As you can see, we do not have any containers running at this time. We’ll fix that in a minute but for now, let’s take a quick tour of the dashboard.
Login with Docker ID
The first thing we want to do is log in with our Docker ID. If you do not already have one, head over to Docker Hub and sign up. Go ahead, I’ll wait.
Okay, in the top right corner of the Dashboard, you’ll see the Sign in button. Click on that and enter your Docker ID and Password. If instead, you see your Docker ID, then you are already logged in.
Settings
Now let’s take a look at the settings you can configure in Docker Desktop. Click on the settings icon in the upper right hand corner of the window and you should see the Settings screen:
General
Under this tab you’ll find the general settings, such as starting Docker Desktop when you log in to your machine, automatically checking for updates, including the Docker Desktop VM in backups, and sending usage statistics to Docker.
The default settings are fine; you really do not need to change them unless you are doing advanced image builds and need to back up your working images, or you want more control over when Docker Desktop starts.
Resources
Next, let’s take a look at the Resources tab. This tab and its sub-tabs are where you control the resources allocated to your Docker environment. The default settings are sufficient to get started, but if you are building a lot of images or running many containers at once, you might want to bump up the number of CPUs and the amount of memory and swap space. You can find more information about these settings in our documentation.
Docker Engine
If you are looking to make more advanced changes to the way the Docker Engine runs, then this is the tab for you. The Docker Engine daemon is configured using a daemon.json file located in /etc/docker/daemon.json on Linux systems. But when using Docker Desktop, you will add the config settings here in the text area provided. These settings will get passed to the Docker Engine that is used with Docker Desktop. All available configurations can be found in the documentation.
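For example, a minimal configuration that turns on debug logging might look like the following. These are standard daemon.json options, and this is just an illustrative sketch rather than a recommended setup:
{
  "debug": true,
  "experimental": false,
  "insecure-registries": [],
  "registry-mirrors": []
}
Paste the JSON into the text area, click Apply & Restart, and the Docker Engine restarts with the new settings.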
Command Line
Turning on and off experimental features for the CLI is as simple as toggling a switch. These features are for testing and feedback purposes only. So don’t rely on them for production. They could be changed or removed in future builds.
You can find more information about what experimental features are included in your build on this documentation page.
Kubernetes
Docker Desktop comes with a standalone Kubernetes server and client that is integrated with the Docker CLI. This tab is where you can enable and disable Kubernetes. The instance is not configurable and runs as a single-node cluster.
The Kubernetes server runs within a Docker container and is intended for local testing only. When Kubernetes support is enabled, you can deploy your workloads, in parallel, on Kubernetes, Swarm, and as standalone containers. Enabling or disabling the Kubernetes server does not affect your other workloads.
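Once Kubernetes is enabled, you can point kubectl at the local cluster and confirm that the single node is up. A quick check (docker-desktop is the context name Docker Desktop creates for you):
$ kubectl config use-context docker-desktop
$ kubectl get nodes
You should see a single node named docker-desktop in the Ready state.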
Troubleshoot
Let’s move on to the Troubleshoot screen. Click on the bug icon in the upper right-hand corner of the window and you should see the following Troubleshoot screen:
Here is where you can restart Docker Desktop, run diagnostics, reset features, and uninstall Docker Desktop.
Building Images and Running Containers
Now that we have Docker Desktop installed and have a good overview of the UI, let’s jump in and create a Docker image that we can run and ship to Hub.
Docker consists of two major components: the Engine that runs as a daemon on your system and a CLI that sends commands to the daemon to build, ship and run your images and containers.
In this article, we will be primarily interacting with Docker through the CLI.
Difference between Images and Containers
A container is a process running on your system just like any other process. But the difference between a “container” process and a “normal” process is that the container process has been sandboxed or isolated from other resources on the system.
One of the main pieces of this isolation is the filesystem. Each container is given its own private filesystem which is created from the Docker image. This Docker image is where everything is packaged for the processes to run – code, libraries, configuration files, environment variables and runtime.
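You can see this distinction directly on the command line: images and containers are listed by different commands. For example:
$ docker image ls   # the images stored on your machine
$ docker ps         # containers that are currently running
$ docker ps --all   # all containers, including stopped ones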
Creating a Docker Image
I’ve put together a small Node.js application that I’ll use for demonstration purposes, but any web application would follow the same principles we’ll be talking about. Feel free to use your own application and follow along.
First, let’s clone the application from GitHub.
$ git clone git@github.com:pmckeetx/projectz.git
Open the project in your favorite text editor. You’ll see that the application is made up of a UI written in React.js and a backend service written in Node.js and Express.
Let’s install the dependencies and run the application locally to make sure everything is working.
Open your favorite terminal and cd into the root directory of the project.
$ cd services
$ npm install
Now let’s install the UI dependencies.
$ cd ../ui
$ npm install
Let’s start the services project first. Open a new terminal window and cd into the services directory. To run the application execute the following command:
$ npm run start
Back in your original terminal window (which should still be in the ui directory), start the UI with the following command:
$ npm run start
If a browser window is not opened for you automatically, fire up your favorite browser and navigate to http://localhost:3000/
You should see the following screen:
If you do not see a list of projects or get an error message, make sure you have the services project running.
Okay, great, we have everything set up and running.
Dockerfile
Before we build our images, let’s take a quick look at the Dockerfile we’ll use to build the services image.
In your text editor, open the Dockerfile for the services project. You should see the following:
FROM node:lts
ARG NODE_ENV=production
ENV NODE_ENV $NODE_ENV
WORKDIR /code
ARG PORT=80
ENV PORT $PORT
COPY package.json /code/package.json
COPY package-lock.json /code/package-lock.json
RUN npm ci
COPY . /code
CMD [ "node", "src/server.js" ]
A Dockerfile is essentially a build script: an ordered list of instructions that tells Docker how to assemble your image.
FROM node:lts
The first line in the file tells Docker that we will be using the long-term-support (LTS) version of Node.js as our base image.
ARG NODE_ENV=production
ENV NODE_ENV $NODE_ENV
Next, we create a build arg named NODE_ENV with a default value of “production”, and then set the NODE_ENV environment variable to whatever value the build arg holds.
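Because NODE_ENV is a build argument with a default, you can override it at build time without editing the Dockerfile. For example, a hypothetical development build might look like this (we’ll cover the docker build command properly shortly; the dev tag is just an illustrative name):
$ docker build --build-arg NODE_ENV=development --tag projectz-svc:dev .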
WORKDIR /code
Now we tell Docker to create a directory named /code and use it as our working directory. The COPY and RUN commands that follow will be performed in this directory.
ARG PORT=80
ENV PORT $PORT
Here we are creating another build argument and assigning 80 as the value. Then this build argument is used to set the PORT environment variable.
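And because PORT is also exported as an environment variable, you can change it when the container starts (we’ll see docker run shortly), assuming, as is typical for Express apps, that src/server.js reads process.env.PORT:
$ docker run --rm --env PORT=3001 -p 8080:3001 projectz-svc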
COPY package.json /code/package.json
COPY package-lock.json /code/package-lock.json
RUN npm ci
These COPY commands copy the package*.json files into our image, where they are used by npm ci to install the Node.js dependencies.
COPY . /code
Now we’ll copy our application code into the image.
Quick note: Dockerfiles are executed from top to bottom, and each command is first checked against the build cache. If nothing relevant has changed, Docker reuses the cached layer instead of running the command. If something has changed, that layer and every subsequent layer is invalidated, and the corresponding commands are re-run. So if we want the fastest build possible and to avoid invalidating the entire cache on every image build, we should place the commands that change most often as close to the bottom of the Dockerfile as possible.
For example, we copy the package.json and package-lock.json files into the image before we copy the source code, because the source code changes far more often than the dependency list in package.json.
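You can watch the cache in action by simply building twice in a row. The second build should finish almost instantly, printing “Using cache” for each unchanged step; edit a source file, and only the layers from COPY . /code onward are rebuilt:
$ docker build --tag projectz-svc .   # first run executes every step
$ docker build --tag projectz-svc .   # second run reuses the cached layers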
CMD [ “node”, “src/server.js” ]
The last line in our Dockerfile tells Docker what command we would like to execute when our image is started. In this case, we want to execute the command: node src/server.js
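Note that CMD only supplies a default. Anything you pass after the image name on docker run replaces it, which is handy for poking around an image. For example, to print the Node.js version baked into the image instead of starting the server:
$ docker run --rm projectz-svc node --version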
Building the image
Now that we understand our Dockerfile, let’s have Docker build the image.
In the root of the services directory, run the following command:
$ docker build --tag projectz-svc .
This tells Docker to build our image using the Dockerfile located in the current directory and to tag the resulting image with projectz-svc.
You should see a similar output when Docker has finished building the image.
Successfully built 922d1db89268
Successfully tagged projectz-svc:latest
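Once the build finishes, you can confirm the image exists, and check its size, by listing it:
$ docker image ls projectz-svc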
Now let’s run our container and make sure we can connect to it. Run the following command to start our image, mapping port 8080 on the host to port 80 inside the container:
$ docker run -it --rm --name services -p 8080:80 projectz-svc
You should see the following printed to the terminal:
Listening on port: 80
Open your browser and navigate to http://localhost:8080/services/projects
If all is well, you will see a bunch of JSON returned in the browser and “GET /services/projects” printed in the terminal.
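If you prefer the terminal to the browser, a quick curl against the mapped host port exercises the same endpoint:
$ curl http://localhost:8080/services/projects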
Let’s do the same for the front-end UI. I won’t walk you through its Dockerfile at this time, but we will revisit it when we look at pushing to the cloud.
Navigate in your terminal into the UI source directory and run the following commands:
$ docker build --tag projectz-ui .
$ docker run -it --rm --name ui -p 3000:80 projectz-ui
Again, open your favorite browser and navigate to http://localhost:3000/
Awesome!!!
Now, if you remember, at the beginning of the article we took a look at the Docker Desktop UI. At that time we did not have any containers running. Open the Docker Dashboard by clicking on the whale icon, either in the Notification Area (or System Tray) on Windows or from the menu bar on Mac.
We can now see our two containers running:
If you do not see them running, re-run the following commands in your terminal.
$ docker run -it --rm --name services -p 8080:80 projectz-svc
$ docker run -it --rm --name ui -p 3000:80 projectz-ui
Hover your mouse over one of the containers and you’ll see buttons appear.
With these buttons you can do the following:
Open in a browser – If the container exposes a port, you can click this button and open your application in a browser.
CLI – This button will run docker exec in a terminal for you.
Stop/Start – You can start and stop your container.
Restart – You are also able to restart your container.
Delete – You can also remove your container.
Now click on the ui container to view its details page.
On the details screen, we are able to view the container logs, inspect the container, and view stats such as CPU Usage, Memory Usage, Disk Read/Writes, and Networking I/O.
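Everything on the details screen has a CLI equivalent if you prefer working in a terminal. For example, for our services container:
$ docker logs services      # the container's log output
$ docker stats services     # live CPU, memory, and I/O usage
$ docker inspect services   # low-level configuration as JSON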
Docker-compose
Now let’s take a look at how we can do this a little easier using docker-compose. Using docker-compose, we can configure both our applications in one file and start both of them with one command.
If you take a look in the root of our git repo, you’ll see a docker-compose.yml file. Open that file in your text editor and let’s have a look.
version: "3.7"
services:
  ui:
    image: projectz-ui
    build:
      context: ./ui
      args:
        NODE_ENV: production
        REACT_APP_SERVICE_HOST: http://localhost:8080
    ports:
      - "3000:80"
  services:
    image: projectz-svc
    build:
      context: ./services
      args:
        NODE_ENV: production
        PORT: "80"
    ports:
      - "8080:80"
This file combines all the parameters we passed to our two earlier commands to build and run our services.
If you have not done so already, stop and remove the services and ui containers that we started earlier.
$ docker stop services
$ docker stop ui
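Because we started both containers with the --rm flag, stopping them also removes them. You can verify that nothing was left behind:
$ docker ps --all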
Now let’s start our application using docker-compose. Make sure you are in the root of the git repo and run the following command:
$ docker-compose up --build
Docker Compose will build our images and tag them. Once that is finished, Compose will start two containers – one for the UI application and one for the services application.
Open up the Docker Desktop dashboard screen and you will now be able to see we have projectz running.
Expand the projectz entry and you will see our two containers running:
If you click on either one of the containers, you will have access to the same details screens as before.
Docker Compose is a huge improvement over running each individual docker build and docker run command as we did before. Just imagine if you had tens of services, or even hundreds of microservices, running your application and had to start each container one at a time. With Docker Compose you can configure your application and its build arguments, and start all of its services, with one command.
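Compose manages the whole application lifecycle, too. A few follow-up commands worth knowing:
$ docker-compose ps             # list the containers compose started
$ docker-compose logs services  # view the logs of a single service
$ docker-compose down           # stop and remove the whole application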
Next Steps
For more on how to use Docker Desktop, check out these resources:
Docker Overview
Getting started tutorial
Stay tuned for Part II of this series where we’ll use Docker Hub to build our images, run automated tests, and push our images to the cloud.
The post Using Docker Desktop and Docker Hub Together – Part 1 appeared first on Docker Blog.
Source: https://blog.docker.com/feed/
Not all application modernization strategies are created equal. One of the simplest approaches is to take an existing virtual machine and save it as a container. But while the resulting container will work, it won’t give you the benefits of more sophisticated modernization techniques—both in terms of resource utilization and the advanced “day-two operations” made possible by running on an advanced container management platform like Anthos GKE.
Today we announced several new updates for Anthos, including the latest release of Migrate for Anthos. Our automated containerization solution now includes enhanced VM-to-container conversion capabilities that can help you modernize your legacy workloads onto Kubernetes and Anthos. It’s also tightly integrated with Anthos Service Mesh, supports Anthos running on-premises, and can convert legacy Windows Server applications into containers.
Beyond lift and shift with images
Earlier versions of Migrate for Anthos took a “lift and shift” approach to containerization. It extracted the workloads from the virtual machine (while leaving out the operating system kernel and VM-related components) and converted them into stateful containers. It also added a runtime layer that integrated the workloads with Kubernetes storage, networking and monitoring.
With this new release, Migrate for Anthos dissects the contents of a VM and generates a suggested breakdown of its content into image and data components. These can be reviewed and tested, and the process generates all the artifacts you need for container image-based management: a Docker image, a Dockerfile, deployment YAMLs and a consolidated data volume, which can be any type of Kubernetes-supported storage. The modernization process itself is elegantly orchestrated by Kubernetes building blocks (CRDs, CLI) and mechanisms, as described in this video and diagram.
This image-based approach allows you to harness modern CI/CD pipeline tools to build, test, and deploy applications, as well as leverage Kubernetes for consistent and efficient deployment and rollout of new images across your Kubernetes deployments, including clusters, multi-clusters and multiple clouds.
In addition to enabling a modern developer experience, the image-based solution unlocks the power of the Kubernetes control plane and its declarative API for further operational efficiencies. For instance, for application components that are stateless in nature, you can implement load balancing, dynamic scaling and self-healing without having to rewrite the application. Migrate for Anthos is now also tightly integrated with Anthos Service Mesh, bringing the benefits of enhanced observability, security, and automated network policy management to legacy applications—again, without changing the application code.
The containerization technology in Migrate for Anthos 1.3 is GA for Anthos on Google Cloud. But for organizations that want to modernize their workloads to Anthos and aren’t ready to move those workloads to Google Cloud yet, Migrate for Anthos 1.3 also includes a preview that supports Anthos GKE running on-prem.
One of our partners, Arctiq, is actively using Migrate for Anthos and says it is helping them transform their customers’ operations:
“Migrate for Anthos is a uniquely powerful way to modernize your existing virtual machines into containers running in Google Kubernetes Engine,” said Kyle Bassett, Partner at Arctiq. “Traditionally, converting these VMs into containers was laborious and required a deep knowledge of Kubernetes, so most customers just left their VMs alone. But with Migrate for Anthos, you can extract workloads out of VMs and get them running in containers using a more automated and reliable workflow. Leveraging Migrate for Anthos, Arctiq is able to help our customers increase their workloads’ performance while reducing their infrastructure and management costs.”
Automated containerization for Windows servers
Earlier this year, we announced that you can now run Windows Server containers on GKE. However, because this is still an emerging technology, there aren’t many native Windows containers yet, and manually containerizing a Windows application can be challenging. With Migrate for Anthos, you can now convert legacy Windows server apps into Windows Server 2019 containers and run them on GKE in Google Cloud. This includes Windows 2008 R2, which recently reached end of support from Microsoft.
This feature is available in preview and includes fully automated discovery and assessment tooling. It lets you automatically convert IIS and ASP.NET-based apps running on Google Compute Engine VMs, which helps you reduce infrastructure and licensing costs. For IIS and ASP.NET apps that run on-premises or on other clouds, you can first use Migrate for Compute Engine to move them into Compute Engine VMs, then use Migrate for Anthos to convert them into containers. Support for non-IIS and non-ASP.NET apps is forthcoming.
Another alternative is to migrate only parts of an application stack to Windows containers. That way, elements that can’t easily be migrated to containers can run in Compute Engine VMs and still leverage VPC-level networking integration with containers on GKE.
Accelerate your modernization
Almost every customer we talk to tells us that they want to use more containers. Migrate for Anthos can help you accelerate that process by reducing the time and effort required by alternative approaches. If you’re interested in participating in these or upcoming Migrate for Anthos previews, please fill out this form and mention “Migrate for Anthos” in the ‘Your Project’ field.
Source: Google Cloud Platform
Vodafone's old television frequencies penetrate walls particularly well. But the 5G data rate is low. (5G, Vodafone)
Source: Golem
The new Motorola smartphone also has a display that curves sharply around the edges. (Motorola, Smartphone)
Source: Golem
In business as in life, change is constant and unpredictable. When building the platforms to power your organization, you can’t be limited by yesterday’s technology decisions. Nor can the systems you create today constrain your ability to act tomorrow. In times of uncertainty, you need an architecture that gives you the agility and flexibility to weather change—or even take advantage of it.
Since first announcing Anthos, our multi-cloud and hybrid application platform, just under two years ago, we’ve been continuously delivering new capabilities to help organizations of all sizes develop, deploy, and manage applications more quickly and flexibly. Today, we are expanding Anthos to support more kinds of workloads, in more kinds of environments, in many more locations. With these announcements, we look forward to helping you build applications that can thrive in any environment.
“When you’ve been around as long as KeyBank has – nearly 200 years – we know a thing or two about keeping up with the pace of change,” said Keith Silvestri, CTO, KeyBank. “Anthos is a true differentiator for us in terms of releases and a cornerstone of our agile methodology. With our ability to flex between on-prem and public clouds, our team can now spend less time managing the complex tasks of using multiple clouds and focus on ways we can serve our clients today.”
More clouds, more options
Enterprises know they need the cloud to help drive cost efficiency and digital transformation. Last year, we announced our multi-cloud vision and previewed Anthos running and managing applications on AWS. Today, we are excited to announce that Anthos support for multi-cloud is generally available. Now, you can consolidate all your operations across on-premises, Google Cloud, and other clouds, starting with AWS (support for Microsoft Azure is currently in preview).
The flexibility to run applications where you need them without added complexity has been a key factor for customers choosing Anthos—many want to keep using their existing investments both on-premises and in other clouds, and a common management layer helps their teams deliver quality services with low overhead. Often our customers value this flexibility for letting their teams work across platforms and for the freedom from lock-in it provides.
One such early adopter is Plaid, a Japanese tech company providing real-time visibility into user activity online. Plaid’s customers rely on its always-available analytics service to make changes in real time and continuously improve the user experience.
“At Plaid we provide real-time data analysis of over 6.8 billion online users. Our customers rely on us to be always available, and as a result we have very high reliability requirements,” said Naohiko Takemura, Head of Engineering, PLAID Inc. “We pursued a multi-cloud strategy to ensure redundancy for our critical KARTE service. Google Cloud’s Anthos works seamlessly across GCP and our other cloud providers, preventing any business disruption. Thanks to Anthos, we prevent vendor lock-in, avoid managing cloud-specific infrastructure, and our developers are not constrained by cloud providers.”
Indeed, analysts say that adopting multi-cloud can be a particularly valuable strategy in times of uncertainty. “In times of disruption, the effective use of and easy access to innovative, yet resilient, technology anywhere and everywhere is critical,” said Richard Villars, Vice President, Datacenter & Cloud, IDC. “While the initial goal may be to achieve short-term cost savings, the long-term benefits of aligning technology adoption and IT operational governance with business outcomes will ultimately ensure ongoing success. Solutions like Google’s Anthos enable the cost-effective extension of cloud capabilities across on-premises and cloud-based resources while also enabling organizations to tap into the new developer services that they’ll need to continue innovating in their businesses.”
One management experience for all your applications
Whether your organization is a born-in-the-cloud digital native or a traditional enterprise, it can be hard to manage workloads consistently and at scale. This is especially true for traditional enterprises with lots of legacy workloads. With this latest release, we are making diverse environments easier to manage than ever before, with deeper support for virtual machines that lets you extend Anthos’ management framework to the types of workloads that make up the vast majority of existing systems. Specifically, Anthos now lets you manage two of the most complex pieces of traditional workloads:
Policy and configuration management – With Anthos Config Management, you can now use a programmatic and declarative approach to manage policies for your VMs on Google Cloud just as you do for your containers. This reduces the likelihood of configuration errors due to manual intervention while speeding up time to delivery. Meanwhile, the platform ensures your applications are running in the desired state at all times.
Managing services on heterogeneous deployments – Over the coming months, Anthos Service Mesh will also include support for applications running in virtual machines, letting you consistently manage security and policy across different workloads in Google Cloud, on-premises, and in other clouds.
These are just two examples of how Anthos can help you reduce the risk and complexity of managing traditional workloads. Stay tuned in the coming months as we discuss other ways you can use Anthos as a single management framework for your virtual machines and cloud environments.
Driving efficiency with Anthos
In addition to deployment flexibility, Anthos can also help you drive costs and inefficiency out of your environment. Later this year you’ll be able to run Anthos with no third-party hypervisor, delivering even better performance, further reducing costs, and eliminating the management overhead of yet another vendor relationship. This is also great for demanding workloads that require bare metal for performance or regulatory reasons. Bare metal also powers Anthos at the edge, letting you deploy workloads beyond your data center and public cloud environments to wherever you need them. Whether it’s a retail store, a branch office, or even a remote site, Anthos can help you bring your applications closer to your end users for optimal performance.
Finally, we are further simplifying application modernization with Migrate for Anthos, which lets you reduce costs and improve performance without having to rearchitect or replatform your workloads manually. With this latest release, you can simplify day-two operations and integrate migrated workloads with other Anthos services. You can learn more here.
Building the future
This is a time of great uncertainty. Enterprises need an application platform that embraces the technology choices they’ve already made and gives them the flexibility they need to adapt to what comes next. Google Cloud and our partners are here to help you with your journey.
“No customer I’ve ever talked to said ‘give me less flexibility.’ Being able to run Anthos on AWS gives customers even more options for designing a platform that’s right for their needs—especially in difficult times,” said Miles Ward, CTO, SADA. “No matter if you’re focused on keeping up with increasing demand, leveraging existing investments or getting closer to customers to reach them in new ways, this is a great step forward for the intercloud.”
Whether you run your workloads in Google Cloud, on-prem, or in third-party cloud providers, Anthos provides a consistent platform on which your teams can build great applications that can thrive in a changing environment. You can learn more about how Anthos has been helping our customers gain flexibility while making a positive economic impact through application modernization here.
Source: Google Cloud Platform