containerd livestream recap

In case you missed it last month, we announced that Docker is extracting a key component of its platform, a part of the engine plumbing called containerd – a core container runtime – and committed to donating it to an open foundation.
You can find the up-to-date roadmap, architecture and API definitions in the GitHub repository, and more details about the project in our engineering team’s blog post.

You can also watch the following video recording of the containerd online meetup, for a summary and Q&A with Arnaud Porterie, Michael Crosby, Stephen Day, Patrick Chanezon and Solomon Hykes from the Docker team:

Here is the list of top questions we got following this announcement:
Q. Are you planning to run Docker without runC?
A. Although runC is the default runtime, as of Docker 1.12 it can be replaced by any other OCI-compliant implementation. Docker will remain compliant with the OCI Runtime Specification.
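As a sketch of how that swap looks in practice (the runtime name and binary path here are hypothetical), an alternative OCI runtime can be registered with the daemon and then selected per container:
# Register an alternative OCI-compliant runtime with the daemon (path is an assumption)
dockerd --add-runtime my-runtime=/usr/local/bin/my-oci-runtime
# Run a container with that runtime instead of the default runC
docker run --rm --runtime=my-runtime alpine echo "hello from an alternative runtime"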
Q. What major changes are on the roadmap for SwarmKit to run on containerd, if any?
A. SwarmKit is using Docker Engine to orchestrate tasks, and Docker Engine is already using containerd for container execution. So technically, you are already using containerd when using SwarmKit. There is no plan currently to have SwarmKit directly orchestrate containerd containers though.
Q. Mind sharing why you went with GRPC for the API?
A. containerd is a component designed to be embedded in a higher level system, and serve a host local API over a socket. GRPC enables us to focus on designing RPC calls and data structures instead of having to deal with JSON serialization and HTTP error codes. This improves iteration speed when designing the API and data structures. For higher level systems that embed containerd, such as Docker or Kubernetes, a JSON/HTTP API makes more sense, allowing easier integration. The Docker API will not change, and will continue to be based on JSON/HTTP.
Q. How do you expect to see others leverage containerd outside of Docker?
A. Cloud managed container services such as Amazon ECS, Microsoft ACS, Google Container Engine, or orchestration tools such as Kubernetes or Mesos can leverage containerd as their core container runtime. containerd has been designed to be embedded for that purpose.
Q. How did you decide which features should get into containerd? How did you come up with the scope of the future containerd?
A. We’re trying to capture in containerd the features that any container-centric platform would need, and for which there’s reasonable consensus on the way it should be implemented. Aspects which are either not widely agreed on or that can trivially be built one layer up were left out.
Q. How will containerd integrate with CNI and CNM?
A. Phase 3 of the containerd roadmap involves porting the network drivers from libnetwork and finding a good middle ground between the CNM abstraction of libnetwork and the CNI spec.
Additional Resources:

Contribute to containerd
Join the containerd slack channel
Read the engineering team’s blog post.



DockerCon 2017: Call For Papers FAQ

It’s a new year, and we are looking for new stories of how you are using technology to do big things. Submit your cool hack, use case or deep dive sessions before the 2017 CFP closes on January 14th.

To help with your submissions, we’ve answered the most frequent questions below and put together a list of tips to help get your proposal selected.
Q. How do I submit a proposal?
A. Submit your proposal here.
Q. What kind of talks are you looking for?
A. This year, we are looking for cool hacks, user stories and deep dive submissions:

Cool Hacks: Show us your cool hack and wow us with the interesting ways you can push the boundaries of the Docker stack. You do not have to have your hack ready by the submission deadline, just clearly explain your hack, what makes it cool and the technologies you will use.

Using Docker: Tell us first-hand about your Docker usage, challenges and what you learned along the way and inspire us on how to use Docker to accomplish real tasks.

Deep Dives: Propose code and demo heavy deep-dive sessions on what you have been able to transform with your use of the Docker stack. Entice your audience by going deeply technical and teach them how to do something they haven’t done.

Above all, DockerCon is a user conference and product and vendor pitches are not appropriate.
Q. What will I need to include in my submission?
A. Speaking proposals will ask for:

Title, the more catchy and descriptive, the better. But don’t be too cute.
Abstract describing the presentation. This is what gets shown in the agenda and how the audience decides if they want to attend your session.
Key Takeaways that communicate your session’s main idea and conclusion. This is your gift to the audience, what will they learn from your session and be able to apply when they get back to work the following week.
Speaker(s): expertise and summary biography
Suggested tags
Past Speaking examples
Recommendations of appropriate audience.

Q. How can I increase the odds of my proposal being selected?
A. Check out the following resources:

Read our tips to help get your proposal selected
See the list of sessions chosen for the 2016 DockerCon and DockerCon EU 2015 programs and read their descriptions
Watch videos from previous DockerCons
See speaker slides from previous DockerCons.

Q. How are submissions selected?
A. After a proposal is submitted, it will be reviewed initially for content and format. Once past the initial review, a committee will read the proposals and vote on best submissions. There are a limited number of speaking slots and we work to achieve a balance of presentations that will interest the Docker community.
Q. How will Speakers be compensated?
A. One speaker for every session will be given a full conference pass. Any additional speakers will be given a pass at the Early Bird rate.
Q. Will there be a Speaker room at the conference?
A. Yes, we will provide a Speaker Ready room for speakers to prepare for presentations, relax and mingle. Speakers should check in with the DockerCon 2017 speaker manager on the day of your talk in the Speaker Room and make sure you are all set for your talk.
Q. What are the important dates to remember?
A.

Call for Proposals Closes – January 14, 2017 at 11:59 PST
All proposers notified – Late February
Program announced – Late February
Submit your proposal – Today!



Top Docker content of 2016

2016 has been an amazing year for Docker and the container industry. We had three major releases of Docker Engine this year, and a tremendous increase in usage. The community has been following along and contributing amazing Docker resources to help you learn and get hands-on experience. Here is some of the top read and viewed content for the year:
Releases
Of course releases are always really popular, particularly when they fit requests we had from the community. In particular, we had:

Docker for Mac & Docker for Windows Beta and GA release blog posts, and the video

Docker 1.12 Built-in Orchestration release, and the DockerCon keynote where we announced it

And the release of the Docker for AWS and Azure beta

Windows Containers
When Microsoft made Windows 2016 generally available, people rushed to

Our release blog to read the news
Tutorials to find out how to use Windows containers powered by Docker
The commercial relationship blog post to understand how it all fits together

About Docker
We also provide a lot of information about how to use Docker. In particular, these posts and articles that we shared on social media were the most read:

Containers are Not VMs by Mike Coleman
9 Critical Decisions for Running Docker in Production by James Higginbotham
A Comparative Study of Docker Engine on Windows Server vs. Linux Platform by Docker Captain Ajeet Singh Raina
Our white paper – The Definitive Guide To Docker

How to Use Docker
Docker has a wide variety of use cases, so articles and videos about how to use it are really popular. In particular, when we share content from our users and Docker Captains, they get a lot of views:

Getting started with Docker 1.12 and Raspberry Pi by Docker Captain Alex Ellis
Docker: Making our bioinformatics easier and more reproducible by Jeremy Yoder
NGINX as a Reverse Proxy for Docker by Lorenzo Fontana
5 minute guide for getting Docker 1.12.1 running on your Raspberry Pi 3 by Docker Captain Ajeet Singh Raina
The Docker Cheat Sheet
Docker for Developers

Cgroups, namespaces, and beyond

Still hungry for more info? Here’s some more Docker resources:

Check out “Follow all the Captains in one shot with Docker” by Docker Captain Alex Ellis
Docker labs and tutorials on GitHub
Follow us on Twitter, Facebook or LinkedIn group
Join the Docker Community Directory and Slack
And of course, keep following this blog for more exciting info



Understanding Docker Networking Drivers and their use cases

Application requirements and networking environments are diverse and sometimes opposing forces. In between applications and the network sits Docker networking, affectionately called the Container Network Model, or CNM. It’s CNM that brokers connectivity for your Docker containers and also what abstracts away the diversity and complexity so common in networking. The result is portability, and it comes from CNM’s powerful network drivers. These are pluggable interfaces for the Docker Engine, Swarm, and UCP that provide special capabilities like multi-host networking, network layer encryption, and service discovery.
Naturally, the next question is: which network driver should I use? Each driver offers tradeoffs and has different advantages depending on the use case. There are built-in network drivers that come included with Docker Engine and there are also plug-in network drivers offered by networking vendors and the community. The most commonly used built-in network drivers are bridge, overlay and macvlan. Together they cover a very broad list of networking use cases and environments. For a more in-depth comparison and discussion of even more network drivers, check out the Docker Network Reference Architecture.
Bridge Network Driver
The bridge networking driver is the first driver on our list. It’s simple to understand, simple to use, and simple to troubleshoot, which makes it a good networking choice for developers and those new to Docker. The bridge driver creates a private network internal to the host so containers on this network can communicate. External access is granted by exposing ports to containers. Docker secures the network by managing rules that block connectivity between different Docker networks.
Behind the scenes, the Docker Engine creates the necessary Linux bridges, internal interfaces, iptables rules, and host routes to make this connectivity possible. In the example highlighted below, a Docker bridge network is created and two containers are attached to it. With no extra configuration the Docker Engine does the necessary wiring, provides service discovery for the containers, and configures security rules to prevent communication to other networks. A built-in IPAM driver provides the container interfaces with private IP addresses from the subnet of the bridge network.
In the following examples, we use a fictitious app called pets comprised of a web and db container. Feel free to try it out on your own UCP or Swarm cluster. Your app will be accessible on `<host-ip>:8000`.
docker network create -d bridge mybridge
docker run -d --net mybridge --name db redis
docker run -d --net mybridge -e DB=db -p 8000:5000 --name web chrch/web
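To see what the engine wired up, you can inspect the network; the attached containers and the IPAM subnet both appear in the output:
docker network inspect mybridge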
 
 
Our application is now being served on our host at port 8000. The Docker bridge is allowing web to communicate with db by its container name. The bridge driver does the service discovery for us automatically because they are on the same network. All of the port mappings, security rules, and pipework between Linux bridges is handled for us by the networking driver as containers are scheduled and rescheduled across a cluster.
The bridge driver is a local scope driver, which means it only provides service discovery, IPAM, and connectivity on a single host. Multi-host service discovery requires an external solution that can map containers to their host location. This is exactly what makes the overlay driver so great.
Overlay Network Driver
The built-in Docker overlay network driver radically simplifies many of the complexities in multi-host networking. It is a swarm scope driver, which means that it operates across an entire Swarm or UCP cluster rather than individual hosts. With the overlay driver, multi-host networks are first-class citizens inside Docker without external provisioning or components. IPAM, service discovery, multi-host connectivity, encryption, and load balancing are built right in. For control, the overlay driver uses the encrypted Swarm control plane to manage large scale clusters at low convergence times.
The overlay driver utilizes an industry-standard VXLAN data plane that decouples the container network from the underlying physical network (the underlay). This has the advantage of providing maximum portability across various cloud and on-premises networks. Network policy, visibility, and security is controlled centrally through the Docker Universal Control Plane (UCP).

In this example we create an overlay network in UCP so we can connect our web and db containers when they are living on different hosts. Native DNS-based service discovery for services & containers within an overlay network will ensure that web can resolve to db and vice-versa. We turned on encryption so that communication between our containers is secure by default.  Furthermore, visibility and use of the network in UCP is restricted by the permissions label we use.
UCP will schedule services across the cluster and UCP will dynamically program the overlay network to provide connectivity to the containers wherever they are. When services are backed by multiple containers, VIP-based load balancing will distribute traffic across all of the containers.
Feel free to run this example against your UCP cluster with the following CLI commands:
docker network create -d overlay --opt encrypted pets-overlay
docker service create --network pets-overlay --name db redis
docker service create --network pets-overlay -p 8000:5000 -e DB=db --name web chrch/web
 
In this example we are still serving our web app on port 8000 but now we have deployed our application across different hosts. If we wanted to scale our web containers, Swarm & UCP networking would load balance the traffic for us automatically.
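For example, scaling the web service from the commands above is a single command, and you can watch the replicas spread across the nodes:
docker service scale web=3
docker service ps web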
The overlay driver is a feature-rich driver that handles much of the complexity and integration that organizations struggle with when crafting piecemeal solutions. It provides an out-of-the-box solution for many networking challenges and does so at scale.
MACVLAN Driver
The macvlan driver is the newest built-in network driver and offers several unique characteristics. It’s a very lightweight driver, because rather than using any Linux bridging or port mapping, it connects container interfaces directly to host interfaces. Containers are addressed with routable IP addresses that are on the subnet of the external network.
As a result of routable IP addresses, containers communicate directly with resources that exist outside a Swarm cluster without the use of NAT and port mapping. This can aid in network visibility and troubleshooting. Additionally, the direct traffic path between containers and the host interface helps reduce latency. macvlan is a local scope network driver which is configured per-host. As a result, there are stricter dependencies between MACVLAN and external networks, which is both a constraint and an advantage that is different from overlay or bridge.
The macvlan driver uses the concept of a parent interface. This interface can be a host interface such as eth0, a sub-interface, or even a bonded host adaptor which bundles Ethernet interfaces into a single logical interface. A gateway address from the external network is required during MACVLAN network configuration, as a MACVLAN network is a L2 segment from the container to the network gateway. Like all Docker networks, MACVLAN networks are segmented from each other – providing access within a network, but not between networks.
The macvlan driver can be configured in different ways to achieve different results. In the below example we create two MACVLAN networks joined to different subinterfaces. This type of configuration can be used to extend multiple L2 VLANs through the host interface directly to containers. The VLAN default gateway exists in the external network.
 
The db and web containers are connected to different MACVLAN networks in this example. Each container resides on its respective external network with an external IP provided from that network. Using this design an operator can control network policy outside of the host and segment containers at L2. The containers could have also been placed in the same VLAN by configuring them on the same MACVLAN network. This just shows the amount of flexibility offered by each network driver.
Portability and choice are important tenets in the Docker philosophy. The Docker Container Network Model provides an open interface for vendors and the community to build network drivers. The complementary evolution of Docker and SDN technologies is providing more options and capabilities every day.


Happy Networking!
More Resources:

Check out the latest Docker Datacenter networking updates
Read the latest RA: Docker UCP Service Discovery and Load Balancing
See What’s New in Docker Datacenter
Sign up for a free 30 day trial


Docker & Prometheus Joint Holiday Meetup Recap

Last Wednesday we had our 52nd meetup at Docker HQ, but this time we joined forces with the Prometheus user group to host a mega-meetup! There was a great turnout and members were excited to see the talks on using Docker with Prometheus, OpenTracing and the new Docker playground, play-with-docker.
First up was Stephen Day, a Senior Software Engineer at Docker, who presented a talk entitled ‘The History of Metrics According to Me’. Stephen believes that metrics and monitoring should be built into every piece of software we create, from the ground up. By solving the hard parts of application metrics in Docker, he thinks it becomes more likely that metrics are a part of your services from the start. See the video of his intriguing talk and slides below.

‘The History of Metrics According to Me’ by Stephen Day from Docker, Inc.

‘The History of Metrics According to Me’ @stevvooe talking metrics and monitoring at the Docker SF meetup! @prometheusIO @CloudNativeFdn pic.twitter.com/6hk0yAtats
— Docker (@docker) December 15, 2016

Next up was Ben Sigelman, an expert in distributed tracing, whose talk ‘OpenTracing Isn’t Just Tracing: Measure Twice, Instrument Once’ was both informative and humorous. He began by describing OpenTracing and explaining why anyone who monitors microservices should care about it. He then stepped back to examine the historical role of operational logging and metrics in distributed system monitoring and illustrated how the OpenTracing API maps to these tried-and-true abstractions. To find out more and see his demo involving donuts, watch the video and slides below.

Last but certainly not least were two of our amazing Docker Captains all the way from Buenos Aires, Marcos Nils and Jonathan Leibiusky! During the Docker Distributed Systems Summit in Berlin last October, they built ‘play-with-docker’. It is a Docker playground which gives you the experience of having a free Alpine Linux Virtual Machine in the cloud where you can build and run Docker containers and even create clusters with Docker features like Swarm Mode. Under the hood DIND, or Docker-in-Docker, is used to give the effect of multiple VMs/PCs. Watch the video below to see how they built it and hear all about the new features.

@marcosnils & @xetorthio sharing at the Docker HQ meetup all the way from Buenos Aires! pic.twitter.com/kXqOZgClMz
— Docker (@docker) December 15, 2016

play-with-docker was a hit with the audience and there was a line of attendees hoping to speak to Marcos and Jonathan after their talk! All in all, it was a great night thanks to our amazing speakers, Docker meetup members, the Prometheus user group and the CNCF, who sponsored drinks and snacks.



Convert ASP.NET Web Servers to Docker with Image2Docker

A major update to Image2Docker was released last week, which adds ASP.NET support to the tool. Now you can take a virtualized web server in Hyper-V and extract a Docker image for each website in the VM – including ASP.NET WebForms, MVC and WebApi apps.

Image2Docker is a PowerShell module which extracts applications from a Windows Virtual Machine image into a Dockerfile. You can use it as a first pass to take workloads from existing servers and move them to Docker containers on Windows.
The tool was first released in September 2016, and we’ve had some great work on it from PowerShell gurus like Docker Captain Trevor Sullivan and Microsoft MVP Ryan Yates. The latest version has enhanced functionality for inspecting IIS – you can now extract ASP.NET websites straight into Dockerfiles.
In Brief
If you have a Virtual Machine disk image (VHD, VHDX or WIM), you can extract all the IIS websites from it by installing Image2Docker and running ConvertTo-Dockerfile like this:
Install-Module Image2Docker
Import-Module Image2Docker
ConvertTo-Dockerfile -ImagePath C:\win-2016-iis.vhd -Artifact IIS -OutputPath c:\i2d2\iis
That will produce a Dockerfile which you can build into a Windows container image, using docker build.
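For example, using the output path from the command above (the image tag is just a suggestion):
docker build -t i2d2/iis c:\i2d2\iis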
How It Works
The Image2Docker tool (also called ‘I2D2’) works offline; you don’t need to have a running VM to connect to. It inspects a Virtual Machine disk image – in Hyper-V VHD or VHDX format, or Windows Imaging WIM format. It looks at the disk for known artifacts, compiles a list of all the artifacts installed on the VM and generates a Dockerfile to package the artifacts.
The Dockerfile uses the microsoft/windowsservercore base image and installs all the artifacts the tool found on the VM disk. The artifacts which Image2Docker scans for are:

IIS & ASP.NET apps
MSMQ
DNS
DHCP
Apache
SQL Server

Some artifacts are more feature-complete than others. Right now (as of version 1.7.1) the IIS artifact is the most complete, so you can use Image2Docker to extract Docker images from your Hyper-V web servers.
Installation
I2D2 is on the PowerShell Gallery, so to use the latest stable version just install and import the module:
Install-Module Image2Docker
Import-Module Image2Docker
If you don’t have the prerequisites to install from the gallery, PowerShell will prompt you to install them.
Alternatively, if you want to use the latest source code (and hopefully contribute to the project), then you need to install the dependencies:
Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201
Install-Module -Name Pester,PSScriptAnalyzer,PowerShellGet
Then you can clone the repo and import the module from local source:
mkdir docker
cd docker
git clone https://github.com/sixeyed/communitytools-image2docker-win.git
cd communitytools-image2docker-win
Import-Module .\Image2Docker.psm1
Running Image2Docker
The module contains one cmdlet that does the extraction: ConvertTo-Dockerfile. The help text gives you all the details about the parameters, but here are the main ones:

ImagePath – path to the VHD | VHDX | WIM file to use as the source
Artifact – specify one artifact to inspect, otherwise all known artifacts are used
ArtifactParam – supply a parameter to the artifact inspector, e.g. for IIS you can specify a single website
OutputPath – location to store the generated Dockerfile and associated artifacts

You can also run in Verbose mode to have Image2Docker tell you what it finds, and how it’s building the Dockerfile.
Walkthrough – Extracting All IIS Websites
This is a Windows Server 2016 VM with five websites configured in IIS, all using different ports:

Image2Docker also supports Windows Server 2012, with support for 2008 and 2003 on its way. The websites on this VM are a mixture of technologies – ASP.NET WebForms, ASP.NET MVC, ASP.NET WebApi, together with a static HTML website.
I took a copy of the VHD, and ran Image2Docker to generate a Dockerfile for all the IIS websites:
ConvertTo-Dockerfile -ImagePath C:\i2d2\win-2016-iis.vhd -Artifact IIS -Verbose -OutputPath c:\i2d2\iis
In verbose mode there’s a whole lot of output, but here are some of the key lines – where Image2Docker has found IIS and ASP.NET, and is extracting website details:
VERBOSE: IIS service is present on the system
VERBOSE: ASP.NET is present on the system
VERBOSE: Finished discovering IIS artifact
VERBOSE: Generating Dockerfile based on discovered artifacts in: C:\Users\elton\AppData\Local\Temp\865115-6dbb-40e8-b88a-c0142922d954-mount
VERBOSE: Generating result for IIS component
VERBOSE: Copying IIS configuration files
VERBOSE: Writing instruction to install IIS
VERBOSE: Writing instruction to install ASP.NET
VERBOSE: Copying website files from C:\Users\elton\AppData\Local\Temp\865115-6dbb-40e8-b88a-c0142922d954-mount\websites\aspnet-mvc to C:\i2d2\iis
VERBOSE: Writing instruction to copy files for aspnet-mvc site
VERBOSE: Writing instruction to create site aspnet-mvc
VERBOSE: Writing instruction to expose port for site aspnet-mvc
When it completes, the cmdlet generates a Dockerfile which turns that web server into a Docker image. The Dockerfile has instructions to install IIS and ASP.NET, copy in the website content, and create the sites in IIS.
Here’s a snippet of the Dockerfile – if you’re not familiar with Dockerfile syntax but you know some PowerShell, then it should be pretty clear what’s happening:
# Install Windows features for IIS
RUN Add-WindowsFeature Web-server, NET-Framework-45-ASPNET, Web-Asp-Net45
RUN Enable-WindowsOptionalFeature -Online -FeatureName IIS-ApplicationDevelopment,IIS-ASPNET45,IIS-BasicAuthentication…

# Set up website: aspnet-mvc
COPY aspnet-mvc /websites/aspnet-mvc
RUN New-Website -Name 'aspnet-mvc' -PhysicalPath "C:\websites\aspnet-mvc" -Port 8081 -Force
EXPOSE 8081
# Set up website: aspnet-webapi
COPY aspnet-webapi /websites/aspnet-webapi
RUN New-Website -Name 'aspnet-webapi' -PhysicalPath "C:\websites\aspnet-webapi" -Port 8082 -Force
EXPOSE 8082
You can build that Dockerfile into a Docker image, run a container from the image and you’ll have all five websites running in a Docker container on Windows. But that’s not the best use of Docker.
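If you did want to try it, a minimal sketch would be (the all-sites tag is illustrative, and only the two ports from the snippet above are published):
docker build -t i2d2/all-sites c:\i2d2\iis
docker run -d -p 8081:8081 -p 8082:8082 i2d2/all-sites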
When you run applications in containers, each container should have a single responsibility – that makes it easier to deploy, manage, scale and upgrade your applications independently. Image2Docker supports that approach too.
Walkthrough – Extracting a Single IIS Website
The IIS artifact in Image2Docker uses the ArtifactParam flag to specify a single IIS website to extract into a Dockerfile. That gives us a much better way to extract a workload from a VM into a Docker Image:
ConvertTo-Dockerfile -ImagePath C:\i2d2\win-2016-iis.vhd -Artifact IIS -ArtifactParam aspnet-webforms -Verbose -OutputPath c:\i2d2\aspnet-webforms
That produces a much neater Dockerfile, with instructions to set up a single website:
# escape=`
FROM microsoft/windowsservercore
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';"]

# Wait-Service is a tool from Microsoft for monitoring a Windows Service
ADD https://raw.githubusercontent.com/Microsoft/Virtualization-Documentation/live/windows-server-container-tools/Wait-Service/Wait-Service.ps1 /

# Install Windows features for IIS
RUN Add-WindowsFeature Web-server, NET-Framework-45-ASPNET, Web-Asp-Net45
RUN Enable-WindowsOptionalFeature -Online -FeatureName IIS-ApplicationDevelopment,IIS-ASPNET45,IIS-BasicAuthentication,IIS-CommonHttpFeatures,IIS-DefaultDocument,IIS-DirectoryBrowsing

# Set up website: aspnet-webforms
COPY aspnet-webforms /websites/aspnet-webforms
RUN New-Website -Name 'aspnet-webforms' -PhysicalPath "C:\websites\aspnet-webforms" -Port 8083 -Force
EXPOSE 8083

CMD /Wait-Service.ps1 -ServiceName W3SVC -AllowServiceRestart
Note – I2D2 checks which optional IIS features are installed on the VM and includes them all in the generated Dockerfile. You can use the Dockerfile as-is to build an image, or you can review it and remove any features your site doesn’t need, which may have been installed in the VM but aren’t used.
To build that Dockerfile into an image, run:
docker build -t i2d2/aspnet-webforms .
When the build completes, I can run a container to start my ASP.NET WebForms site. I know the site uses a non-standard port, but I don’t need to hunt through the app documentation to find out which one; it’s right there in the Dockerfile: EXPOSE 8083.
This command runs a container in the background, exposes the app port, and stores the ID of the container:
$id = docker run -d -p 8083:8083 i2d2/aspnet-webforms
When the site starts, you’ll see in the container logs that the IIS Service (W3SVC) is running:
> docker logs $id
The Service 'W3SVC' is in the 'Running' state.
Now you can browse to the site running in IIS in the container, but because published ports on Windows containers don’t do loopback yet, if you’re on the machine running the Docker container, you need to use the container’s IP address:
$ip = docker inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' $id
start "http://$($ip):8083"
That will launch your browser and you’ll see your ASP.NET Web Forms application running in IIS, in Windows Server Core, in a Docker container:

Converting Each Website to Docker
You can extract all the websites from a VM into their own Dockerfiles and build images for them all, by following the same process – or scripting it – using the website name as the ArtifactParam:
$websites = @("aspnet-mvc", "aspnet-webapi", "aspnet-webforms", "static")
foreach ($website in $websites) {
    ConvertTo-Dockerfile -ImagePath C:\i2d2\win-2016-iis.vhd -Artifact IIS -ArtifactParam $website -Verbose -OutputPath "c:\i2d2\$website" -Force
    cd "c:\i2d2\$website"
    docker build -t "i2d2/$website" .
}
Note. The Force parameter tells Image2Docker to overwrite the contents of the output path, if the directory already exists.
If you run that script, you’ll see from the second image onwards that the docker build commands run much more quickly. That’s because of how Docker images are built from layers. Each Dockerfile starts with the same instructions to install IIS and ASP.NET, so once those instructions are built into image layers, the layers get cached and reused.
When the builds finish I have four i2d2 Docker images:
> docker images
REPOSITORY                                    TAG                 IMAGE ID            CREATED              SIZE
i2d2/static                                   latest              cd014b51da19        7 seconds ago        9.93 GB
i2d2/aspnet-webapi                            latest              1215366cc47d        About a minute ago   9.94 GB
i2d2/aspnet-mvc                               latest              0f886c27c93d        3 minutes ago        9.94 GB
i2d2/aspnet-webforms                          latest              bd691e57a537        47 minutes ago       9.94 GB
microsoft/windowsservercore                   latest              f49a4ea104f1        5 weeks ago          9.2 GB
Each of my images has a size of about 10GB but that’s the virtual image size, which doesn’t account for cached layers. The microsoft/windowsservercore image is 9.2GB, and the i2d2 images all share the layers which install IIS and ASP.NET (which you can see by checking the image with docker history).
The physical storage for all five images (four websites and the Windows base image) is actually around 10.5GB. The original VM was 14GB. If you split each website into its own VM, you’d be looking at over 50GB of storage, with disk files which take a long time to ship.
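You can verify the layer sharing yourself; the lower layers are identical across the i2d2 images (output elided here):
docker history i2d2/aspnet-mvc
docker history i2d2/aspnet-webapi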
The Benefits of Dockerized IIS Applications
With our Dockerized websites we get increased isolation with a much lower storage cost. But that’s not the main attraction – what we have here is a set of deployable packages that each encapsulate a single workload.
You can run a container on a Docker host from one of those images, and the website will start up and be ready to serve requests in seconds. You could have a Docker Swarm with several Windows hosts, and create a service from a website image which you can scale up or down across many nodes in seconds.
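As a sketch of what that could look like (the service name and replica count are illustrative):
docker service create --name webforms --replicas 3 -p 8083:8083 i2d2/aspnet-webforms
docker service scale webforms=5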
And you have different web applications which all have the same shape, so you can manage them in the same way. You can build new versions of the apps into images which you can store in a Windows registry, so you can run an instance of any version of any app. And when Docker Datacenter comes to Windows, you’ll be able to secure the management of those web applications and any other Dockerized apps with role-based access control, and content trust.
Next Steps
Image2Docker is a new tool with a lot of potential. So far the work has been focused on IIS and ASP.NET, and the current version does a good job of extracting websites from VM disks to Docker images. For many deployments, I2D2 will give you a working Dockerfile that you can use to build an image and start working with Docker on Windows straight away.
We’d love to get your feedback on the tool – submit an issue on GitHub if you find a problem, or if you have ideas for enhancements. And of course it’s open source so you can contribute too.
Additional Resources

Image2Docker: A New Tool For Prototyping Windows VM Conversions
Containerize Windows Workloads With Image2Docker
Run IIS + ASP.NET on Windows 10 with Docker
Awesome Docker – Where to Start on Windows



Docker for Azure Public Beta

Last week Docker for AWS went public beta, and today Docker for Azure reached the same milestone and is ready for public testing. Docker for Azure is a great way for ops to set up and maintain secure and scalable Docker deployments on Azure.

With Docker for Azure, IT ops teams can:

Deploy a standard Docker platform to ensure teams can seamlessly move apps from developer laptops to Dockerized staging and production environments, without risk of incompatibilities or lock-in.
Integrate deeply with underlying infrastructure to ensure Docker takes advantage of the host environment’s native capabilities and exposes a familiar interface to administrators.
Deploy the platform to all the places where you want to run Dockerized apps, simply and efficiently
Make sure the latest and greatest Docker versions are available for the hardware, OSs, and infrastructure you love, and provide solid upgrade paths from one Docker version to the next.

To try the latest Docker for Azure beta based on the latest Docker Engine betas, click the button below or get more details on the beta site:

Installation takes a few minutes, and will give you a fully functioning swarm, ready to deploy and scale Dockerized apps.
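Once the deployment completes, one quick check is to SSH into a manager node (the docker user is the template default; the manager’s public IP comes from the Azure portal) and list the swarm nodes:
ssh docker@<manager-public-ip>
docker node ls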
We first unveiled the Docker for Azure private beta on stage at DockerCon 2016 back in June, and we are excited to be opening up the beta to the public. We received lots of great feedback from private beta testers (thanks!) and incorporated as much of it as possible. Enhancements added during the private beta include:

All container logs are stored in an Azure storage account for later retrieval and inspection. That means you no longer have to rummage around on hosts to find the error you’re looking for or worry that logs are lost if a worker is replaced.
Built-in diagnose tool lets you submit a swarm-wide diagnostic dump to Docker so that we can help diagnose and troubleshoot a misbehaving Docker for Azure swarm.
Improved upgrade stability so that you can confidently upgrade your Docker for Azure to the latest version

We’re particularly proud of the progress we’ve made on diagnostics and upgradability. These are features that set a true production system apart from simple fire-and-forget templates that just spin up resources without thought for debugging or future upgrades.
The improvements added during the private beta complement the initial features Docker for Azure launched with earlier this year:

Simple access and management using SSH
Quick and secure deployment of websites thanks to auto-provisioned and auto-configured load balancers
Secure and easy-to-manage Azure network and instance configuration

With today’s public beta announcement, we hope to get even more users interested in running Docker on Azure and testing the beta. Check out the detailed docs and sign up on beta.docker.com to be notified of updates and new beta versions.
Docker for AWS and Azure currently only support Linux-based swarms of managers and workers. Windows Server worker support will come as Docker on Windows Server matures. If you have questions or feedback, send an email or post to the Docker for AWS or the Docker for Azure forums.
Additional Resources

Take a short survey to provide feedback on your experience
Check out Docker for Windows Server 2016
Learn more: Docker and Microsoft solutions



Get all the Docker talks from Tech Field Day 12

As 2016 comes to a close, we are excited to have participated in a few of the Tech Field Day and inaugural Cloud Field Day events to share the Docker technology with the IT leaders and evangelists that Stephen Foskett and Tom Hollingsworth have cultivated into this fantastic group. The final event was Tech Field Day 12, hosted in Silicon Valley.
In case you missed the live stream, check out videos of the sessions here.
Session 1: Introduction to Docker and Docker Datacenter

Session 2: Securing the Software Supply Chain with Docker

Session 3: Docker for Windows Server and Windows Containers

Session 4: Docker for AWS and Azure

Session 5: Docker Networking Fabric

These are great overviews of the Docker technology applied to enterprise app pipelines, operations, and diverse operating systems and cloud environments. And most importantly, this was a great opportunity to meet some new people and get them excited about what we are excited about.

+1!!! Docker https://t.co/Zdsuw1emlo
— Alex Galbraith (@alexgalbraith) November 17, 2016

 
Visit the Tech Field Day site to watch more videos from previous events, read articles written by delegates or view the conversation online.


 
More Resources:

View On Demand: Sessions from previous events
Learn More about Docker
Try Docker Datacenter free for 30 days

 

Docker acquires Infinit: a new data layer for distributed applications

The short version: Docker acquired a fantastic company called Infinit. Using their technology, we will provide secure distributed storage out of the box, making it much easier to deploy stateful services and legacy enterprise applications on Docker. This will be delivered in a very open and modular design, so operators can easily integrate their existing storage systems, tune advanced settings, or simply disable the feature altogether. Oh, and we’re going to open-source the whole thing.
The slightly longer version:
At Docker we believe that tools should adapt to the people using them, not the other way around. So we spend a lot of time searching for the most exciting and powerful software technology out there, then integrating it into simple and powerful tools. That is how we discovered a small team of distributed systems engineers based out of Paris, who were working on a next-generation distributed filesystem called Infinit. From the very first demo two things were immediately clear. First, Infinit is an incredible piece of technology with the potential to change how applications consume and produce data. Second, the Infinit and Docker teams were almost comically similar: same obsession with decentralized systems, same empathy for the needs of both developers and operators, same taste for simple and modular designs.
Today we are pleased to announce that Infinit is joining the Docker family. We will use the Infinit technology to address one of the most frequent Docker feature requests: distributed storage that “just works” out of the box, and can integrate with existing storage systems.
Docker users have been driving us in this direction for two reasons. The first is that application portability across any infrastructure has been a central driver for Docker usage. As developers rapidly evolve from single container applications to multi-container applications deployed on a distributed system, they want to make sure their entire application is portable across any type of infrastructure, whether on cloud or on premise, including for the stateful services it may include. Infinit will address that by providing a portable distributed storage engine, in the same way that our SocketPlane acquisition provided a portable distributed overlay networking implementation for Docker.
The second driver has been the rapid adoption of Docker to containerize stateful enterprise applications, as opposed to next-generation stateless apps. Enterprises expect their container platform to have a point of view about persistent storage, but at the same time they want the flexibility of working with their existing vendors like HPE, EMC, Nutanix etc. Infinit addresses this need as well.
With all of our acquisitions, whether it was Conductant, which enabled us to scale powerful large-scale web operations stacks, or SocketPlane, we’ve focused on extending our core capabilities and providing users with modular building blocks to work with and expand. Docker is committed to open sourcing Infinit’s solution in 2017 and adding it to the ever-expanding list of infrastructure plumbing projects that Docker has made available to the community, such as InfraKit, SwarmKit and Notary.
For those who are interested in learning more about the technology, you can watch Infinit CTO Quentin Hocquet’s presentation at Docker Distributed Systems Summit last month, and we have scheduled an online meetup where the Infinit founders will walk through the architecture and do a demo of their solution. A key aspect of the Infinit architecture is that it is completely decentralized. At Docker we believe that decentralization is the only path to creating software systems capable of scaling at Internet scale. With the help of the Infinit team, you should expect more and more decentralized designs coming out of Docker engineering.
A few words from Infinit CEO and founder Julien Quintard:
“We are thrilled to join forces with Docker. Docker has changed the way developers work in order to gain in agility. Stateful applications are the natural next step in this evolution. This is where Infinit comes into play, providing the Docker community with a default storage platform for applications to reliably store their state, be it for a database, logs, a website’s media files and more.”
A few details about the Infinit architecture:

Infinit’s next-generation storage platform has been designed to be scalable and resilient while being highly customizable for container environments. The Infinit storage platform has the following characteristics:
– Software-based: can be deployed on any hardware from legacy appliances to commodity bare metal, virtual machines or even containers.
– Programmatic: developers can easily automate the creation and deployment of multiple storage infrastructures, each tailored to the overlying application’s needs through policy-based capabilities.
– Scalable: by relying on a decentralized architecture (i.e. peer-to-peer), Infinit does away with the leader/follower model, hence does not suffer from bottlenecks and single points of failure.
– Self-healing: Infinit’s rebalancing mechanism allows the system to adapt to various types of failures, including Byzantine.
– Multi-purpose: the Infinit platform provides interfaces for block, object and file storage: NFS, SMB, AWS S3, OpenStack Swift, iSCSI, FUSE etc.
 
Learn More

Sign up for the next Docker Online meetup on Docker and Infinit: Modern Storage Platform for Container Environments
Read about Docker and Infinit



Global Mentor Week: Thank you Docker Community!

Danke, рақмет сізге, tak, धन्यवाद, cảm ơn bạn, شكرا, mulțumesc, Gracias, merci, asante, ευχαριστώ, thank you community for an incredible Docker Global Mentor Week! From Tokyo to Sao Paulo, Kisumu to Copenhagen and Ottawa to Manila, it was so awesome to see the energy from the community coming together to celebrate and learn about Docker!

Over 7,500 people registered to attend one of the 110 mentor week events across 5 continents! A huge thank you to all the Docker meetup organizers who worked hard to make these special events happen and offer Docker beginners and intermediate users an opportunity to participate in Docker courses.
None of this would have been possible without the support (and expertise!) of the 500+ advanced Docker users who signed up as mentors to help newcomers.
Whether it was mentors helping attendees, newcomers pushing their first image to Docker Hub or attendees mingling and having a good time, everyone came together to make mentor week a success as you can see on social media and the Facebook photo album.
Here are some of our favorite tweets from the meetups:
 

@Docker #LearnDocker at Grenoble France 17Nov2016 @HPE_FR pic.twitter.com/8RSxXUWa4k
— Stephane Bureau (@SBUCloud) November 18, 2016

Awesome turnout at tonight’s @DockerNYC #learndocker event! We will be hosting more of these – Keep tabs on meetup: https://t.co/dT99EOs4C9 pic.twitter.com/9lZocCjMPb
— Luisa M. Morales (@luisamariethm) November 18, 2016

And finally… “Tada” Docker Mentor Week #learndocker pic.twitter.com/6kzedIoGyB
— Károly Kass (@karolykassjr) November 17, 2016

 
Learn Docker
In case you weren’t able to attend a local event, the five courses are now available to everyone online here: https://training.docker.com/instructor-led-training
Docker for Developers Courses
Developer – Beginner Linux Containers
This tutorial will guide you through the steps involved in setting up your computer, running your first containers, deploying a web application with Docker and running a multi-container voting app with Docker Compose.
Developer – Beginner Windows Containers
This tutorial will walk you through setting up your environment, running basic containers and creating a Docker Compose multi-container application using Windows containers.
Developer – Intermediate (both Linux and Windows)
This tutorial teaches you how to network your containers, how you can manage data inside and between your containers and how to use Docker Cloud to build your image from source and use developer tools and programming languages with Docker.
Docker for Operations courses
These courses are step-by-step guides where you will build your own Docker cluster, and use it to deploy a sample application. We have two options for you to create your own cluster.

Using play-with-docker

Play With Docker is a Docker playground that was built by two amazing Docker captains: Marcos Nils and Jonathan Leibiusky during the Docker Distributed Systems Summit in Berlin last October.
Play with Docker (aka PWD) gives you the experience of having a free Alpine Linux Virtual Machine in the cloud where you can build and run Docker containers and even create clusters with Docker features like Swarm Mode.
Under the hood DIND or Docker-in-Docker is used to give the effect of multiple VMs/PCs.
To get started, go to http://play-with-docker.com/ and click on ADD NEW INSTANCE five times. You will get five “docker-in-docker” containers, all on a private network. These are your five nodes for the workshop!
When the instructions in the slides tell you to “SSH on node X”, just go to the tab corresponding to that node.
The nodes are not directly reachable from outside; so when the slides tell you to “connect to the IP address of your node on port XYZ” you will have to use a different method.
We suggest using “supergrok”, a container offering an NGINX+ngrok combo to expose your services. To use it, just start (on any of your nodes) the jpetazzo/supergrok image. The image will output further instructions:
docker run --name supergrok -d jpetazzo/supergrok
docker logs --follow supergrok
The logs of the container will give you a tunnel address and explain how to connect to exposed services. That’s all you need to do!
You can also view this excellent video by Docker Brussels Meetup organizer Nils de Moor who walks you through the steps to build a Docker Swarm cluster in a matter of seconds through the new play-with-docker tool.

 
Note that the instances provided by Play-With-Docker have a short lifespan (a few hours only), so if you want to do the workshop over multiple sessions, you will have to start over each time… or create your own cluster with the option below.

Using Docker Machine to create your own cluster

This method requires a bit more work to get started, but you get a permanent cluster, with less limitations.
You will need Docker Machine (if you have Docker for Mac, Docker for Windows, or the Docker Toolbox, you’re all set already). You will also need:

credentials for a cloud provider (e.g. API keys or tokens),
or a local install of VirtualBox or VMware (or anything supported by Docker Machine).

Full instructions are in the prepare-machine subdirectory.
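As a rough sketch of that route (the VirtualBox driver and node names are assumptions; any Docker Machine driver will do):
# Create three VirtualBox VMs to use as cluster nodes
for N in 1 2 3; do docker-machine create -d virtualbox node$N; done
# Turn the first node into a swarm manager
docker-machine ssh node1 "docker swarm init --advertise-addr $(docker-machine ip node1)"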
Once you have decided which option to use to create your swarm cluster, you are ready to get started with one of the operations courses below:
Operations – Beginner
The beginner part of the Ops tutorial will teach you how to set up a swarm, how to use it to host your own registry, how to build your app container images and how to deploy and scale a distributed application called Dockercoins.
Operations – Intermediate
From global container scheduling, overlay networks troubleshooting, dealing with stateful services and node management, this tutorial will show you how to operate your swarm cluster at scale and take you on a swarm mode deep dive.

