CI Engineer

Mirantis is the leading global provider of software and services for OpenStack(TM), a massively scalable and feature-rich open source cloud operating system. OpenStack is used by hundreds of companies, including AT&T, Cisco, Symantec, NASA, Dell, PayPal and many more. Mirantis has more experience delivering OpenStack clouds to more customers than any other company in the world. We build the infrastructure that makes OpenStack work. We are proud to serve on the OpenStack Foundation Board and to be one of the top contributors to OpenStack.
Mirantis is looking for a qualified candidate with experience in continuous integration, release engineering, or quality assurance to join our CI Services team, which designs and implements CI/CD pipelines to build and test product artifacts and deliverables of the Mirantis OpenStack distribution.
Responsibilities:

design and implement CI/CD pipelines,
develop a unified CI framework based on existing tools (Zuul, Jenkins Job Builder, Fabric, Gerrit, etc.),
define and manage test environments required for different types of automated tests,
drive cross-team communications to streamline and unify build and test processes,
track and optimize hardware utilization by CI/CD pipelines,
provide and maintain specifications and documentation for CI systems,
provide support for users of CI systems (developers and QA engineers),
produce and deliver technical presentations at internal knowledge transfer sessions, public workshops and conferences,
participate in the upstream OpenStack community, working with the OpenStack Infra team on common CI/CD tools and processes.

Required Skills:

Linux system administration & package management, services administration, networking, KVM-based virtualization;
scripting with Bash and Python;
experience with the DevOps configuration management methodology and tools (Puppet, Ansible);
ability to describe and document systems design decisions;
familiarity with development workflows: feature design, release cycle, code-review practices;
English, both written and spoken.

Will Be a Plus:

knowledge of CI tools and frameworks (Jenkins, Buildbot, etc.);
release engineering experience: branching, versioning, managing security updates;
understanding of release engineering and QA practices of major Linux distributions;
experience in test design and automation;
experience in project management;
involvement in major Open Source communities (developer, package maintainer, etc.).

What We Offer:

challenging tasks, providing room for creativity and initiative,
work in a highly distributed international team,
work in the Open Source community, contributing patches upstream,
opportunities for career growth and relocation,
business trips for meetups and conferences, including OpenStack Summits,
a strong benefits plan,
medical insurance.
Source: Mirantis

The Dollars and Cents of How to Consume a Private Cloud

In my blog post "How does the world consume private clouds?", we reviewed different ways to consume private cloud software:

Do-it-yourself (DIY)
Software distribution from a vendor
Managed service (your hardware & datacenter, software managed by a vendor)
Managed & hosted service (hardware, software, datacenter all outsourced)

Let’s look at the economics of the first three alternatives. Rather than an absolute total-cost-of-ownership (TCO) analysis, we will focus on a relative comparison, removing line items that are identical in all three scenarios (e.g., hardware costs).
Of course, cost is not the only criterion in choosing your consumption model; there are others, such as the ability to recruit OpenStack talent, long-term strategic interests, required customizations, and so on, but those topics are not covered in this blog.
DIY
This initially appears to be a no-brainer option. After all, isn’t open source free software? Doesn’t one just download, install and be on their merry way? Unfortunately not – open source software provides numerous benefits, such as higher innovation velocity, the ability to influence direction and functionality, elimination of vendor lock-in, and short-circuiting standards by defining APIs, drivers and plugins. But “free” is not a benefit, mainly because open source projects are not finished products. Below are typical costs incurred in a DIY scenario, based on the numerous customers we have had the opportunity to work with who initially tried DIY OpenStack.

Cost: Fixed-size engineering team of 13 engineers (size independent of cloud scale)
Representative breakdown:
5 upstream engineers (to fix bugs, work on features, create the reference architecture)
5 QA engineers (to package, QA and do interop testing)
3 lifecycle tooling & monitoring engineers

Cost: Fixed-size IT/OPS team of 9 engineers (size independent of cloud scale)
Representative breakdown:
1 IT architect (to architect and do capacity planning)
1 L3 engineer (troubleshooting)
2 L2 engineers (to deploy, update, upgrade, and do ongoing management)
5 L1 engineers (to monitor, look at basic issues, respond to tenant requests)

Cost: Variable-size team of 1.1 people per 100 nodes and 1.1 people per 1 PB of storage (size depends on cloud scale; kicks in only past the fixed-size minimums, so there is no double counting)
Representative breakdown:
Compute:
0.3 IT/OPS architects per 100 nodes
0.1 L3 IT/OPS engineers per 100 nodes
0.3 L2 IT/OPS engineers per 100 nodes
0.4 L1 IT/OPS engineers per 100 nodes
Storage:
0.3 IT/OPS architects per 1 PB of storage
0.1 L3 IT/OPS engineers per 1 PB of storage
0.3 L2 IT/OPS engineers per 1 PB of storage
0.4 L1 IT/OPS engineers per 1 PB of storage

Cost: Dev/test cloud
Representative breakdown: $50,000, depreciated across 3 years, required to test updates, upgrades, configuration changes, etc.

Cost: Loss of availability
Representative breakdown: A DIY cloud typically has lower availability than the alternatives. Annual loss = (1 - availability) x 525,600 minutes per year x margin loss per minute of downtime. E.g., 98% cloud availability at a $50 loss per minute of downtime equates to a loss of $525,600 per year.

Cost: Production delays
Representative breakdown: A DIY cloud typically takes longer to implement, delaying the production deployment. E.g., 6 months of delay, with each month costing the business $50,000, equates to a one-time loss of $300,000.
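To make the arithmetic concrete, here is a minimal Python sketch of the representative DIY staffing and downtime math above. The team ratios, the $50-per-minute figure and the 98% availability come straight from the table; the simplification of ignoring the fixed-size minimum threshold for variable staff is mine:

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def diy_variable_team(nodes, storage_pb):
    """Variable DIY staffing: 1.1 people per 100 compute nodes plus 1.1 people
    per PB of storage (0.3 architect + 0.1 L3 + 0.3 L2 + 0.4 L1 for each),
    on top of the fixed teams of 13 engineering and 9 IT/OPS staff.
    Simplified: ignores the fixed-size minimums before variable staff kick in."""
    return 1.1 * (nodes / 100.0) + 1.1 * storage_pb

def downtime_loss(availability, loss_per_minute):
    """Annual margin loss caused by cloud downtime."""
    downtime_minutes = (1 - availability) * MINUTES_PER_YEAR
    return downtime_minutes * loss_per_minute

print(diy_variable_team(nodes=500, storage_pb=2))  # 7.7 extra staff
print(downtime_loss(0.98, 50))                     # 525600.0 per year, as in the table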

 
Software Distribution from a Vendor
In this consumption model, the engineering burden is shifted to the vendor, but the IT/OPS tasks remain with the user. The costs look as follows:

Cost: Fixed-size IT/OPS team of 3.5 engineers (size independent of cloud scale; the team is much smaller than in the DIY case because there is a vendor to take support calls)
Representative breakdown:
0.5 IT architect (to architect and do capacity planning)
1 L2 engineer (to deploy, update, upgrade, and do ongoing management)
2 L1 engineers (to monitor, look at basic issues, respond to tenant requests)

Cost: Variable-size team of 1 person per 100 nodes and 1 person per 1 PB of storage (size depends on cloud scale; kicks in only past the fixed-size minimums, so there is no double counting)
Representative breakdown:
Compute:
0.3 IT/OPS architects per 100 nodes
0.3 L2 IT/OPS engineers per 100 nodes
0.4 L1 IT/OPS engineers per 100 nodes
Storage:
0.3 IT/OPS architects per 1 PB of storage
0.3 L2 IT/OPS engineers per 1 PB of storage
0.4 L1 IT/OPS engineers per 1 PB of storage

Cost: Dev/test cloud
Representative breakdown: $50,000, depreciated across 3 years, required to test updates, upgrades, configuration changes, etc.

Cost: Loss of availability
Representative breakdown: A cloud based on a distro typically has better availability than DIY. Using the same formula as above, 99.5% cloud availability at a $50 loss per minute of downtime equates to a loss of $131,400 per year.

Cost: Software support costs
Representative breakdown: In lieu of the internal engineering team, in this scenario there is a support cost payable to the vendor.

 
Managed Service from a Vendor
Here both the engineering and the IT/OPS burden for the software are shifted to the vendor. The costs look as follows:

Cost: Loss of availability
Representative breakdown: A managed cloud typically offers the highest availability of the three options. Using the same formula as above, 99.9% cloud availability at a $50 loss per minute of downtime equates to a loss of $26,280 per year.

Cost: Managed services costs
Representative breakdown: In lieu of the internal engineering and IT/OPS teams, in this scenario there is a managed service fee payable to the vendor.

 
The Bottom Line
Here are the results of three scenarios we ran:

Relative Costs (4-year timeline)

Initial number of VMs      3,000      20,000     60,000
DIY cost/VM                $1,448     $249       $118
Distro cost/VM             $614       $179       $124
Managed cloud cost/VM      $298       $189       $149

The net-net is that for small clouds, managed is a very attractive option. For mid-size clouds, a distribution may be more cost effective. For the largest clouds, DIY might be the least expensive option, assuming the IT team can keep availability reasonably high (98.5% or better).
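To see the crossover in absolute terms, here is a small Python sketch that multiplies the per-VM figures from the table above by the number of VMs; the only assumption is that total cost is simply cost/VM times cloud size:

COST_PER_VM = {
    "DIY":     {3000: 1448, 20000: 249, 60000: 118},
    "Distro":  {3000: 614,  20000: 179, 60000: 124},
    "Managed": {3000: 298,  20000: 189, 60000: 149},
}

for vms in (3000, 20000, 60000):
    totals = {model: per_vm[vms] * vms for model, per_vm in COST_PER_VM.items()}
    cheapest = min(totals, key=totals.get)
    print("%d VMs -> cheapest option: %s" % (vms, cheapest))

# Prints Managed at 3,000 VMs, Distro at 20,000 and DIY at 60,000,
# matching the conclusion above.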
The post The Dollars and Cents of How to Consume a Private Cloud appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis

containerd – a core container runtime project for the industry

Today Docker is spinning out its core runtime functionality into a standalone component, incorporating it into a separate project called containerd, and will be donating it to a neutral foundation early next year. This is the latest chapter in a multi-year effort to break up the Docker platform into a more modular architecture of loosely coupled components.
Over the past 3 years, as Docker adoption skyrocketed, it grew into a complete platform to build, ship and run distributed applications, covering many functional areas from infrastructure to orchestration, the core container runtime being just a piece of it. For millions of developers and IT pros, a complete platform is exactly what they need. But many platform builders and operators are looking for “boring infrastructure”: a basic component that provides the robust primitives for running containers on their system, bundled in a stable interface, and nothing else. A component that they can customize, extend and swap out as needed, without unnecessary abstraction getting in their way. containerd is built to provide exactly that.

What Docker does best is provide developers and operators with great tools which make them more productive. Those tools come from integrating many different components into a cohesive whole. Most of those components are invented by others – but along the way we find ourselves developing some of those components from scratch. Over time we spin out these components as independent projects which anyone can reuse and contribute back to. containerd is the latest of those components.

containerd has been deployed on millions of machines since April 2016, when it was included in Docker 1.11. Today we are announcing a roadmap to extend containerd, with input from the largest cloud providers – Alibaba Cloud, AWS, Google, IBM, Microsoft – and other active members of the container ecosystem. We will add more Docker Engine functionality to containerd so that containerd 1.0 provides all the core primitives you need to manage containers, with parity on Linux and Windows hosts:

Container execution and supervision
Image distribution
Network Interfaces Management
Local storage
Native plumbing level API
Full OCI support, including the extended OCI image specification

When containerd 1.0 implements that scope, in Q2 2017, Docker and other leading container systems – from AWS ECS to Microsoft ACS, Kubernetes, Mesos and Cloud Foundry – will be able to use it as their core container runtime. containerd will use the OCI standard and be fully OCI compliant.

Over the past 3 years, the adoption of containers with Docker has triggered an unprecedented wave of innovation in our industry. We think containerd will unlock a whole new phase of innovation and growth across the entire container ecosystem, which in turn will benefit every Docker developer and customer.
You can find the up-to-date roadmap, architecture and API definitions in the GitHub repository, and more details about the project in our engineering team’s blog post. We plan to have a summit at the end of February to bring in more contributors; stay tuned for more details about that in the next few weeks.
Thank you to Arnaud Porterie, Michael Crosby, Mickaël Laventure, Stephen Day, Patrick Chanezon and Mike Goelzer from the Docker team, and all the maintainers and contributors of the Docker project for making this project a reality.


The post containerd – a core container runtime project for the industry appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

More details about containerd, Docker’s core container runtime component

Today we announced that Docker is extracting a key component of its platform, a part of the engine plumbing – a core container runtime – and committing to donate it to an open foundation. containerd is designed to be less coupled and easier to integrate with other tool sets. And it is being written and designed to address the requirements of the major cloud providers and container orchestration systems.
Because we know a lot of Docker fans want to know how the internals work, we thought we would share the current state of containerd and what we plan for version 1.0. Before that, it’s a good idea to look at what Docker has become over the last three and a half years.
The Docker platform isn’t just a container runtime. It is in fact a set of integrated tools that allow you to build, ship and run distributed applications. That means Docker handles networking, infrastructure, build, orchestration, authorization, security, and a variety of other services that cover the complete distributed application lifecycle.

The core container runtime, which is containerd, is a small but vital part of the platform. We started breaking out containerd from the rest of the engine in Docker 1.11, planning for this eventual release.
This is a look at Docker Engine 1.12 as it currently is, and how containerd fits in.

You can see that containerd has just the APIs currently necessary to run a container. A GRPC API is called by the Docker Engine, which triggers an execution process. That spins up a supervisor and an executor, which are charged with monitoring and running containers. The container is run (i.e. executed) by runC, which is another plumbing project that we open sourced as a reference implementation of the Open Container Initiative runtime standard.
When containerd reaches 1.0, we plan to have a number of other features from Docker Engine as well.

That feature set and scope of containerd is:

A distribution component that will handle pushing to a registry, without a preference toward a particular vendor.
Networking primitives for the creation of system interfaces and APIs to manage a container's network namespace
Host level storage for image and container filesystems
A GRPC API
A new metrics API in the Prometheus format for internal and container level metrics
Full support of the OCI image spec and runC reference implementation

A more detailed architecture overview is available in the project’s GitHub repository.
This is a look at a future version of Docker Engine leveraging containerd 1.0.

containerd is designed to be embedded into a larger system, rather than being used directly by developers or end-users; and in fact this evolution of Docker plumbing will go unnoticed by end-users. It has a CLI, ctr, designed for debugging and experimentation, and a GRPC API designed for embedding. It is a plumbing component, meant to be integrated into other projects that can benefit from the lessons we’ve learned running containers.
We are at containerd version 0.2.4, so a lot of work remains to be done. We’ve invited the container ecosystem to participate in this project and are pleased to have support from Alibaba, AWS, Google, IBM and Microsoft, who are providing contributors to help develop containerd. You can find the up-to-date roadmap, architecture and API definitions in the GitHub repo, and learn more at the containerd livestream meetup on Friday, December 16th at 10am PST. We also plan to organize a summit at the end of February to bring contributors together.


The post More details about containerd, Docker’s core container runtime component appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Convert ASP.NET Web Servers to Docker with Image2Docker

A major update to Image2Docker was released last week, which adds ASP.NET support to the tool. Now you can take a virtualized web server in Hyper-V and extract a Docker image for each website in the VM – including ASP.NET WebForms, MVC and WebApi apps.

Image2Docker is a PowerShell module which extracts applications from a Windows Virtual Machine image into a Dockerfile. You can use it as a first pass to take workloads from existing servers and move them to Docker containers on Windows.
The tool was first released in September 2016, and we’ve had some great work on it from PowerShell gurus like Docker Captain Trevor Sullivan and Microsoft MVP Ryan Yates. The latest version has enhanced functionality for inspecting IIS – you can now extract ASP.NET websites straight into Dockerfiles.
In Brief
If you have a Virtual Machine disk image (VHD, VHDX or WIM), you can extract all the IIS websites from it by installing Image2Docker and running ConvertTo-Dockerfile like this:
Install-Module Image2Docker
Import-Module Image2Docker
ConvertTo-Dockerfile -ImagePath C:\win-2016-iis.vhd -Artifact IIS -OutputPath c:\i2d2\iis
That will produce a Dockerfile which you can build into a Windows container image, using docker build.
How It Works
The Image2Docker tool (also called “I2D2”) works offline; you don’t need a running VM to connect to. It inspects a Virtual Machine disk image – in Hyper-V VHD or VHDX format, or Windows Imaging WIM format. It looks at the disk for known artifacts, compiles a list of all the artifacts installed on the VM and generates a Dockerfile to package the artifacts.
The Dockerfile uses the microsoft/windowsservercore base image and installs all the artifacts the tool found on the VM disk. The artifacts which Image2Docker scans for are:

IIS & ASP.NET apps
MSMQ
DNS
DHCP
Apache
SQL Server

Some artifacts are more feature-complete than others. Right now (as of version 1.7.1) the IIS artifact is the most complete, so you can use Image2Docker to extract Docker images from your Hyper-V web servers.
Installation
I2D2 is on the PowerShell Gallery, so to use the latest stable version just install and import the module:
Install-Module Image2Docker
Import-Module Image2Docker
If you don’t have the prerequisites to install from the gallery, PowerShell will prompt you to install them.
Alternatively, if you want to use the latest source code (and hopefully contribute to the project), then you need to install the dependencies:
Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201
Install-Module -Name Pester,PSScriptAnalyzer,PowerShellGet
Then you can clone the repo and import the module from local source:
mkdir docker
cd docker
git clone https://github.com/sixeyed/communitytools-image2docker-win.git
cd communitytools-image2docker-win
Import-Module .\Image2Docker.psm1
Running Image2Docker
The module contains one cmdlet that does the extraction: ConvertTo-Dockerfile. The help text gives you all the details about the parameters, but here are the main ones:

ImagePath – path to the VHD | VHDX | WIM file to use as the source
Artifact – specify one artifact to inspect, otherwise all known artifacts are used
ArtifactParam – supply a parameter to the artifact inspector, e.g. for IIS you can specify a single website
OutputPath – location to store the generated Dockerfile and associated artifacts

You can also run in Verbose mode to have Image2Docker tell you what it finds, and how it’s building the Dockerfile.
Walkthrough – Extracting All IIS Websites
This is a Windows Server 2016 VM with five websites configured in IIS, all using different ports:

Image2Docker also supports Windows Server 2012, with support for 2008 and 2003 on its way. The websites on this VM are a mixture of technologies – ASP.NET WebForms, ASP.NET MVC, ASP.NET WebApi, together with a static HTML website.
I took a copy of the VHD, and ran Image2Docker to generate a Dockerfile for all the IIS websites:
ConvertTo-Dockerfile -ImagePath C:\i2d2\win-2016-iis.vhd -Artifact IIS -Verbose -OutputPath c:\i2d2\iis
In verbose mode there’s a whole lot of output, but here are some of the key lines – where Image2Docker has found IIS and ASP.NET, and is extracting website details:
VERBOSE: IIS service is present on the system
VERBOSE: ASP.NET is present on the system
VERBOSE: Finished discovering IIS artifact
VERBOSE: Generating Dockerfile based on discovered artifacts in:
C:\Users\elton\AppData\Local\Temp\865115-6dbb-40e8-b88a-c0142922d954-mount
VERBOSE: Generating result for IIS component
VERBOSE: Copying IIS configuration files
VERBOSE: Writing instruction to install IIS
VERBOSE: Writing instruction to install ASP.NET
VERBOSE: Copying website files from
C:\Users\elton\AppData\Local\Temp\865115-6dbb-40e8-b88a-c0142922d954-mount\websites\aspnet-mvc to
C:\i2d2\iis
VERBOSE: Writing instruction to copy files for aspnet-mvc site
VERBOSE: Writing instruction to create site aspnet-mvc
VERBOSE: Writing instruction to expose port for site aspnet-mvc
When it completes, the cmdlet generates a Dockerfile which turns that web server into a Docker image. The Dockerfile has instructions to install IIS and ASP.NET, copy in the website content, and create the sites in IIS.
Here’s a snippet of the Dockerfile – if you’re not familiar with Dockerfile syntax but you know some PowerShell, then it should be pretty clear what’s happening:
# Install Windows features for IIS
RUN Add-WindowsFeature Web-server, NET-Framework-45-ASPNET, Web-Asp-Net45
RUN Enable-WindowsOptionalFeature -Online -FeatureName IIS-ApplicationDevelopment,IIS-ASPNET45,IIS-BasicAuthentication…

# Set up website: aspnet-mvc
COPY aspnet-mvc /websites/aspnet-mvc
RUN New-Website -Name 'aspnet-mvc' -PhysicalPath "C:\websites\aspnet-mvc" -Port 8081 -Force
EXPOSE 8081
# Set up website: aspnet-webapi
COPY aspnet-webapi /websites/aspnet-webapi
RUN New-Website -Name 'aspnet-webapi' -PhysicalPath "C:\websites\aspnet-webapi" -Port 8082 -Force
EXPOSE 8082
You can build that Dockerfile into a Docker image, run a container from the image and you’ll have all five websites running in a Docker container on Windows. But that’s not the best use of Docker.
When you run applications in containers, each container should have a single responsibility – that makes it easier to deploy, manage, scale and upgrade your applications independently. Image2Docker supports that approach too.
Walkthrough – Extracting a Single IIS Website
The IIS artifact in Image2Docker uses the ArtifactParam flag to specify a single IIS website to extract into a Dockerfile. That gives us a much better way to extract a workload from a VM into a Docker Image:
ConvertTo-Dockerfile -ImagePath C:\i2d2\win-2016-iis.vhd -Artifact IIS -ArtifactParam aspnet-webforms -Verbose -OutputPath c:\i2d2\aspnet-webforms
That produces a much neater Dockerfile, with instructions to set up a single website:
# escape=`
FROM microsoft/windowsservercore
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';"]

# Wait-Service is a tool from Microsoft for monitoring a Windows Service
ADD https://raw.githubusercontent.com/Microsoft/Virtualization-Documentation/live/windows-server-container-tools/Wait-Service/Wait-Service.ps1 /

# Install Windows features for IIS
RUN Add-WindowsFeature Web-server, NET-Framework-45-ASPNET, Web-Asp-Net45
RUN Enable-WindowsOptionalFeature -Online -FeatureName IIS-ApplicationDevelopment,IIS-ASPNET45,IIS-BasicAuthentication,IIS-CommonHttpFeatures,IIS-DefaultDocument,IIS-DirectoryBrowsing

# Set up website: aspnet-webforms
COPY aspnet-webforms /websites/aspnet-webforms
RUN New-Website -Name 'aspnet-webforms' -PhysicalPath "C:\websites\aspnet-webforms" -Port 8083 -Force
EXPOSE 8083

CMD /Wait-Service.ps1 -ServiceName W3SVC -AllowServiceRestart
Note – I2D2 checks which optional IIS features are installed on the VM and includes them all in the generated Dockerfile. You can use the Dockerfile as-is to build an image, or you can review it and remove any features your site doesn’t need, which may have been installed in the VM but aren’t used.
To build that Dockerfile into an image, run:
docker build -t i2d2/aspnet-webforms .
When the build completes, I can run a container to start my ASP.NET WebForms site. I know the site uses a non-standard port, but I don’t need to hunt through the app documentation to find out which one; it’s right there in the Dockerfile: EXPOSE 8083.
This command runs a container in the background, exposes the app port, and stores the ID of the container:
$id = docker run -d -p 8083:8083 i2d2/aspnet-webforms
When the site starts, you’ll see in the container logs that the IIS Service (W3SVC) is running:
> docker logs $id
The Service 'W3SVC' is in the 'Running' state.
Now you can browse to the site running in IIS in the container. But because published ports on Windows containers don’t do loopback yet, if you’re on the machine running the Docker container, you need to use the container’s IP address:
$ip = docker inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' $id
start "http://$($ip):8083"
That will launch your browser and you’ll see your ASP.NET Web Forms application running in IIS, in Windows Server Core, in a Docker container:

Converting Each Website to Docker
You can extract all the websites from a VM into their own Dockerfiles and build images for them all, by following the same process – or scripting it – using the website name as the ArtifactParam:
$websites = @("aspnet-mvc", "aspnet-webapi", "aspnet-webforms", "static")
foreach ($website in $websites) {
    ConvertTo-Dockerfile -ImagePath C:\i2d2\win-2016-iis.vhd -Artifact IIS -ArtifactParam $website -Verbose -OutputPath "c:\i2d2\$website" -Force
    cd "c:\i2d2\$website"
    docker build -t "i2d2/$website" .
}
Note. The Force parameter tells Image2Docker to overwrite the contents of the output path, if the directory already exists.
If you run that script, you’ll see that from the second image onwards the docker build commands run much more quickly. That’s because of how Docker images are built from layers. Each Dockerfile starts with the same instructions to install IIS and ASP.NET, so once those instructions are built into image layers, the layers get cached and reused.
When the builds finish, I have four i2d2 Docker images:
> docker images
REPOSITORY                                    TAG                 IMAGE ID            CREATED              SIZE
i2d2/static                                   latest              cd014b51da19        7 seconds ago        9.93 GB
i2d2/aspnet-webapi                            latest              1215366cc47d        About a minute ago   9.94 GB
i2d2/aspnet-mvc                               latest              0f886c27c93d        3 minutes ago        9.94 GB
i2d2/aspnet-webforms                          latest              bd691e57a537        47 minutes ago       9.94 GB
microsoft/windowsservercore                   latest              f49a4ea104f1        5 weeks ago          9.2 GB
Each of my images has a size of about 10GB, but that’s the virtual image size, which doesn’t account for cached layers. The microsoft/windowsservercore image is 9.2GB, and the i2d2 images all share the layers which install IIS and ASP.NET (which you can see by checking the image with docker history).
The physical storage for all five images (four websites and the Windows base image) is actually around 10.5GB. The original VM was 14GB. If you split each website into its own VM, you’d be looking at over 50GB of storage, with disk files which take a long time to ship.
The Benefits of Dockerized IIS Applications
With our Dockerized websites we get increased isolation at a much lower storage cost. But that’s not the main attraction – what we have here is a set of deployable packages that each encapsulate a single workload.
You can run a container on a Docker host from one of those images, and the website will start up and be ready to serve requests in seconds. You could have a Docker Swarm with several Windows hosts, and create a service from a website image which you can scale up or down across many nodes in seconds.
And you have different web applications which all have the same shape, so you can manage them in the same way. You can build new versions of the apps into images which you can store in a registry running on Windows, so you can run an instance of any version of any app. And when Docker Datacenter comes to Windows, you’ll be able to secure the management of those web applications and any other Dockerized apps with role-based access control and content trust.
Next Steps
Image2Docker is a new tool with a lot of potential. So far the work has been focused on IIS and ASP.NET, and the current version does a good job of extracting websites from VM disks to Docker images. For many deployments, I2D2 will give you a working Dockerfile that you can use to build an image and start working with Docker on Windows straight away.
We’d love to get your feedback on the tool – submit an issue on GitHub if you find a problem, or if you have ideas for enhancements. And of course it’s open source, so you can contribute too.
Additional Resources

Image2Docker: A New Tool For Prototyping Windows VM Conversions
Containerize Windows Workloads With Image2Docker
Run IIS + ASP.NET on Windows 10 with Docker
Awesome Docker – Where to Start on Windows


The post Convert ASP.NET Web Servers to Docker with Image2Docker appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

DockerCon 2017: Registration And CFP Now Open!

DockerCon 2017 tickets are now available! Take advantage of our lowest pricing today – tickets are limited and Early Bird will sell out fast! We have extended DockerCon to a three-day conference, with repeat sessions, hands-on labs and summits taking place on Thursday.
 
Register for DockerCon
 
The DockerCon 2017 Call for Proposals is open! Before you submit your cool hack or session proposals, take a look at our tips for getting selected below. We have narrowed the scope of sessions we’re looking for this year down to Cool Hacks and Use Cases. The deadline for submissions is January 14, 2017 at 11:59 PST.
Submit a talk

Proposal Dos:

Submitting a Cool Hack:
Be novel
Show us your cool hacks and wow us with the interesting ways you can push the boundaries of the Docker stack. Check out past audience favorites like Serverless Docker, In-the-air update of a drone with Docker and Resin.io, and building a UI for container management with Minecraft for inspiration.
Be clear
You do not have to have your hack ready by the submission deadline; rather, plan to clearly explain your hack, what makes it cool and the technologies you will use.
 
All Sessions:
To illustrate the tips below, check out the sample proposals with comments on why they stand out.
Clarify your message
The best talks leave the audience transformed: they come into the session thinking or doing things one way, and they leave armed to think about or solve a problem differently. This means that your session must have solid take-aways that the audience can apply to their use case. We ask for your three key take-aways in the CFP. Make sure to be specific about your audience transformation, i.e. instead of writing “the talk covers orchestration,” write, “the talk will go through a step-by-step process for setting up swarm mode, providing the audience with a live example of how easy it is to use.” This is also a great place to highlight what you will leave them with, i.e. “Attendees will have full unrestricted access to all the code I’m going to write and open-source for the talk.”
Keep in line with the theme of the conference
Conferences are organized around a narrative, and DockerCon is a user conference. That means we’re looking for proposals that will inform and delight attendees on the following topics:
Using Docker
Has Docker technology made you better at what you do? Is Docker an integral part of your company’s tech stack? Do you use Docker to do big things? Infuse your proposal with concrete, first-hand examples about your Docker usage, challenges and what you learned along the way, and inspire us on how to use Docker to accomplish real tasks.
Deep Dives
Propose code- and demo-heavy deep-dive sessions on what you have been able to transform with your use of the Docker stack. Entice your audience by going deeply technical and teaching them how to do something they haven’t done.
Get specific
While you should submit a topic that is broad enough to cover a range of interests, sessions are a maximum of 40 minutes, so don’t try to boil the ocean. Stay focused on content that supports your take-aways so you can deliver a clear and compelling story.
Inspire us
Expand the conversation beyond technical details and inspire attendees to explore new uses. Past examples include Dockerizing CS50: From Cluster to Cloud to Appliance to Container; Shipping Manifests, Bill of Lading and Docker – Metadata for Containers; and Stop Being Lazy and Test Your Software.
Be open
Has your company built tools used in production and/or testing? Remember the buzz around Netflix’s Chaos Monkey when it was released? If you have such a tool, revealing the recipe for your secret sauce is a great way to get your talk on the radar of DockerCon 2017 attendees.
Show that you are engaging
Having a great topic and talk is important, but equally important is execution and delivery. In the CFP, you have the opportunity to provide as much information as you can about presentations you have given. Videos, reviews, and slide decks will add to your credibility as an entertaining speaker.
 
Proposal Don’ts
These items are surefire ways of not getting past the initial review.
Sales pitches
No, just don’t. It’s acceptable to mention your company’s product during a presentation, but it should never be the focus of your talk.
Bulk submissions
If your proposal reads as a generic talk that has been submitted to a number of conferences, it will not pass the initial review. A talk can be a polished version of an earlier talk, but the proposal should be tailored for DockerCon 2017.
Jargon
If the proposal contains jargon, it’s very likely that the presentation will also contain jargon. Although DockerCon 2017 is a technology conference, we value the ability to explain and make your points with clear, easy-to-follow language.
So, what happens next?
After a proposal is submitted, it will be reviewed initially for content and format. Once past the initial review, a committee of reviewers from Docker and the industry will read the proposals and select the best ones. There are a limited number of speaking slots and we work to achieve a balance of presentations that will interest the Docker community.
The deadline for proposal submission is January 14, 2017 at 11:59 PST.
We’re looking forward to reading your proposals!
Submit a talk


The post DockerCon 2017: Registration And CFP Now Open! appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

“Dear Boss, I want to attend the OpenStack Summit”

Want to attend the OpenStack Summit Boston but need help finding the right words to get your trip approved? While we won’t write the whole thing for you, here’s a template to get you going. It’s up to you to decide how the Summit will help your team, but with free workshops and trainings, technical sessions, strategy talks and the opportunity to meet thousands of like-minded Stackers, we don’t think you’ll have a hard time finding an answer.
 
Dear [Boss],
All I want for the holidays is to attend the OpenStack Summit in Boston, May 8-11, 2017. The OpenStack Summit is the largest open source conference in North America, and the only one where I can get free OpenStack training, learn how to contribute code upstream to the project, and meet with other users to learn how they’ve been using OpenStack in production. The Summit is an opportunity for me to bring back knowledge about [Why you want to attend! What are you hoping to learn? What would benefit your team?] and share it with our team, while helping us get to know similar OpenStack-minded teams around the world (think 60+ countries and nearly 1,200 companies represented).
If I register before mid-March, I get early bird pricing: $600 USD for four days (plus an optional day of training). Early registration also allows me to RSVP for trainings and workshops as soon as they open (they always sell out!), or sign up to take the Certified OpenStack Administrator exam onsite.
At the OpenStack Summit Austin last year, over 7,800 attendees heard case studies from Superusers like AT&T and China Mobile, learned how teams are using containers and container orchestration like Kubernetes with OpenStack, and gave feedback to Project Teams about user needs for the upcoming software release. You can browse past Summit content at openstack.org/videos to see a sample of the conference talks.
The OpenStack Summit is the opportunity for me to expand my OpenStack knowledge, network and skills. Thanks for considering my request.
[Your Name]
Source: openstack.org

Learn Docker with More Hands-On Labs

Docker Labs is a rich resource for technical folks from any background to learn Docker. Since the last update on the Docker Blog, three new labs have been published, covering Ruby, SQL Server and running a Registry on Windows. The self-paced, hands-on labs are a popular way for people to learn how to use Docker for specific scenarios, and it’s a resource which is growing with the help of the community.

New Labs

Ruby FAQ. You can Dockerize Ruby and Ruby on Rails apps, but there are considerations around versioning, dependency management and the server runtimes. The Ruby FAQ walks through some of the challenges in moving Ruby apps to Docker and proposes solutions. This lab is just beginning, and we would love to have your contributions.
SQL Server Lab. Microsoft maintains a SQL Server Express image on Docker Hub that runs in a Windows container. That image lets you attach an existing database to the container, but this lab walks you through a full development and deployment process, building a Docker image that packages your own database schema.
Registry Windows Lab. Docker Registry is an open-source registry server for storing Docker images, which you can run in your own network. There’s already an official registry image for Linux, and this lab shows how to build and run a registry server in a Docker container on Windows.

Highlights
Some of the existing labs are worth calling out for the amount of information they provide. There are hours of learning here:

Docker Networking. Walks through a reference architecture for container networks, covering all the major networking concepts in detail, together with tutorials that demonstrate the concepts in action.
Swarm Mode. A beginner tutorial for native clustering which came in Docker 1.12. Explains how to run services, how Docker load-balances with the Routing Mesh, how to scale up and down, and how to safely remove nodes from the swarm.

Fun Facts
In November, the labs repo on GitHub was viewed over 35,000 times. The most popular lab right now is Windows Containers.
The repo contains 244 commits, has been forked 296 times and starred by 1,388 GitHub users. The labs are the work of 35 contributors so far – including members of the community, Docker Captains and folks at Docker, Inc.
Among the labs there are 14 Dockerfiles and 102 pages of documentation, totalling over 77,000 words of Docker learning. It would take around 10 hours to read aloud all the labs!
How to Contribute
If you want to join the contributors, we’d love to add your work to the hands-on labs. Contributing is super easy. The documentation is written in GitHub-flavored markdown and there’s no mandated structure; just make your lab easy to follow and learn from.
Whether you want to add a new lab or update an existing one, the process is the same:

fork the docker/labs repo on GitHub;
clone your forked repo onto your machine;
add your awesome lab, or change an existing lab to make it even more awesome;
commit your changes (and make sure to sign your work);
submit a pull request – the labs maintainers will review, give feedback and publish!


The post Learn Docker with More Hands-On Labs appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Scaling OpenStack With a Shared-Nothing Architecture

The post Scaling OpenStack With a Shared-Nothing Architecture appeared first on Mirantis | The Pure Play OpenStack Company.
When it comes to pushing the boundaries of OpenStack scaling, there are basically two supported constructs: Cells and Regions. With Nova Cells, instance database records are divided across multiple “shards” (i.e., cells).  This division ensures that we can keep scaling our compute capacity without getting bogged down by the limitations of a single relational database cluster or message queue.  This is what we mean by a shared-nothing architecture: Scaling a distributed system without limits by removing single points of contention.
However, in OpenStack, cells currently only exist for Nova.  If we want to extend this kind of paradigm to other OpenStack services such as Neutron, Ceilometer, and so on, then we have to look to OpenStack Regions.  (You may already be looking at using regions for other reasons – for example, to optimize response times with proximity to various geographic localities.)
There are many ways of implementing regions in OpenStack.  You will find online references that show the same Keystone & Horizon shared between multiple regions, with some variants throwing in Glance too, while others exclude Keystone.  These are all variations in expressing the degree to which we want to share a set of common services between multiple cloud environments, versus keeping them separate.  To depict the extremes (sharing everything, vs sharing nothing):

Shared services offer the convenience of a central source of truth (e.g., for user, tenant, and role data in the case of Keystone), a single point of entry (e.g., Keystone for auth or Horizon for the dashboard), and can be less trouble than deploying and managing distributed services.
On the other hand, with this paradigm we can’t horizontally scale the relational database behind Keystone, Horizon’s shared session cache, or other single points of contention that are created when centralizing one of the control plane services.
Beyond scaling itself though, let’s take a look at some other points of discussion between the two:
Flexibility
The shared-nothing paradigm offers the flexibility to support different design decisions and control plane optimizations for different environments, providing a contrast to the “one size fits all” control plane philosophy.
It also permits the operation of different releases of OpenStack in different environments.  For example, we can have a “legacy cloud” running an older/stable OpenStack, at the same time as an “agile cloud” running a more recent, less stable OpenStack release.
Upgrades & Updates
OpenStack has been increasingly modularized by projects that specialize in doing one specific thing (e.g., the Ironic project was a product of the former bare metal driver in Nova).  However, despite this modularization, there remains a tight coupling between most of these components, given their need to work together to make a fully functioning platform.
This tight coupling is a hardship for upgrades, as it often requires a big-bang approach (different components that have to be upgraded at the same time because they won’t work properly in an incremental upgrade scenario or with mixed versions).  Most of the upstream testing is focused on testing of the same versions of components together, not in the mixing of them (especially as we see more and more projects make their way into the big tent).
When we don’t share components between clouds, we open the possibility of performing rolling upgrades that are fully isolated and independent of other environments.  This localizes any disruptions from upgrades, updates, or other changes to one specific environment at a time, and ultimately allows for a much better controlled, fully automated, and lower risk change cycle.
Resiliency & Availability
When sharing components, we have to think about common modes of failure.  For example, even if we deploy Keystone for HA, if we have corruption in the database backend, or apply schema updates (e.g., for upgrades), or take the database offline for any other maintenance reasons, these will all cause outages for the service as a whole, and by extension all of your clouds that rely on this shared service.
Another example: suppose you are using PKI tokens and you need to change the SSL keys that encode and decode tokens.  There is not really any graceful way of making this transition: you have to do a hard cut-over to the new key on all Keystone nodes at the same time, purge all cached signing files stored by every other OpenStack service, and revoke all tokens issued under the old key.
Also, denial of service attacks are both easier to perform and more impactful with shared infrastructure elements.
In contrast, the shared-nothing approach removes common modes of failure and provides full isolation of failure domains.  This is especially relevant for cloud native apps that deploy to multiple regions to achieve their SLAs, where failures are taken to be independent, and where the presence of common modes of failure can invalidate the underlying assumptions of this operational model.
Performance & Scaling
When distributing services, degraded or underperforming shards do not affect the performance or integrity of other shards.  For example, in times of high loading, or denial of service attacks (whether or not malicious in nature), the impacts of these events will be localized and not spread or impact other environments.
Also, faster API response times may be realized (since requests can be processed locally), as well as lower utilization of WAN resources.  Even small latencies can add up (e.g., Keystone calls in particular should be kept as fast as possible to maximize the response time for the overall system).
Scaling out is a simple matter of adding more shards (regions).  As mentioned previously, this also helps get around the fact that we have components that cannot otherwise be horizontally scaled, such as the Horizon shared session cache or the relational database backend.
Design Complexity
An important factor to consider with any deployment paradigm is: “How close is this to the reference upstream architecture?”  The closer we stay to that, the more we benefit from upstream testing, and the less we have to go out and develop our own testing for customizations and deviations from this standard.
Likewise from the operations side, the closer we stick to that reference architecture, the easier time we have with fault isolation, troubleshooting, and support.
If your organization is also doing some of their own OpenStack development, the same statement could also be made about your developers: In effect, the closer your environment is to something that can be easily reproduced with DevStack, the lower the barrier of entry is for your developers to onboard and contribute.  And regardless of whether you are doing any OpenStack development, your dev and staging environments will be easier to setup and maintain for the same reasons.
The elegance of the shared-nothing approach is that it allows you to use this standard, reference deployment pattern, and simply repeat it multiple times.  It remains the same regardless of whether you deploy one or many.  It aims to commoditize the control plane and make it into something to be mass produced at economies of scale.
Challenges
There are two key challenges/prerequisites to realizing a shared-nothing deployment pattern.
The first challenge is the size of the control plane: it should be virtualized, containerized, or at least miniaturized in order to reduce the footprint and minimize the overhead of having a control plane in each environment.  This additional layer may increase deployment complexity and brings its own set of challenges, but is becoming increasingly mainstream in the community (for example, see the TripleO and Kolla OpenStack projects, which are now part of the big tent).
The second challenge is the management and operational aspects of having multiple clouds.  Broadly speaking, you can classify the major areas of cloud management as follows:

Configuration Management (addressed by CM systems like Ansible, Puppet, etc)
OpenStack resource lifecycle management.  Specifically we are interested in those resources that we need to manage as cloud providers, such as:

Public Images
Nova flavors, host aggregates, availability zones
Tenant quotas
User identities, projects, roles
Floating/public networks
Murano catalogs
VM resource pools for Trove or other aaS offerings

Coordinated multi-cloud resource lifecycle management is a promising possibility, because it permits us to get back some of what we sacrificed when we decentralized our deployment paradigm: the single source of truth with the master state of these resources.  But rather than centralizing the entire service itself, we centralize the management of a set of distributed services.  This is the key distinction with how we manage a set of shared-nothing deployments, and leverage the relatively backwards-compatible OpenStack APIs to do multi-cloud orchestration, instead of trying to synchronize database records with an underlying schema that is constantly changing and not backwards-compatible.
What we could envision, then, is a resource gateway that could be used for lifecycle management of OpenStack resources across multiple clouds.  For example, if we want to push out a new public image to all of our clouds, then that request could be sent to this gateway, which would then go and register that image in all our clouds (with the same image name, UUID, and metadata sent to each Glance API endpoint).  Or as an extension, this could be policy driven – e.g., register this image only in those clouds in certain countries, or where certain regulations don’t apply.
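As a rough illustration, here is a minimal Python sketch of that registration flow against the Glance v2 image API.  The endpoints, tokens and UUID below are placeholders, and a real gateway would need proper Keystone authentication, error handling, and the policy logic described above:

import requests

# Hypothetical Glance v2 endpoints and pre-fetched Keystone tokens, one per cloud.
CLOUDS = {
    "cloud-us": ("https://glance.us.example.com:9292", "TOKEN-US"),
    "cloud-eu": ("https://glance.eu.example.com:9292", "TOKEN-EU"),
}

# The master record for a public image: same name, UUID and metadata everywhere.
IMAGE = {
    "id": "11111111-2222-3333-4444-555555555555",  # placeholder UUID
    "name": "ubuntu-16.04-base",
    "disk_format": "qcow2",
    "container_format": "bare",
    "visibility": "public",
}

def register_everywhere(image):
    """Create the same image record via each cloud's Glance v2 API."""
    for cloud, (endpoint, token) in CLOUDS.items():
        resp = requests.post(endpoint + "/v2/images",
                             json=image,
                             headers={"X-Auth-Token": token})
        resp.raise_for_status()
        # Uploading the actual image bytes would be a follow-up
        # PUT to /v2/images/{id}/file on each endpoint.
        print("registered %s in %s" % (image["name"], cloud))

register_everywhere(IMAGE)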
In terms of CAP theory, we are loosening up consistency in favor of availability and partition tolerance.  The resources being managed could be said to be “eventually consistent”, which is reasonable given the types of resources being managed.
Also note that here, we only centralize those resources that cloud operators need to manage (like public images), while private image management is left to the user (as it would be in a public cloud setting).  This also gives the end-user the most control over what goes where – for example, they don’t have to worry about their image being replicated to some other location which may increase their image’s exposure to security threats, or to some other country or jurisdiction where different data laws apply.
There have been a number of implementations designed to address this problem, all originating from the telco space.  Kingbird (started by OPNFV; open source) and ORM (by AT&T, with plans to open source by Q4 2016 – Q1 2017) can be classified as resource management gateways.  Tricircle (Telco working group and OPNFV; open source) is another community project with similar aims.
It will be very interesting to see how these projects come along this year, and to what degree we see a community standard emerge to define the way we implement shared-nothing.  It would also be great to get feedback from anyone else out there who is thinking along similar lines, or if they know of any other implementations that I missed in the list above.  Feel free to comment below!
The post Scaling OpenStack With a Shared-Nothing Architecture appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis

Docker acquires Infinit: a new data layer for distributed applications

The short version: Docker acquired a fantastic company called Infinit. Using their technology, we will provide secure distributed storage out of the box, making it much easier to deploy stateful services and legacy enterprise applications on Docker. This will be delivered in a very open and modular design, so operators can easily integrate their existing storage systems, tune advanced settings, or simply disable the feature altogether. Oh, and we’re going to open-source the whole thing.
The slightly longer version:
At Docker we believe that tools should adapt to the people using them, not the other way around. So we spend a lot of time searching for the most exciting and powerful software technology out there, then integrating it into simple and powerful tools. That is how we discovered a small team of distributed systems engineers based out of Paris, who were working on a next-generation distributed filesystem called Infinit. From the very first demo, two things were immediately clear. First, Infinit is an incredible piece of technology with the potential to change how applications consume and produce data. Second, the Infinit and Docker teams were almost comically similar: the same obsession with decentralized systems, the same empathy for the needs of both developers and operators, and the same taste for simple and modular designs.
Today we are pleased to announce that Infinit is joining the Docker family. We will use the Infinit technology to address one of the most frequent Docker feature requests: distributed storage that “just works” out of the box, and that can integrate with existing storage systems.
Docker users have been driving us in this direction for two reasons. The first is that application portability across any infrastructure has been a central driver for Docker usage. As developers rapidly evolve from single container applications to multi-container applications deployed on a distributed system, they want to make sure their entire application is portable across any type of infrastructure, whether on cloud or on premise, including for the stateful services it may include. Infinit will address that by providing a portable distributed storage engine, in the same way that our SocketPlane acquisition provided a portable distributed overlay networking implementation for Docker.
The second driver has been the rapid adoption of Docker to containerize stateful enterprise applications, as opposed to next-generation stateless apps. Enterprises expect their container platform to have a point of view about persistent storage, but at the same time they want the flexibility of working with their existing vendors like HPE, EMC, Nutanix etc. Infinit addresses this need as well.
With all of our acquisitions, whether it was Conductant, which enabled us to scale powerful large-scale web operations stacks, or SocketPlane, we’ve focused on extending our core capabilities and providing users with modular building blocks to work with and expand. Docker is committed to open sourcing Infinit’s solution in 2017 and adding it to the ever-expanding list of infrastructure plumbing projects that Docker has made available to the community, such as InfraKit, SwarmKit and Notary.
For those who are interested in learning more about the technology, you can watch Infinit CTO Quentin Hocquet’s presentation at Docker Distributed Systems Summit last month, and we have scheduled an online meetup where the Infinit founders will walk through the architecture and do a demo of their solution. A key aspect of the Infinit architecture is that it is completely decentralized. At Docker we believe that decentralization is the only path to creating software systems capable of scaling at Internet scale. With the help of the Infinit team, you should expect more and more decentralized designs coming out of Docker engineering.
A few words from Infinit CEO and founder Julien Quintard:
“We are thrilled to join forces with Docker. Docker has changed the way developers work in order to gain in agility. Stateful applications are the natural next step in this evolution. This is where Infinit comes into play, providing the Docker community with a default storage platform for applications to reliably store their state, be it for a database, logs, a website’s media files and more.”
A few details about the Infinit architecture:

Infinit’s next generation storage platform has been designed to be scalable and resilient while being highly customizable for container environments. The Infinit storage platform has the following characteristics:
- Software-based: can be deployed on any hardware, from legacy appliances to commodity bare metal, virtual machines or even containers.
- Programmatic: developers can easily automate the creation and deployment of multiple storage infrastructures, each tailored to the overlying application’s needs through policy-based capabilities.
- Scalable: by relying on a decentralized architecture (i.e. peer-to-peer), Infinit does away with the leader/follower model, hence does not suffer from bottlenecks and single points of failure.
- Self-healing: Infinit’s rebalancing mechanism allows the system to adapt to various types of failures, including Byzantine.
- Multi-purpose: the Infinit platform provides interfaces for block, object and file storage: NFS, SMB, AWS S3, OpenStack Swift, iSCSI, FUSE, etc.
 
Learn More

Sign up for the next Docker Online meetup on Docker and Infinit: Modern Storage Platform for Container Environments
Read about Docker and Infinit


The post Docker acquires Infinit: a new data layer for distributed applications appeared first on Docker Blog.
Source: https://blog.docker.com/feed/