Docker & Prometheus Joint Holiday Meetup Recap

Last Wednesday we had our 52nd meetup at HQ, but this time we joined forces with the Prometheus user group to host a mega-meetup! There was a great turnout, and members were excited to see the talks on using Docker with Prometheus, OpenTracing and the new Docker playground, play-with-docker.
First up was Stephen Day, a Senior Software Engineer at Docker, who presented a talk entitled ‘The History of Metrics According to Me’. Stephen believes that metrics and monitoring should be built into every piece of software we create, from the ground up. By solving the hard parts of application metrics in Docker, he thinks it becomes more likely that metrics are a part of your services from the start. See the video of his intriguing talk and his slides below.

‘The History of Metrics According to Me’ by Stephen Day from Docker, Inc.

‘The History of Metrics According to Me’ @stevvooe talking metrics and monitoring at the Docker SF meetup! @prometheusIO @CloudNativeFdn pic.twitter.com/6hk0yAtats
— Docker (@docker) December 15, 2016

Next up was Ben Sigelman, an expert in distributed tracing, whose talk ‘OpenTracing Isn’t Just Tracing: Measure Twice, Instrument Once’ was both informative and humorous. He began by describing OpenTracing and explaining why anyone who monitors microservices should care about it. He then stepped back to examine the historical role of operational logging and metrics in distributed system monitoring and illustrated how the OpenTracing API maps to these tried-and-true abstractions. To find out more and see his demo involving donuts, watch the video and browse the slides below.

Last but certainly not least were two of our amazing Docker Captains all the way from Buenos Aires, Marcos Nils and Jonathan Leibiusky! During the Docker Distributed Systems Summit in Berlin last October, they built ‘play-with-docker’. It is a Docker playground which gives you the experience of having a free Alpine Linux Virtual Machine in the cloud where you can build and run Docker containers and even create clusters with Docker features like Swarm Mode. Under the hood, DIND (Docker-in-Docker) is used to give the effect of multiple VMs/PCs. Watch the video below to see how they built it and hear all about the new features.

@marcosnils & @xetorthio sharing at the Docker HQ meetup all the way from Buenos Aires! pic.twitter.com/kXqOZgClMz
— Docker (@docker) December 15, 2016

play-with-docker was a hit with the audience, and there was a line of attendees hoping to speak to Marcos and Jonathan after their talk! All in all, it was a great night thanks to our amazing speakers, Docker meetup members, the Prometheus user group and the CNCF who sponsored drinks and snacks.

The post Docker & Prometheus Joint Holiday Meetup Recap appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Why Bluebee chose IBM as a strategic partner

It is essential for startups to build strategic partnerships with key players in the market.
For my company, Bluebee, a relatively small outfit, larger key players do not necessarily come to mind as logical partners for success. Bluebee offers cloud-based services for genomics analytics on a global scale for research labs, clinical users, diagnostics companies, and next-generation sequencing (NGS) service providers. Data security, compliance and local regulations are critical to our business, given our background in financial technology and high-value payments.
The requirements for our technological architecture to support a global service were crystal clear: we needed the “real cloud,” a true, global cloud solution in which multiple data centers in several remote geographies virtually act as one pool of resources and are capable of elastically provisioning bare metal infrastructure. Today, IBM SoftLayer is the only provider capable of providing us with this service.
Here’s why:
A true cloud partnership
On our journey with IBM SoftLayer, we engaged in the early stages of product design with IBM Power Systems using IBM POWER8 technology. Our team members traveled to the IBM Austin labs to jointly research how to maximize throughput within our infrastructure. Very quickly, we realized that hosting our genomics analytics workloads on POWER8 in SoftLayer garnered the fastest results. We observed a substantial increase in performance when using POWER8 and FPGAs (field-programmable gate arrays) over x86.
The IBM partnership provided us with the exceptional computing capabilities our business demands for our customers' data-intensive workloads.
Flexibility to support genomics demands
Partnering with IBM also allowed us to become an early adopter of the FASP (Aspera) protocol in the genomics domain. Together, we now collaborate to provide our high-volume customers with a fast, flexible, highly scalable, on-demand software solution that overcomes the challenges of large genomic data transfers.
The largest Bluebee client samples are from cancer centers, and are typically upwards of 360 gigabytes per patient. The integration of Aspera’s patented FASP transfer technology now reduces the critical end-to-end turnaround time of computational data analysis while ensuring high-speed and reliable data transfers. This is particularly critical as Bluebee offers high-performance NGS data analysis solutions in 22 data centers across the globe. The combined technologies enable faster and more efficient patient diagnosis and treatment decisions.
Our next combined venture was with IBM Cloud Object Storage, which has a unique offering of resilient object storage across data centers. We again met an IBM agile team, ready to support and help us with a very competitive, long-term storage solution for our customers. In less than a month, we had jointly designed a solution that met our clients’ needs.
Even as a relatively small startup, our search for the right partner ended with a large, key player in the industry. From the beginning, we were focused on an unwavering goal: a global, competitive, state-of-the-art and secure solution for genomics analytics. Our aspirations and the ability of IBM to provide and support a multifaceted solution allowed us to form a truly strategic, successful partnership.
Learn more about infrastructure as a service.
The post Why Bluebee chose IBM as a strategic partner appeared first on news.
Quelle: Thoughts on Cloud

24 Of The Best Gadgets You Can Give This Year

All of 2016's standout tech, to gift (or get for yourself).

This year, I tried and tested dozens of new gadgets – from high-tech vaporizers to headphones to laptops – and what follows are my absolute favorite products of 2016. Whether you're looking for a last-minute stocking stuffer or hoping to splurge on yourself, there should be something in this guide for everyone on your list.

BuzzFeed News

~$50 and under~

Well-Kept Screen Cleansing Towelettes ($6 for 15)

I take a pack of these everywhere. You could try making a DIY screen-cleaning spray, but getting a pack of these low-lint wipes is easier. They remove oil, makeup, and germs from phones, keyboards, mice, and laptops – or any other surface that can harbor acne-causing bacteria.

sephora.com

Anker PowerCore 10000 ($24)

Get this for your friend who always runs out of battery. They will be *so* grateful. This year's portable battery pack from Anker is even smaller and lighter than previous models. It's slightly larger than a credit card (though thicker, obviously) and weighs just six ounces.

The device can hold about three Galaxy S6 and four iPhone 6S charges. Those who need an ultra-high capacity power bank should opt for the PowerCore 20000 ($42), which can hold up to seven iPhone 6S charges.

Anker


Quelle: BuzzFeed

Announcing Federal Security and Compliance Controls for Docker Datacenter

Security and compliance are top of mind for IT organizations. In a technology-first era rife with cyber threats, it is important for enterprises to be able to deploy applications on a platform that adheres to stringent security baselines. This is especially applicable to U.S. Federal Government entities, whose wide-ranging missions, from public safety and national security to enforcing financial regulations, demand tightly controlled, secure systems.

Federal agencies and many non-government organizations are dependent on various standards and security assessments to ensure their systems are operating in controlled environments. One such standard is NIST Special Publication 800-53, which provides a library of security controls to which technology systems should adhere. NIST 800-53 defines three security baselines: low, moderate, and high. The number of security controls that need to be met increases from the low to high baselines, and agencies will elect to meet a specific baseline depending on the requirements of their systems.
Another assessment process, known as the Federal Risk and Authorization Management Program, or FedRAMP for short, further expands upon the NIST 800-53 controls by including additional security requirements at each baseline. FedRAMP is a program that ensures cloud providers meet stringent Federal government security requirements.
When an agency elects to deploy a system like Docker Datacenter for production use, they must complete a security assessment and grant the system an Authorization to Operate (ATO). The FedRAMP program already includes provisional ATOs at specific security baselines for a number of cloud providers, including AWS and Azure, with scope for on-demand compute services (e.g. Virtual Machines, Networking, etc). Since many cloud providers have already met the requirements defined by FedRAMP, an agency that leverages the provider’s services must only authorize the components of its own system that it deploys and manages at the chosen security baseline.
A goal of Docker is to help make it easier for organizations to build compliant enterprise container environments. As such, to help expedite the agency ATO process, we're excited to release NIST 800-53 Revision 4 security and privacy control guidance for Docker Datacenter at the FedRAMP Moderate baseline.
The security content is available in two forms:

An open source project where the community can collaborate on the compliance documentation itself, and
A System Security Plan (SSP) template for Azure Government

First, we've made the guidance available as part of an open source project available here. The documentation in the repository is developed using a format known as OpenControl, an open source, 'compliance-as-code' schema and toolkit that helps software vendors and organizations build compliance documentation. We chose to use OpenControl for this project because we're big fans of tools like this at Docker, and it fits our development principles quite nicely. OpenControl also includes schema definitions for other standards, including the Payment Card Industry Data Security Standard (PCI DSS). This helps to address compliance needs for organizations outside of the public sector. We're also licensing this project under CC0 Universal Public Domain. To accelerate compliance for container platforms, Docker is making this project public domain and inviting folks to contribute to the documentation to help enhance the container compliance story.
 
Second, we're including this documentation in the form of a System Security Plan (SSP) template for running Docker Datacenter on Microsoft Azure Government. The template can be used to help lessen the time it takes for an agency to certify Docker Datacenter for use. To obtain these templates, please contact compliance@docker.com.
We’ve also started to experiment with natural language processing which you’ll find in the project’s repository on GitHub. By using Microsoft’s Cognitive Services Text Analytics API, we put together a simple tool that vets the integrity of the actual security narratives and ensures that what’s written holds true to the NIST 800-53 control definitions. You can think of this as a form of automated proofreading. We’re hoping that this helps to open the door to new and exciting ways to develop content!
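As a rough illustration, here is a minimal PowerShell sketch of that kind of check. The region, API version, endpoint, control ID and narrative file are assumptions for illustration only; substitute your own Cognitive Services subscription key and inputs:
# Assumed endpoint and region; AC-2-narrative.md is a hypothetical control narrative file
$endpoint = "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/keyPhrases"
$narrative = Get-Content .\AC-2-narrative.md -Raw
$body = @{ documents = @( @{ id = "AC-2"; language = "en"; text = $narrative } ) } | ConvertTo-Json -Depth 3
$result = Invoke-RestMethod -Uri $endpoint -Method Post -Body $body -ContentType "application/json" -Headers @{ "Ocp-Apim-Subscription-Key" = "<your-key>" }
# Key phrases extracted from the narrative, ready to compare against the NIST 800-53 control text
$result.documents.keyPhrases
From there it is a short step to flagging narratives whose extracted phrases drift too far from the control definitions.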

More resources for you:

See What’s New and Learn more about Docker Datacenter
Sign up for a free 30 day trial of Docker Datacenter
Learn more about Docker in the public sector.

The post Announcing Federal Security and Compliance Controls for Docker Datacenter appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Enterprise cloud strategy: Platforms and infrastructure in a multi-cloud environment

In past posts about multi-cloud strategy, I've focused on two principles for getting it right — governance, and applications and data — and their importance when working with a cloud services provider (CSP).
The third and final element of your multi-cloud strategy is perhaps most crucial: platform and infrastructure effectiveness to support your application needs.
Deployment flexibility
When managing multiple clouds, you want to deploy applications on platforms that satisfy business, technical, security and compliance requirements. When those platforms come from a CSP, keep these factors in mind:

The platforms should be flexible and adaptable to your ever-changing business needs.
Your CSP should allow you to provision workloads on bare metal servers where performance or strict compliance is needed, and it should also support virtual servers and containers.
The CSP should be able to build and support a private cloud on your premises. That cloud must fulfill your strictest compliance and security needs, as well as support a hybrid cloud model.
The CSP must provide capabilities that help you build applications by stitching together various platform-as-a-service (PaaS) services.
Many customers use containers to port applications. Find out whether your CSP provides container services backed by industry standards. Understand any customization to the standard container service that might create problems.

Seamless connectivity and networking
Applications, APIs and data must travel along networks. Seamless network connectivity across various cloud and on-premises environments is vital to success. Your CSP should be able to integrate with carrier hotels that enable on-demand, direct network connectivity to multiple cloud providers.
Interconnecting through carrier hotels enables automated, near-real-time provisioning of cloud services from multiple providers. It also provides enhanced service orchestration and management capabilities, along with shorter time to market.
Your CSP must also support software-defined and account-defined networks. This helps you maintain network abstraction standards that segregate customers as well as implement network segmentation and isolation.
The CSP should also control network usage with predefined policies. It must intelligently work with cloud-security solutions such as federated and identity-based security systems. Make sure the CSP isolates your data from other clients’ and segments it to meet security and compliance requirements.
Storage interoperability and resiliency
Extracting data from a CSP to migrate applications in-house or to another CSP is the most challenging part of a multi-cloud deployment. In certain cases, such as software-as-a-service (SaaS) platforms, you may not have access to all the data. One reason: there are no standards for cloud storage interoperability. It only gets more complex when you maintain applications across multiple clouds for resiliency.
The solution is to demand that your data can move between clouds and support both open-standard and native APIs. Ask your CSP whether it supports "direct link" co-location partnerships that can "hold" customer-owned storage devices for data egress or legacy workload migrations.
With a sound storage strategy, you'll have good resiliency in case of disaster. Again, questions matter. Does your CSP provide object storage in multi-tenant, single-tenant or on-premises "flavors"?
As with everything else involving a CSP, look carefully under the hood. Find out whether the CSP's storage solution is true hybrid; that is, an on- or off-premises solution that simplifies multi-cloud governance and compliance.
For more information, read “IBM Optimizes Multicloud Strategies for Enterprise Digital Transformation.”
The post Enterprise cloud strategy: Platforms and infrastructure in a multi-cloud environment appeared first on news.
Quelle: Thoughts on Cloud

More details about containerd, Docker’s core container runtime component

Today we announced that Docker is extracting a key component of its platform, a part of the engine plumbing – a core container runtime – and commits to donating it to an open foundation. containerd is designed to be less coupled and easier to integrate with other tool sets, and it is being written and designed to address the requirements of the major cloud providers and container orchestration systems.
Because we know a lot of Docker fans want to know how the internals work, we thought we would share the current state of containerd and what we plan for version 1.0. Before that, it’s a good idea to look at what Docker has become over the last three and a half years.
The Docker platform isn’t a container runtime. It is in fact a set of integrated tools that allow you to build, ship and run distributed applications. That means Docker handles networking, infrastructure, build, orchestration, authorization, security, and a variety of other services that cover the complete distributed application lifecycle.

The core container runtime, which is containerd, is a small but vital part of the platform. We started breaking out containerd from the rest of the engine in Docker 1.11, planning for this eventual release.
This is a look at Docker Engine 1.12 as it currently is, and how containerd fits in.

You can see that containerd has just the APIs currently necessary to run a container. A GRPC API is called by the Docker Engine, which triggers an execution process. That spins up a supervisor and an executor which is charged with monitoring and running containers. The container is run (i.e. executed) by runC, which is another plumbing project that we open sourced as a reference implementation of the Open Container Initiative runtime standard.
When containerd reaches 1.0, we plan to have a number of other features from Docker Engine as well.

That feature set and scope of containerd is:

A distribution component that will handle pushing to a registry, without a preference toward a particular vendor.
Networking primitives for the creation of system interfaces and APIs to manage a container's network namespace
Host level storage for image and container filesystems
A GRPC API
A new metrics API in the Prometheus format for internal and container level metrics
Full support of the OCI image spec and runC reference implementation

A more detailed architecture overview is available in the project’s GitHub repository.
This is a look at a future version of Docker Engine leveraging containerd 1.0.

containerd is designed to be embedded into a larger system, rather than being used directly by developers or end-users; in fact, this evolution of Docker plumbing will go unnoticed by end-users. It has a CLI, ctr, designed for debugging and experimentation, and a GRPC API designed for embedding. It's a plumbing component, meant to be integrated into other projects that can benefit from the lessons we've learned running containers.
We are at containerd version 0.2.4, so a lot of work remains to be done. We've invited the container ecosystem to participate in this project and are pleased to have support from Alibaba, AWS, Google, IBM and Microsoft, who are providing contributors to help develop containerd. You can find the up-to-date roadmap, architecture and API definitions in the GitHub repo, and learn more at the containerd livestream meetup on Friday, December 16th at 10am PST. We also plan to organize a summit at the end of February to bring contributors together.

The post More details about containerd, Docker’s core container runtime component appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Analytics on cloud opens a new world of possibilities

Data has become the most valuable currency and the common thread that binds every function in today’s enterprise. The more an organization puts data to work, the better the outcomes.
How can we harness data in a way that makes lives easier, more efficient and more productive? Where can we find the insight from data that will give a business the edge it needs?
Data intelligence is the result of applying analytics to data. Data intelligence creates more insight, context and understanding, which enable better decisions. Digital intelligence with cloud empowers organizations to pursue game-changing opportunities.
In a Harvard Business Review Analytic Services’ study of business executives, respondents who said they effectively innovate new business models were almost twice as likely to use cloud.

Connecting more roles with more data
Cloud analytics facilitates the connection of all data and insights to all users. This helps lay the foundation for a cognitive business. Trusted access to data is essential for organizations, whether that data is in motion or at rest, internal or external, structured or unstructured.
Besides their own data, companies have many more data sources that can provide insights into their business. Some popular examples include:

social media data
weather data
Thomson Reuters
public sources such as the Centers for Disease Control and Prevention
Internet of Things (IoT) sensor data

Cloud democratizes analytics by enabling companies to deliver more tools and data to more roles. Compared to on-premises solutions, cloud analytics deploys faster and offers a wider variety of analytics tools, including simple, natural language-based options. With cloud’s scalability and flexibility, data volume and diversity have become almost limitless.
More accessible data and tools have created data-hungry professionals.
Application developers must turn creative ideas into powerful mobile, web and enterprise applications. Data scientists must discover hidden insights in data. Business professionals must create and act on insights faster. Data engineers must wrangle, mine and integrate relevant data to harness its power. The collaboration between these roles helps to extract more value from complex data.
Discovering more opportunities
Cloud-based analytics enables organizations to discover new opportunities, with data intelligence at the core. Organizations can uncover more insights by leveraging new technologies and approaches. A cloud platform provides faster, simplified access to the latest technologies, along with the ability to mix and match them, try things out, use what you want and put them back when you're done.
Data science, machine learning and open source let organizations extract insights from large volumes of data in new, iterative ways:

Data science tools enable quick prototyping and design of predictive models.
Machine learning has advanced fraud detection, increased sales forecast accuracy and improved customer segmentation.
Open source tools, such as Apache Spark and Hadoop, help teams conduct complex analytics at high speeds.
More and more, new products and services are built on the cloud. It provides the ideal platform for users to fail fast and innovate quickly.

Accelerating insights with cloud
Organizations with cloud-based analytics speed up outcomes. They iterate, improve business models and release new offerings into the marketplace rapidly.
Cloud underpins this in three ways:

Providing easier access to new technologies sooner
Deploying new data models faster
Enabling quick embedding of insights into process, applications and services

Putting insights into production in real time has become easy and expected. For example, when a retailer wants to trigger the right offer for a customer shopping online, it should be immediate. Speed is essential in offering this personalized experience.
The cloud has helped companies use analytics to respond to volatile market dynamics, establish competitive differentiation and create new business paradigms.
Learn how innovative organizations have harnessed analytics on the cloud in the Harvard Business Review Analytic Services whitepaper, "Powering Digital Intelligence with Cloud Analytics."
The post Analytics on cloud opens a new world of possibilities appeared first on news.
Quelle: Thoughts on Cloud

IDC stacks up top object storage vendors

If you’ve been thinking about object storage for just backup and archive, you’ve missed a turn. In a digital transformation journey, like many that I’ve seen in enterprises, managing unstructured content is key.
The latest "MarketScape: Worldwide Object-Based Storage 2016 Vendor Assessment" from IDC reminds us that:
Digital assets are the new IP and many businesses are actively trying to create new sources of revenue streams through it. For example, media streaming, the Internet of Things (IoT), and web 2.0, are some of the ways businesses are generating revenue in today's digitized world. IT buyers are looking for newer storage technologies that are built not just for unprecedented scale while reducing complexities and costs but also to support traditional (current-generation) and next-generation workloads.

Businesses need to not just be able to store and access data, but also to do something with that data to create value. The type and volume of stored data is rapidly changing, and businesses must look at storage approaches that support today’s storage needs and offer the flexibility needed for future requirements.
In its assessment, IDC placed IBM and IBM Cloud Object Storage (featuring technology from the acquisition of Cleversafe in 2015) in the “leader” category.
As a vendor, I personally cannot be happier or prouder.
Object storage solutions provide the scale and resiliency necessary to efficiently support a set of unstructured content (audio, video, images, scans, documents and so forth) that is ever-growing in size and volume. Yet not all object storage solutions are the same. One key consideration is the platform that the vendor employs and the flexibility a vendor offers when it comes to deployment options.
Business processes are increasingly hybrid. There will be processes and applications that must run inside your data center, managed by your staff and on your servers. Others can run in the public cloud and even be optimized for pure public cloud deployment, while still other elements might be a mix of the two.
If you look at the vendors in the leader category, IBM Cloud Object Storage is the solution that provides proven deployment dexterity: on premises, on the public cloud and in any mix of the two. The public cloud we run on is designed from the ground up with the enterprise in mind. With over 50 IBM Cloud data centers around the world, support for open and industry standards, and the innovation that IBM Watson and IBM Bluemix enable, IBM Cloud Object Storage stands out from the pack. That's not to say that the other leaders aren't worth considering, as IDC makes clear.
With data slated to hit 44 zettabytes by 2020, and 80 percent of that unstructured, according to IDC’s object storage forecast for 2016 to 2020, getting ahead of this dynamic is imperative. Doing it with a leader in object storage just makes business sense.
Try it for yourself. Provision your free tier of object storage on IBM Bluemix, learn more about the overall IBM Cloud Object Storage family and read the full IDC report on IBM.
Read the press release.
Learn more about IBM Cloud Object Storage.
The post IDC stacks up top object storage vendors appeared first on news.
Quelle: Thoughts on Cloud

Convert ASP.NET Web Servers to Docker with Image2Docker

A major update to Image2Docker was released last week, which adds ASP.NET support to the tool. Now you can take a virtualized web server in Hyper-V and extract a Docker image for each website in the VM – including ASP.NET WebForms, MVC and WebApi apps.

Image2Docker is a PowerShell module which extracts applications from a Windows Virtual Machine image into a Dockerfile. You can use it as a first pass to take workloads from existing servers and move them to Docker containers on Windows.
The tool was first released in September 2016, and we've had some great work on it from PowerShell gurus like Docker Captain Trevor Sullivan and Microsoft MVP Ryan Yates. The latest version has enhanced functionality for inspecting IIS – you can now extract ASP.NET websites straight into Dockerfiles.
In Brief
If you have a Virtual Machine disk image (VHD, VHDX or WIM), you can extract all the IIS websites from it by installing Image2Docker and running ConvertTo-Dockerfile like this:
Install-Module Image2Docker
Import-Module Image2Docker
ConvertTo-Dockerfile -ImagePath C:\win-2016-iis.vhd -Artifact IIS -OutputPath c:\i2d2\iis
That will produce a Dockerfile which you can build into a Windows container image, using docker build.
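For example, assuming the Dockerfile lands in c:\i2d2\iis as above (the image tag here is just an illustration, not a convention from the tool):
cd c:\i2d2\iis
docker build -t i2d2/iis .
The resulting image can then be run with docker run like any other Windows container image.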
How It Works
The Image2Docker tool (also called 'I2D2') works offline; you don't need to have a running VM to connect to. It inspects a Virtual Machine disk image – in Hyper-V VHD or VHDX format, or Windows Imaging WIM format. It looks at the disk for known artifacts, compiles a list of all the artifacts installed on the VM and generates a Dockerfile to package the artifacts.
The Dockerfile uses the microsoft/windowsservercore base image and installs all the artifacts the tool found on the VM disk. The artifacts which Image2Docker scans for are:

IIS & ASP.NET apps
MSMQ
DNS
DHCP
Apache
SQL Server

Some artifacts are more feature-complete than others. Right now (as of version 1.7.1) the IIS artifact is the most complete, so you can use Image2Docker to extract Docker images from your Hyper-V web servers.
Installation
I2D2 is on the PowerShell Gallery, so to use the latest stable version just install and import the module:
Install-Module Image2Docker
Import-Module Image2Docker
If you don't have the prerequisites to install from the gallery, PowerShell will prompt you to install them.
Alternatively, if you want to use the latest source code (and hopefully contribute to the project), then you need to install the dependencies:
Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201
Install-Module -Name Pester,PSScriptAnalyzer,PowerShellGet
Then you can clone the repo and import the module from local source:
mkdir docker
cd docker
git clone https://github.com/sixeyed/communitytools-image2docker-win.git
cd communitytools-image2docker-win
Import-Module .\Image2Docker.psm1
Running Image2Docker
The module contains one cmdlet that does the extraction: ConvertTo-Dockerfile. The help text gives you all the details about the parameters, but here are the main ones:

ImagePath – path to the VHD | VHDX | WIM file to use as the source
Artifact – specify one artifact to inspect, otherwise all known artifacts are used
ArtifactParam – supply a parameter to the artifact inspector, e.g. for IIS you can specify a single website
OutputPath – location to store the generated Dockerfile and associated artifacts

You can also run in Verbose mode to have Image2Docker tell you what it finds, and how it's building the Dockerfile.
Walkthrough – Extracting All IIS Websites
This is a Windows Server 2016 VM with five websites configured in IIS, all using different ports:

Image2Docker also supports Windows Server 2012, with support for 2008 and 2003 on its way. The websites on this VM are a mixture of technologies – ASP.NET WebForms, ASP.NET MVC, ASP.NET WebApi, together with a static HTML website.
I took a copy of the VHD, and ran Image2Docker to generate a Dockerfile for all the IIS websites:
ConvertTo-Dockerfile -ImagePath C:\i2d2\win-2016-iis.vhd -Artifact IIS -Verbose -OutputPath c:\i2d2\iis
In verbose mode there's a whole lot of output, but here are some of the key lines – where Image2Docker has found IIS and ASP.NET, and is extracting website details:
VERBOSE: IIS service is present on the system
VERBOSE: ASP.NET is present on the system
VERBOSE: Finished discovering IIS artifact
VERBOSE: Generating Dockerfile based on discovered artifacts in
C:\Users\elton\AppData\Local\Temp\865115-6dbb-40e8-b88a-c0142922d954-mount
VERBOSE: Generating result for IIS component
VERBOSE: Copying IIS configuration files
VERBOSE: Writing instruction to install IIS
VERBOSE: Writing instruction to install ASP.NET
VERBOSE: Copying website files from
C:\Users\elton\AppData\Local\Temp\865115-6dbb-40e8-b88a-c0142922d954-mount\websites\aspnet-mvc to
C:\i2d2\iis
VERBOSE: Writing instruction to copy files for aspnet-mvc site
VERBOSE: Writing instruction to create site aspnet-mvc
VERBOSE: Writing instruction to expose port for site aspnet-mvc
When it completes, the cmdlet generates a Dockerfile which turns that web server into a Docker image. The Dockerfile has instructions to install IIS and ASP.NET, copy in the website content, and create the sites in IIS.
Here's a snippet of the Dockerfile – if you're not familiar with Dockerfile syntax but you know some PowerShell, then it should be pretty clear what's happening:
# Install Windows features for IIS
RUN Add-WindowsFeature Web-server, NET-Framework-45-ASPNET, Web-Asp-Net45
RUN Enable-WindowsOptionalFeature -Online -FeatureName IIS-ApplicationDevelopment,IIS-ASPNET45,IIS-BasicAuthentication…

# Set up website: aspnet-mvc
COPY aspnet-mvc /websites/aspnet-mvc
RUN New-Website -Name 'aspnet-mvc' -PhysicalPath "C:\websites\aspnet-mvc" -Port 8081 -Force
EXPOSE 8081
# Set up website: aspnet-webapi
COPY aspnet-webapi /websites/aspnet-webapi
RUN New-Website -Name 'aspnet-webapi' -PhysicalPath "C:\websites\aspnet-webapi" -Port 8082 -Force
EXPOSE 8082
You can build that Dockerfile into a Docker image, run a container from the image and you'll have all five websites running in a Docker container on Windows. But that's not the best use of Docker.
When you run applications in containers, each container should have a single responsibility – that makes it easier to deploy, manage, scale and upgrade your applications independently. Image2Docker supports that approach too.
Walkthrough – Extracting a Single IIS Website
The IIS artifact in Image2Docker uses the ArtifactParam flag to specify a single IIS website to extract into a Dockerfile. That gives us a much better way to extract a workload from a VM into a Docker Image:
ConvertTo-Dockerfile -ImagePath C:\i2d2\win-2016-iis.vhd -Artifact IIS -ArtifactParam aspnet-webforms -Verbose -OutputPath c:\i2d2\aspnet-webforms
That produces a much neater Dockerfile, with instructions to set up a single website:
# escape=`
FROM microsoft/windowsservercore
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';"]

# Wait-Service is a tool from Microsoft for monitoring a Windows Service
ADD https://raw.githubusercontent.com/Microsoft/Virtualization-Documentation/live/windows-server-container-tools/Wait-Service/Wait-Service.ps1 /

# Install Windows features for IIS
RUN Add-WindowsFeature Web-server, NET-Framework-45-ASPNET, Web-Asp-Net45
RUN Enable-WindowsOptionalFeature -Online -FeatureName IIS-ApplicationDevelopment,IIS-ASPNET45,IIS-BasicAuthentication,IIS-CommonHttpFeatures,IIS-DefaultDocument,IIS-DirectoryBrowsing

# Set up website: aspnet-webforms
COPY aspnet-webforms /websites/aspnet-webforms
RUN New-Website -Name 'aspnet-webforms' -PhysicalPath "C:\websites\aspnet-webforms" -Port 8083 -Force
EXPOSE 8083

CMD /Wait-Service.ps1 -ServiceName W3SVC -AllowServiceRestart
Note – I2D2 checks which optional IIS features are installed on the VM and includes them all in the generated Dockerfile. You can use the Dockerfile as-is to build an image, or you can review it and remove any features your site doesn't need, which may have been installed in the VM but aren't used.
To build that Dockerfile into an image, run:
docker build -t i2d2/aspnet-webforms .
When the build completes, I can run a container to start my ASP.NET WebForms site. I know the site uses a non-standard port, but I don't need to hunt through the app documentation to find out which one; it's right there in the Dockerfile: EXPOSE 8083.
This command runs a container in the background, exposes the app port, and stores the ID of the container:
$id = docker run -d -p 8083:8083 i2d2/aspnet-webforms
When the site starts, you'll see in the container logs that the IIS Service (W3SVC) is running:
> docker logs $id
The Service 'W3SVC' is in the 'Running' state.
Now you can browse to the site running in IIS in the container, but because published ports on Windows containers don't do loopback yet, if you're on the machine running the Docker container, you need to use the container's IP address:
$ip = docker inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' $id
start "http://$($ip):8083"
That will launch your browser and you'll see your ASP.NET Web Forms application running in IIS, in Windows Server Core, in a Docker container:

Converting Each Website to Docker
You can extract all the websites from a VM into their own Dockerfiles and build images for them all, by following the same process – or scripting it – using the website name as the ArtifactParam:
$websites = @("aspnet-mvc", "aspnet-webapi", "aspnet-webforms", "static")
foreach ($website in $websites) {
    ConvertTo-Dockerfile -ImagePath C:\i2d2\win-2016-iis.vhd -Artifact IIS -ArtifactParam $website -Verbose -OutputPath "c:\i2d2\$website" -Force
    cd "c:\i2d2\$website"
    docker build -t "i2d2/$website" .
}
Note. The Force parameter tells Image2Docker to overwrite the contents of the output path, if the directory already exists.
If you run that script, you'll see from the second image onwards the docker build commands run much more quickly. That's because of how Docker images are built from layers. Each Dockerfile starts with the same instructions to install IIS and ASP.NET, so once those instructions are built into image layers, the layers get cached and reused.
When the builds finish, I have four i2d2 Docker images:
> docker images
REPOSITORY                                    TAG                 IMAGE ID            CREATED              SIZE
i2d2/static                                   latest              cd014b51da19        7 seconds ago        9.93 GB
i2d2/aspnet-webapi                            latest              1215366cc47d        About a minute ago   9.94 GB
i2d2/aspnet-mvc                               latest              0f886c27c93d        3 minutes ago        9.94 GB
i2d2/aspnet-webforms                          latest              bd691e57a537        47 minutes ago       9.94 GB
microsoft/windowsservercore                   latest              f49a4ea104f1        5 weeks ago          9.2 GB
Each of my images has a size of about 10GB but that's the virtual image size, which doesn't account for cached layers. The microsoft/windowsservercore image is 9.2GB, and the i2d2 images all share the layers which install IIS and ASP.NET (which you can see by checking the image with docker history).
The physical storage for all five images (four websites and the Windows base image) is actually around 10.5GB. The original VM was 14GB. If you split each website into its own VM, you'd be looking at over 50GB of storage, with disk files which take a long time to ship.
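As a quick check of that layer sharing, using the image names from this walkthrough, you can compare the history of two of the images and see that they repeat the same base, IIS and ASP.NET setup steps:
docker history i2d2/aspnet-mvc
docker history i2d2/aspnet-webapi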
The Benefits of Dockerized IIS Applications
With our Dockerized websites we get increased isolation with a much lower storage cost. But that's not the main attraction – what we have here are a set of deployable packages that each encapsulate a single workload.
You can run a container on a Docker host from one of those images, and the website will start up and be ready to serve requests in seconds. You could have a Docker Swarm with several Windows hosts, and create a service from a website image which you can scale up or down across many nodes in seconds.
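As a sketch of that scenario (the service name and replica count here are arbitrary), with Swarm Mode already initialized on your Windows hosts you could run something like:
docker service create --name webforms --publish 8083:8083 --replicas 3 i2d2/aspnet-webforms
docker service scale webforms=5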
And you have different web applications which all have the same shape, so you can manage them in the same way. You can build new versions of the apps into images which you can store in a Windows registry, so you can run an instance of any version of any app. And when Docker Datacenter comes to Windows, you'll be able to secure the management of those web applications and any other Dockerized apps with role-based access control, and content trust.
Next Steps
Image2Docker is a new tool with a lot of potential. So far the work has been focused on IIS and ASP.NET, and the current version does a good job of extracting websites from VM disks to Docker images. For many deployments, I2D2 will give you a working Dockerfile that you can use to build an image and start working with Docker on Windows straight away.
We'd love to get your feedback on the tool – submit an issue on GitHub if you find a problem, or if you have ideas for enhancements. And of course it's open source so you can contribute too.
Additional Resources

Image2Docker: A New Tool For Prototyping Windows VM Conversions
Containerize Windows Workloads With Image2Docker
Run IIS + ASP.NET on Windows 10 with Docker
Awesome Docker – Where to Start on Windows

The post Convert ASP.NET Web Servers to Docker with Image2Docker appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Modeling complex applications with Kubernetes AppController

When you're first looking at Kubernetes applications, it's common to see a simple scenario that may include several pieces, but no explicit dependencies. But what happens when you have an application that does include dependencies? For example, what happens if the database must always be configured before the web servers, and so on? It's common for situations to arise in which resources need to be created in a specific order, which isn't easily accommodated with today's templates.
To solve this problem, Mirantis Development Manager for Kubernetes projects Piotr Siwczak explained the concept and implementation of the Kubernetes AppController, which enables you to orchestrate and manage the creation of dependencies for a multi-part application as part of the deployment process.
You can see the entire presentation below:

The post Modeling complex applications with Kubernetes AppController appeared first on Mirantis | The Pure Play OpenStack Company.
Quelle: Mirantis