How To Dockerize Vendor Apps like Confluence

Docker Datacenter customer Shawn Bower of Cornell University recently shared his team's experience containerizing Confluence as the start of their Docker journey.
Through that project they were able to demonstrate a 10X savings in application maintenance, reduce the time to build a disaster recovery plan from days to 30 minutes, and improve the security profile of their Confluence deployment. This change allowed the Cloudification team that Shawn leads to start spending the majority of their time helping Cornellians use technology to innovate.
Since the original blog was posted, there have been many requests for the pragmatic details of how Cornell actually did this project.  In the post below, Shawn provides detailed instructions on how Confluence is containerized and how the Docker workflow is integrated with Puppet.

Written by Shawn Bower
As we started our journey to move Confluence to the cloud using Docker, we were emboldened by the following post from Atlassian. We use many of the Atlassian products and love how well integrated they are.  In this post I will walk you through the process we used to get Confluence in a container and running.
First we needed to craft a Dockerfile.  At Cornell we use image inheritance, which enables our automated patching and security scanning process.  We start with the canonical ubuntu image: https://hub.docker.com/_/ubuntu/ and then build on defaults used here at Cornell.  Our base image is available publicly on GitHub here: https://github.com/CU-CommunityApps/docker-base.
Let’s take a look at the Dockerfile.
FROM ubuntu:14.04

# File Author / Maintainer
MAINTAINER Shawn Bower <my email address>

# Install.
RUN \
  apt-get update && apt-get install --no-install-recommends -y \
    build-essential \
    curl \
    git \
    unzip \
    vim \
    wget \
    ruby \
    ruby-dev \
    clamav-daemon \
    openssh-client && \
  rm -rf /var/lib/apt/lists/*

RUN rm /etc/localtime
RUN ln -s /usr/share/zoneinfo/America/New_York /etc/localtime

# Clamav stuff
RUN freshclam -v && \
  mkdir /var/run/clamav && \
  chown clamav:clamav /var/run/clamav && \
  chmod 750 /var/run/clamav

COPY conf/clamd.conf /etc/clamav/clamd.conf

RUN echo "gem: --no-ri --no-rdoc" > ~/.gemrc && \
  gem install json_pure -v 1.8.1 && \
  gem install puppet -v 3.7.5 && \
  gem install librarian-puppet -v 2.1.0 && \
  gem install hiera-eyaml -v 2.1.0

# Set environment variables.
ENV HOME /root

# Define working directory.
WORKDIR /root

# Define default command.
CMD ["bash"]

At Cornell we use Puppet for configuration management so we bake that directly into our base image.  We do a few other things like setting the timezone and installing the clamav agent, as we have some applications that use that for virus scanning.  We have an automated project in Jenkins that pulls the latest ubuntu:14.04 image from Docker Hub and then builds this base image every weekend.  Once the base image is built we tag it with ‘latest’ and a timestamp tag, and automatically push it to our local Docker Trusted Registry.  This allows the brave to pull in patches continuously while allowing others to pin to a specific version until they are ready to migrate.  From that image we create a base Java image which installs Oracle’s JVM.
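A minimal sketch of that weekend job, assuming a hypothetical DTR hostname (the real Jenkins job will differ); it prints the commands rather than running them, so you can inspect the tagging scheme:

```shell
# Dry-run sketch of the weekend base-image rebuild (hypothetical registry name).
REGISTRY="dtr.example.edu"        # assumption: your Docker Trusted Registry host
IMAGE="${REGISTRY}/cs/base"
STAMP="$(date +%Y%m%d)"           # timestamp tag, e.g. 20161010

for cmd in \
    "docker pull ubuntu:14.04" \
    "docker build -t ${IMAGE}:latest ." \
    "docker tag ${IMAGE}:latest ${IMAGE}:${STAMP}" \
    "docker push ${IMAGE}:latest" \
    "docker push ${IMAGE}:${STAMP}"
do
    echo "${cmd}"                 # replace echo with eval in the real job
done
```

Pushing both tags is what lets consumers choose between tracking ‘latest’ and pinning to a dated build.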
The Dockerfile is available here and explained below.
# Pull base image.
FROM <your-DTR-host>/cs/base

# Install Java.
RUN \
  apt-get update && \
  apt-get -y install software-properties-common && \
  add-apt-repository ppa:webupd8team/java -y && \
  apt-get update && \
  echo "oracle-java8-installer shared/accepted-oracle-license-v1-1 select true" | debconf-set-selections && \
  apt-get install -y oracle-java8-installer && \
  apt-get install -y oracle-java8-set-default && \
  rm -rf /var/lib/apt/lists/*

# Define commonly used JAVA_HOME variable
ENV JAVA_HOME /usr/lib/jvm/java-8-oracle

# Define working directory.
WORKDIR /data

# Define default command.
CMD ["bash"]

The same automated patching process is followed for the Java image as for the base image.  The Java image is automatically built after the base image and tagged accordingly, so there is a matching set of base and java8 images.  Now that we have our Java image we can layer on Confluence.  Our Confluence repository is private, but the important bits of the Dockerfile are below.
FROM <your-DTR-host>/cs/java8

# Configuration variables.
ENV CONF_HOME     /var/local/atlassian/confluence
ENV CONF_INSTALL  /usr/local/atlassian/confluence
ENV CONF_VERSION  5.8.18

ARG environment=local

# Install Atlassian Confluence and helper tools and setup initial home
# directory structure.
RUN set -x \
  && apt-get update --quiet \
  && apt-get install --quiet --yes --no-install-recommends libtcnative-1 xmlstarlet \
  && apt-get clean \
  && mkdir -p                "${CONF_HOME}" \
  && chmod -R 700            "${CONF_HOME}" \
  && chown daemon:daemon     "${CONF_HOME}" \
  && mkdir -p                "${CONF_INSTALL}/conf" \
  && curl -Ls                "http://www.atlassian.com/software/confluence/downloads/binary/atlassian-confluence-${CONF_VERSION}.tar.gz" | tar -xz --directory "${CONF_INSTALL}" --strip-components=1 --no-same-owner \
  && chmod -R 700            "${CONF_INSTALL}/conf" \
  && chmod -R 700            "${CONF_INSTALL}/temp" \
  && chmod -R 700            "${CONF_INSTALL}/logs" \
  && chmod -R 700            "${CONF_INSTALL}/work" \
  && chown -R daemon:daemon  "${CONF_INSTALL}/conf" \
  && chown -R daemon:daemon  "${CONF_INSTALL}/temp" \
  && chown -R daemon:daemon  "${CONF_INSTALL}/logs" \
  && chown -R daemon:daemon  "${CONF_INSTALL}/work" \
  && echo -e                 "\nconfluence.home=${CONF_HOME}" >> "${CONF_INSTALL}/confluence/WEB-INF/classes/confluence-init.properties" \
  && xmlstarlet              ed --inplace \
       --delete              "Server/@debug" \
       --delete              "Server/Service/Connector/@debug" \
       --delete              "Server/Service/Connector/@useURIValidationHack" \
       --delete              "Server/Service/Connector/@minProcessors" \
       --delete              "Server/Service/Connector/@maxProcessors" \
       --delete              "Server/Service/Engine/@debug" \
       --delete              "Server/Service/Engine/Host/@debug" \
       --delete              "Server/Service/Engine/Host/Context/@debug" \
       "${CONF_INSTALL}/conf/server.xml"

# bust cache
ADD version /version

# RUN Puppet
WORKDIR /
COPY Puppetfile /
COPY keys/ /keys

RUN mkdir -p /root/.ssh/ && \
  cp /keys/id_rsa /root/.ssh/id_rsa && \
  chmod 400 /root/.ssh/id_rsa && \
  touch /root/.ssh/known_hosts && \
  ssh-keyscan github.com >> /root/.ssh/known_hosts && \
  librarian-puppet install && \
  puppet apply --modulepath=/modules --hiera_config=/modules/confluence/hiera.yaml \
    --environment=${environment} -e "class { 'confluence::app': }" && \
  rm -rf /modules && \
  rm -rf /Puppetfile* && \
  rm -rf /root/.ssh && \
  rm -rf /keys

USER daemon:daemon

# Expose default HTTP connector port.
EXPOSE 8080

VOLUME ["${CONF_INSTALL}/logs"]

# Set the default working directory to the Confluence home directory.
WORKDIR ${CONF_HOME}

# Run Atlassian Confluence as a foreground process by default.
CMD ["/usr/local/atlassian/confluence/bin/catalina.sh", "run"]

We bring down the install media from Atlassian, explode it into the install path, and do a bit of cleanup on some of the XML configs.  We use the Docker build cache for that part of the process because it does not change often.  After the Confluence installation we bust the cache by adding a version file that changes each time the build runs in Jenkins.  This ensures that Puppet will run in the container and configure the environment.  Puppet is used to lay down environment-specific (dev, test, prod, etc.) configuration, selected by a Docker build argument called ‘environment.’  This allows us to bake everything needed to run Confluence into the image so we can launch it on any machine with no extra configuration.  Whether to store the configuration in the image or outside it is a contested subject for sure, but our decision was to store all configuration directly in the image. We believe this ensures the highest level of portability.
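The cache-busting step and the environment build argument fit together like this (a sketch; the image tag and the Jenkins wiring are hypothetical, and the build command is echoed rather than executed):

```shell
# Write a fresh version file so the `ADD version /version` step (and every
# step after it, including the Puppet run) is rebuilt on each Jenkins build.
date +%s > version

# Then build one image per environment via the `environment` build argument
# (hypothetical image tag shown).
echo "docker build --build-arg environment=test -t cs/confluence:test ."
```

Because everything before `ADD version /version` stays cached, the expensive Confluence download happens only when that layer actually changes.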
Here are some general rules we follow with Docker:

Use base images that are a part of the automated patching
Follow Dockerfile best practices
Keep the base infrastructure in a Dockerfile, and environment specific information in Puppet
Build one process per container
Keep all components of the stack in one repository
If the stack has multiple components (e.g., Apache, Tomcat) they should live in the same repository
Use subdirectories for each component

We hope you enjoyed this post and that it gets you containerizing some vendor apps. This is just the beginning; we recently moved a legacy ColdFusion app into Docker, and almost anything can probably be containerized!


More Resources

Try Docker Datacenter free for 30 days
Learn more about Docker Datacenter
Read the blog post – It all started with containerizing Confluence at Cornell
Watch the webinar featuring Shawn and Docker at Cornell

The post How To Dockerize Vendor Apps like Confluence appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Weather Company Data Service helps developers build smarter, more precise apps

Weather patterns are constantly changing. It’s frustrating not to have an umbrella at just the right time, or the right jacket when a cold front comes through while you’re at work, or to have a storm roll through the outdoor picnic you were looking forward to.
Inconsistency in weather can wreak even greater havoc on businesses. In 2014 alone, the U.S. economy lost nearly $50 billion in sales and 76,000 jobs because of weather, CNBC reported.
The good news is that it’s becoming easier than ever to build apps that can help to mitigate this damage. These apps give businesses the intelligence to more accurately plan for and predict weather changes, which can impact everything from sales to customer satisfaction. Recently, IBM launched The Weather Company Data Service on the Bluemix cloud platform. The service gives developers easier, instant access to APIs that provide extended forecast data, geo-specific weather intelligence and highly dependable information.

Using Bluemix to rapidly pull broad and accurate weather data streams into their apps, developers can now build the most predictive and smartest apps for industries in which weather is a concern. They can help predict a storm’s potential impact on consumer behavior or fluctuations in crop prices.
These new APIs are built on recent IBM acquisition The Weather Company’s data platform, as well as the previously available Insights for Weather service on Bluemix. They increase the access that developers have to extended forecasts — up to 48 hours — as well as new intra-day forecasts of up to 10 days, international weather alerts, geo-coding services, and daily and monthly almanac intelligence.
This intelligence and the ability that cloud provides to easily build with these APIs means that almost every industry can benefit from working weather knowledge into its operations. For example, in the aviation industry, weather data can help airlines improve the efficiency and performance of flights, from alerting them to turbulent air patterns to planning for fuel consumption and airport congestion.
In retail, these same insights can be used for a variety of planning strategies. They can help retailers optimize inventory based on weather-triggered purchase patterns, such as stocking more sweaters for an upcoming cold front, or better plan for staffing needs, perhaps increasing the number of sales associates when nice weather is likely to bump up foot traffic.
With these new capabilities on IBM Cloud, developers now have the ability to build into their apps a real-time forecast data grid which is 100 times more precise than publicly available sources, down to a 500-square-meter resolution. They can also tie into governmental alert headlines and details, as well as the world’s most accurate meteorological models from The Weather Company.
To get started with the service, check out The Weather Company Data Service for IBM Bluemix catalog, or watch an overview video on the Bluemix Developers’ Community blog.
The post Weather Company Data Service helps developers build smarter, more precise apps appeared first on Thoughts on Cloud.
Quelle: Thoughts on Cloud

How to Develop Cloud Applications for OpenStack using Murano, Part 2: Creating the Development Environment

The post How to Develop Cloud Applications for OpenStack using Murano, Part 2: Creating the Development Environment appeared first on Mirantis | The Pure Play OpenStack Company.
In part 1 of this series, we talked about what Murano is, and why you’d want to use it as a platform for developing end user applications. Now in part 2 we’ll help you get set up for doing the actual development.
All that you need to develop your Murano App is:

A text editor to edit source code. There is no special IDE required; a plain text editor will do.
OpenStack with Murano. You will, of course, want to test your Murano App, so you’ll need an environment in which to run it.

Since there’s no special setup for the text editor, let’s move on to getting a functional OpenStack cluster with Murano.
Where to find OpenStack Murano
If you don’t already have access to a cloud with Murano deployed, that’ll be your first task.  (You’ll know Murano is available if you see an “Applications” tab in Horizon.)
There are two possible ways to deploy OpenStack and Murano:

You can install vanilla OpenStack (raw upstream code) using the DevStack scripts, but you’ll need to do some manual configuration for Murano. If you want to take this route, you can find out how to install DevStack with Murano here.
You can take the easy way out and use one of the ready-to-use commercial distros that come with Murano to install OpenStack.
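If you take the DevStack route, enabling Murano typically comes down to a plugin line in your local.conf (a sketch; check the Murano DevStack documentation for the exact, current form):

```
# local.conf fragment enabling the Murano DevStack plugin
[[local|localrc]]
enable_plugin murano https://git.openstack.org/openstack/murano
```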

If this is your first time, I recommend that you start with one of the ready-to-use commercial OpenStack distros, for several reasons:

A distro is more stable and has fewer bugs, so you won’t waste your time on OpenStack deployment troubleshooting.
A distro will let you see how a correctly configured OpenStack cloud should look.
A distro doesn’t require a deep dive into OpenStack deployment, which means you can fully concentrate on developing your Murano App.

I recommend that you install the Mirantis OpenStack distro (MOS) because deploying Murano with it couldn’t be simpler; you just need to tick one checkbox before deploying OpenStack and that’s all. (You can choose any other commercial distro, but most of them are not able to install Murano automatically. You can find out how to install Murano manually on an already deployed OpenStack cloud here.)
Deploying OpenStack with Murano
You can get all of the details about Mirantis OpenStack in the Official Mirantis OpenStack Documentation, but here are the basic steps. You can follow them on Windows, Mac, or Linux; in my case, I’m using a laptop running Mac OS X with 8GB RAM. We’ll create virtual machines rather than trying to cobble together multiple pieces of hardware:

If you don’t already have it installed, download and install Oracle VirtualBox. In this tutorial we’ll use VirtualBox 5.1.2 for OS X (VirtualBox-5.1.2-108956-OSX.dmg).
Download and install the Oracle VM VirtualBox Extension Pack. (Make sure you use the right download for your version of VirtualBox. In my case, that means Oracle_VM_VirtualBox_Extension_Pack-5.1.2-108956.vbox-extpack.)
Download the Mirantis OpenStack image.
Download the Mirantis OpenStack VirtualBox scripts.
Unzip the script archive and copy the Mirantis OpenStack .ISO image to the virtualbox/iso folder.
You can optionally edit config.sh if you want to set up a custom password or edit network settings. There are a lot of detailed comments, so it will not be a problem to configure your main parameters.
From the command line, launch the launch.sh script.
Unless you’ve changed your configuration, when the scripts finish you’ll have one Fuel Master Node VM and three slave VMs running in VirtualBox.

Next we’ll create the OpenStack cluster itself.
Creating the OpenStack cluster
At this point we’ve installed Fuel, but we haven’t actually deployed the OpenStack cluster itself. To do that, follow these steps:

Point your browser to http://10.20.0.2:8000/ and log in as an administrator using “admin” as your password (or the address and credentials you set in config.sh).

Once you’ve logged into the Fuel Master Node, you can deploy the OpenStack cloud and begin to explore it.

Click New OpenStack Environment.

Choose a name for your OpenStack Cloud and click Next:

Don’t change anything on the Compute tab, just click Next:

Don’t change anything on the Networking Setup tab, just click Next:

Don’t change anything on the Storage Backends tab, just click Next:

On the Additional Services tab tick the “Install Murano” checkbox and click Next:

On the Finish tab click Create:

From here you’ll see the cluster’s Dashboard.  Click Add Nodes.

Here you can see that the launch script automatically created three VirtualBox VMs, and that Fuel has automatically discovered them:

The next step is to assign roles to your nodes. In this tutorial you need at least two nodes:

The Controller Node – This node manages all of the operations within an OpenStack environment and provides an external API.
The Compute Node – This node provides processing resources to accommodate virtual machine workloads; it creates, manages and terminates VM instances. The VMs, or instances, that you create in Murano run on the compute nodes.
Assign a controller role to a node with 2GB RAM.

Click Apply Changes and follow the same steps to add a 1 GB compute node. The last node will not be needed in our case, so you can remove it and give its hardware resources to the other nodes later if you like.
Leave all of the other settings at their default values, but before you deploy, you will want to check your networking to make sure everything is configured properly.  (Fuel configures networking automatically, but it’s always good to check.)  Click the Networks tab, then Connectivity Check in the left-hand pane. Click Verify Networks and wait a few moments.

Go to the Dashboard tab and click Deploy Changes to deploy your OpenStack Cloud.

When Fuel has finished, you can log into the Horizon UI (http://172.16.0.3/horizon by default), or you can click the link on the Dashboard tab. (You can also go to the Health Check tab and run tests to ensure that your OpenStack cloud was deployed properly.)

Log into Horizon using the credentials admin/admin (unless you changed them in the Fuel Settings tab).

As you can see by the Applications tab at the bottom of the left-hand pane, the Murano Application Catalog has been installed.
Tomorrow we’ll talk about creating an application you can deploy with it.
Quelle: Mirantis

Introducing InfraKit, an open source toolkit for creating and managing declarative, self-healing infrastructure

Written by Bill Farner and David Chung
Docker’s mission is to build tools of mass innovation, starting with a programmable layer for the Internet that enables developers and IT operations teams to build and run distributed applications. As part of this mission, we have always endeavored to contribute software plumbing toolkits back to the community, following the UNIX philosophy of building small loosely coupled tools that are created to simply do one thing well. As Docker adoption has grown from 0 to 6 billion pulls, we have worked to address the needs of a growing and diverse set of distributed systems users. This work has led to the creation of many infrastructure plumbing components that have been contributed back to the community.

It started in 2014 with libcontainer and libnetwork. In 2015 we created runC and co-founded OCI with an industry-wide set of partners to provide a standard for container runtimes, a reference implementation based on libcontainer, and notary, which provides the basis for Docker Content Trust. From there we added containerd, a daemon to control runC, built for performance and density. Docker Engine was refactored so that Docker 1.11 is built on top of containerd and runC, providing benefits such as the ability to upgrade Docker Engine without restarting containers. In May 2016 at OSCON, we open sourced HyperKit, VPNKit and DataKit, the underlying components that enable us to deeply integrate Docker for Mac and Windows with the native Operating System. Most recently, in June, we unveiled SwarmKit, a toolkit for scheduling tasks and the basis for swarm mode, the built-in orchestration feature in Docker 1.12.
With SwarmKit, Docker introduced a declarative management toolkit for orchestrating containers. Today, we are doing the same for infrastructure. We are excited to announce InfraKit, a declarative management toolkit for orchestrating infrastructure. Solomon Hykes open sourced it today during his keynote address at LinuxCon Europe. You can find the source code at https://github.com/docker/infrakit
 
InfraKit Origins
Back in June at DockerCon, we introduced Docker for AWS and Azure beta to simplify the IT operations experience in setting up Docker and to optimally leverage the native capabilities of the respective cloud environment. To do this, Docker provided deep integrations into these platforms’ capabilities for storage, networking and load balancing.
In the diagram below, the architecture for these versions includes platform-specific network and storage plugins, but also a new component specific to infrastructure management.
While working on Docker for AWS and Azure, we realized the need for a standard way to create and manage infrastructure state that was portable across any type of infrastructure, from different cloud providers to on-prem.  One challenge is that each vendor has differentiated IP invested in how they handle certain aspects of their cloud infrastructure. It is not enough to just provision five servers; what IT ops teams need is a simple and consistent way to declare the number of servers, what size they should be, and what sort of base software configuration is required.  And in the case of server failures (especially unplanned), that sudden change needs to be reconciled against the desired state to ensure that any required servers are re-provisioned with the necessary configuration. We started InfraKit to solve these problems and to provide the ability to create a self-healing infrastructure for distributed systems.
 
InfraKit Internals
InfraKit breaks infrastructure automation down into simple, pluggable components for declarative infrastructure state, active monitoring and automatic reconciliation of that state. These components work together to actively ensure the infrastructure state matches the user’s specifications. InfraKit emphasizes primitives for building self-healing infrastructure but can also be used passively like conventional tools.
InfraKit at the core consists of a set of collaborating, active processes. These components are called plugins and different plugins can be written to meet different needs. These plugins are active controllers that can look at current infrastructure state and take action when the state diverges from user specification.
Initially, these plugins are implemented as servers listening on unix sockets and communicate using HTTP. By nature, the plugin interface definitions are language agnostic so it’s possible to implement a plugin in a language other than Go. Plugins can be packaged and deployed differently, such as with Docker containers.
Plugins are the active components that provide the behavior for the primitives that InfraKit supports. InfraKit supports these primitives: groups, instances, and flavors. They are active components running as plugins.
Groups
When managing infrastructure like computing clusters, groups make a good abstraction, and working with groups is easier than managing individual instances. For example, a group can be made up of a collection of machines as individual instances. The machines in a group can have identical configurations (replicas, or so-called “cattle”). They can also have slightly different configurations and properties like identity, ordering, and persistent storage (as members of a quorum, or so-called “pets”).
Instances
Instances are members of a group. An instance plugin manages some physical resource instances. It knows only about individual instances and nothing about groups. An instance is technically defined by the plugin, and need not be a physical machine at all.  As part of the toolkit, we have included examples of instance plugins for Vagrant and Terraform. These examples show that it’s easy to develop plugins.  They are also examples of how InfraKit can play well with existing system management tools while extending their capabilities with active management.  We envision more plugins in the future, for example plugins for AWS and Azure.
Flavors
Flavors help distinguish members of one group from another by describing how these members should be treated. A flavor plugin can be thought of as defining what runs on an Instance. It is responsible for configuring the physical instance and for providing health-check in an application-aware way.  It is also what gives the member instances properties like identity and ordering when they require special handling.  Examples of flavor plugins include plain servers, Zookeeper ensemble members, and Docker swarm mode managers.
By separating provisioning of physical instances and configuration of applications into Instance and Flavor plugins, application vendors can directly develop a Flavor plugin, for example, MySQL, that can work with a wide array of instance plugins.
Active Monitoring and Automatic Reconciliation
The active self-healing aspect of InfraKit sets it apart from existing infrastructure management solutions, and we hope it will help our industry build more resilient and self-healing systems. The InfraKit plugins themselves continuously monitor at the group, instance and flavor level for any drift in configuration and automatically correct it without any manual intervention.

The group plugin checks on the size, overall health of the group and decides on strategies for updating.
The instance plugin monitors for the physical presence of resources.
The flavor plugin can make additional determination beyond presence of the resource. For example the swarm mode flavor plugin would check not only that a swarm member node is up, but that the node is also a member of the cluster.  This provides an application-specific meaning to a node’s “health.”

This active monitoring and automatic reconciliation brings a new level of reliability for distributed systems.
The diagram below shows an example of how InfraKit can be used. There are three groups defined; one for a set of stateless cattle instances, one for a set of stateful and uniquely named pet instances and one defined for the Infrakit manager instances themselves. Each group will be monitored for their declared infrastructure state and reconciled independently of the other groups.  For example, if one of the nodes (blue and yellow) in the cattle group goes down, a new one will be started to maintain the desired size.  When the leader host (M2) running InfraKit goes down, a new leader will be elected (from the standby M1 and M3). This new leader will go into action by starting up a new member to join the quorum to ensure availability and desired size of the group.

InfraKit, Docker and Community
InfraKit was born out of our engineering efforts around Docker for AWS and Azure and future versions will see further integration of InfraKit into Docker and those environments, continuing the path building Docker with a set of reusable components.
As the diagram below shows, Docker Engine is already made up of a number of infrastructure plumbing components mentioned earlier.  The components are not only available separately to the community, but integrated together as the Docker Engine.  In a future release, InfraKit will also become part of the Docker Engine.
With community participation, we aim to evolve InfraKit into exciting new areas beyond managing nodes in a cluster.  There’s much work ahead of us to build this into a cohesive framework for managing infrastructure resources, physical, virtual or containerized, from cluster nodes to networks to load balancers and storage volumes.
We are excited to open source InfraKit and invite the community to participate in this project:

Help define and implement new and interesting plugins
Instance plugins to support different infrastructure providers
Flavor plugins to support a variety of systems like etcd or mysql clusters
Group controller plugins like metrics-driven auto scaling and more
Help define interfaces and implement new infrastructure resource types for things like load balancers, networks and storage volume provisioners

Check out the InfraKit repository README for more info, a quick tutorial, and to start experimenting, from plain files to Terraform integration to building a Zookeeper ensemble.  Have a look, explore, and send us a PR or open an issue with your ideas!


More Resources:

Check out all the Infrastructure Plumbing projects
Sign up for Docker for AWS or Docker for Azure
Try Docker today

The post Introducing InfraKit, an open source toolkit for creating and managing declarative, self-healing infrastructure appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Develop Cloud Applications for OpenStack on Murano, Part 1: What is Murano, and why do I need it?

The post Develop Cloud Applications for OpenStack on Murano, Part 1: What is Murano, and why do I need it? appeared first on Mirantis | The Pure Play OpenStack Company.
So many apps, so little time.
Developing applications for the cloud can be a complicated process; you need to think about resources, placement, scheduling, creating virtual machines, networking... or do you?  The OpenStack Murano project makes it possible for you to create an application without having to worry about directly doing any of that.  Instead, you can create your application, package it with instructions, and let Murano do the rest.
In other words, Murano lets you much more easily distribute your applications: users just have to click a few buttons to use them.
Every day this week we’re going to look at the process of creating OpenStack Murano apps so that you can make your life easier and get your work out there for people to use without having to beg an administrator to install it for them.
We’ll cover the following topics:

Day 1: What is Murano, and why do I need it?
In this article, we&8217;ll talk about what Murano is, who it helps, and how. We&8217;ll also start with the basic concepts you need to understand and let you know what you&8217;ll need for the rest of the series.
Day 2:  Creating the development environment
In this article, we&8217;ll look at deploying an OpenStack cluster with Murano so that you&8217;ve got the framework to work with.
Day 3:  The application, part 1:  Understanding Plone deployment
In our example, we&8217;ll show you how to use Murano to easily deploy the Plone enterprise CMS; in this article, we&8217;ll go over what Murano will actually have to do to install it.
Day 4:  The application, part 2:  Creating the Murano App
Next we&8217;ll go ahead and create the actual Murano App that will deploy Plone.
Day 5:  Uploading and troubleshooting the app
Now that we&8217;ve created the Plone Murano App, we&8217;ll go ahead and add it to the application catalog so that users can deploy it. We&8217;ll also look at some common issues and how to solve them.

Interested in seeing more? We'll be showing you how to automate Plone deployments for OpenStack at Boston Plone, October 17-23, 2016.
Before you start
Before you get started, let's make sure you're ready to go.
What you should know
Before we start, let's get the lay of the land. There's really not that much you need to know before building a Murano app, but it helps if you are familiar with the following concepts:

Virtualization: Wikipedia says that "Hardware virtualization or platform virtualization refers to the creation of a virtual machine that acts like a real computer with an operating system." Perhaps that's an oversimplification, but it'll work for us here. For this series, it helps to have an understanding of virtualization fundamentals, as well as experience in the creation, configuration and deployment of virtual machines, and the creation and restoration of VM snapshots.
OpenStack: OpenStack is, of course, a platform that helps to orchestrate and manage these virtual resources for you; Murano is a project that runs on OpenStack.
UNIX-like OS fundamentals: It also helps to understand the command line, basic commands, and the structure of Unix-like systems. If you are not familiar with the UNIX command line, you might want to work through a Linux shell tutorial first.
SSH: It helps to know how to generate and manage multiple SSH keys, and how to connect to a remote host via SSH using SSH keys.
Networks: Finally, although you don't need to be a networking expert, it is useful if you are familiar with these concepts: IP, CIDR, port, VPN, DNS, DHCP, and NAT.

If you are not familiar with these concepts, don't worry; you will be able to learn more about them as we move forward.
What you should have
In order to run the software we'll be talking about, your environment must meet certain prerequisites. You'll need a 64-bit host operating system with:

At least 8 GB RAM
300 GB of free disk space. You don't necessarily need 300 GB of real free disk space, as storage is allocated on demand; if you are going to deploy a lightweight application, even 128 GB may be enough. It depends on your application's requirements. In the case of Plone, the recommendation is 40 MB per deployed site.
Virtualization enabled in BIOS
Internet access

What is OpenStack Murano?
Imagine you're a cloud user. You just want to get things done. You don't care about all of the details; you just want the functionality that you need.
Murano is an OpenStack project that provides an application catalog, like the App Store for iOS or Google Play for Android. Murano lets you easily browse for the cloud applications you need by name or category, and then enables you to rapidly deploy them to the cloud with just a few clicks.
For example, if you want a web server, rather than having to create a VM, find the software, deploy it, manage IP addresses and ports, and so on, Murano enables you to simply choose a web server application, name it, and go; Murano does the rest of the work.
Murano also makes it possible to easily deploy applications with multiple components.  For example, what if you didn't just want a web server, but a WordPress application, which includes a web server, a database, and a web application? A pre-existing WordPress Murano app would make it possible for you to simply choose the app, specify a few parameters, and go.  (In fact, later in this series we'll look at creating an app for an even more complex CMS, Plone.)
Because it's so straightforward to deploy the applications, users can do it themselves, rather than relying on administrators.
Moreover, not only does Murano let users and administrators easily deploy complex cloud applications, it also manages the complete application lifecycle: automatically scaling clusters up and down, providing self-healing, and more.
Murano’s main end users are:

Independent cloud users, who can use Murano to easily find and deploy applications themselves.
Cloud Service Owners, who can use Murano to save time when deploying and configuring applications to multiple instances or when deploying complex distributed applications with many dependent applications and services.
Developers, who can use Murano to easily deploy and redeploy on-demand applications, often without involving cloud administrators, for their own purposes (for example, hosting a web site, or developing and testing applications). They can also use Murano to make their applications available to other end users.

In short, Murano turns application deployment and management into a simple process that can be performed by administrators and users of all levels. It does this by encapsulating all of the deployment logic and dependencies for the application into a Murano App, which is a single zip file with a specific structure. You just need to upload it to your cloud and it's ready.
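To make that "specific structure" a little more concrete, here is a rough sketch of what the root of a minimal package zip might contain, centered on its manifest.yaml. The application name (com.example.plone.Plone) and the field values below are illustrative assumptions for this series, not canonical values:

```yaml
# manifest.yaml lives at the root of the package zip, alongside:
#   Classes/plone.yaml   # MuranoPL class containing the deployment logic
#   UI/ui.yaml           # definition of the input form shown in the dashboard
#   logo.png             # optional icon displayed in the catalog
Format: 1.0
Type: Application
FullName: com.example.plone.Plone   # unique package identifier (assumed name)
Name: Plone
Description: |
  Deploys the Plone CMS onto a freshly provisioned VM.
Author: Example Author
Tags: [CMS, Web]
Classes:
  com.example.plone.Plone: plone.yaml   # maps the class name to its file under Classes/
```

Once zipped, a package like this can be uploaded through the dashboard or the CLI, and Murano takes it from there; we'll build a real version of this package later in the series.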
Why should I create a Murano app?
OK, so now that we know what a Murano app is, why should we create one?  Well, ask yourself these questions:

Do I want to spend less time deploying my applications?
Do I want my users to spend less time (and aggravation) deploying my applications?
Do I want my employees to spend more time actually getting work done and less time struggling with software deployment?

(Do you notice a theme here?)
There are also reasons for creating Murano Apps that aren't necessarily related to saving time or being more efficient:

You can make it easier for users to find your application by publishing it to the OpenStack Community Application Catalog, which provides access to a whole ecosystem of people across fast-growing OpenStack markets around the world. (Take a look at how large that ecosystem is by exploring the OpenStack user stories.)
You can develop your app as a robust and re-usable solution in your private OpenStack cloud to avoid error-prone manual work.

All you need to do to make these things possible is to develop a Murano App for your own application.
Where we go from here
OK, so now we know what a Murano App is and why you'd want to create one. Join us tomorrow to find out how to create the OpenStack and developer environment you'll need to make it work.
And let us know in the comments what you'd like to see out of this series!
 
Source: Mirantis

Mirantis at EDGE 2016 – Unlocked Private Clouds on IBM Power8

The post Mirantis at EDGE 2016 – Unlocked Private Clouds on IBM Power8 appeared first on Mirantis | The Pure Play OpenStack Company.
On September 22, Mirantis' Senior Technical Director, Greg Elkinbard, spoke at IBM's Edge 2016 IT infrastructure conference in Las Vegas. His short talk described Mirantis' mission: to create clouds using OpenStack and Kubernetes under a "Build, Operate, Transfer" model. He enumerated some of the benefits Mirantis customers like Volkswagen are gaining from their large-scale clouds, including more-engaged developers, faster release cycles, platform delivery times reduced from months to hours, and significantly lower costs.
Greg wrapped up the session with a progress report on IBM and Mirantis' recent collaboration to produce a reference architecture for compute node placement on IBM Power8 systems: a solution aimed at lowering costs and raising performance for database and similar demanding workloads. Mirantis is also validating Murano applications and other methods for deploying a wide range of apps on IBM Power hardware, including important container orchestration frameworks, NFV apps, Big Data tools, web servers and proxies, popular databases and developer toolchain elements.

Mirantis IBM Partner Page: https://www.mirantis.com/partners/ibm/
For more on IBM Power8 servers, please visit http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?htmlfid=POB03046USEN

Source: Mirantis

The Colorful History of Mirantis Swag

The post The Colorful History of Mirantis Swag appeared first on Mirantis | The Pure Play OpenStack Company.
As many of you have probably noticed by now, Mirantis likes to get creative in its participation in the OpenStack and open source communities. For the upcoming OpenStack Summit in Barcelona, we have a special booth design that pays homage to many of our past designs. We're also asking summit attendees to play a game and find some of these designs within our booth, so we thought it would be a great opportunity to collect them all here in a retrospective that you can use as a reference at the summit. We hope you enjoyed them as much as we did, and we're looking forward to showing a few new designs in Barcelona.

Ever since our early days as OpenStack pioneers, we've cultivated an iconoclastic and unconventional culture that is still going strong today. We're also not afraid to poke fun at ourselves. Paying "homage" to our company's Russian origins, and playing off Cold War stereotypes of Russia, the Mirantis Bear made its first appearance in one of our earliest promotional designs.

 

As one of the leading contributors to the OpenStack source code, Mirantis has also started a number of projects within the "Big Tent". One of those was Sahara, and the Sahara Elephant character made an appearance on a t-shirt design.

 

While OpenStack deployment is still not easy, several years ago it was considerably more difficult. Mirantis parodied this complexity in an Ikea-inspired t-shirt design that remains one of our most popular designs.

 

Capitalizing on the popularity of "Keep Calm" memes from a couple of years ago, Mirantis made a t-shirt that hints at the "calming influence" of OpenStack deployments using Fuel.

 

To celebrate with our neighbors to the North and their love of ice hockey, Mirantis made a maple leaf logo to wear proudly on our hockey jerseys at the summit in Vancouver.

 

As one of the early innovators with the Murano project, Mirantis was instrumental in launching the OpenStack Community App Catalog, and produced this Zelda-inspired app "inventory" t-shirt design.

 

Having established ourselves as the leading Pure Play OpenStack company, at the summit in Paris we "highlighted" the purity of our OpenStack distribution in several tongue-in-cheek designs.

 

Underscoring our commitment to open, vendor-agnostic OpenStack, at the summit in Tokyo we launched Megarantis, our mechanized defender of Pure Play OpenStack.

 

As Mirantis and the OpenStack community continued to invest in Fuel as the leading purpose-built OpenStack installer, our lovable "sick cloud" made its first appearance on a special edition t-shirt for Fuel design session participants.

 

As a tribute to the ongoing success of OpenStack and its consistent semi-annual releases with alphabetical names, Mirantis created unique beer labels for each release and distributed them on stickers in our "OpenStack Bar" at the summit in Austin.

 

For over 200 successful enterprise deployments, we've proudly served Mirantis OpenStack, the King of Distros. In Austin, this design was included in the beer label set of stickers.

 

To complement our OpenStack Bar's "urban" theme, our booth staff in Austin were outfitted in OpenStack graffiti tag hats and t-shirts with a popular design based on the Run-D.M.C. logo.

 

As the highlight of our OpenStack Bar, and arguably our design team's tour-de-force, this booth backdrop generated nearly universal praise for its originality and visual impact. A 10-foot tall reproduction was printed for an interior wall at Mirantis HQ.

 

Source: Mirantis

Your Docker agenda for the month of October

From webinars to workshops, meetups to conference talks, check out our list of events that are coming up in October!

Online
Oct 13: Docker for Windows Server 2016 by Michael Friis
Oct 18: Docker Datacenter Demo by Moni Sallama and Chris Hines.
 
Official Docker Training Course
View the full schedule of instructor led training courses here!
Introduction to Docker: This is a two-day, on-site or classroom-based training course which introduces you to the Docker platform and takes you through installing, integrating, and running it in your working environment.
Oct 11-12: Introduction to Docker with Xebia – Paris, France
Oct 19-20: Introduction to Docker with Contino – London, United Kingdom
Oct 24-25: Introduction to Docker with AKRA – Krakow, Poland
 
Docker Administration and Operations: The Docker Administration and Operations course consists of both the Introduction to Docker course, followed by the Advanced Docker Topics course, held over four consecutive days.
Oct 3-6: Docker Administration and Operations with Azca – Madrid, Spain
Oct 11-15: Docker Administration and Operations with TREEPTIK – Paris, France
Oct 18-21: Docker Administration and Operations with Vizuri – Raleigh, NC
Oct 18-22: Docker Administration and Operations with TREEPTIK – Aix en Provence, France
Oct 24-27: Docker Administration and Operations with AKRA – Krakow, Poland
Oct 31-Nov 3: Docker Administration and Operations by Luis Herrera, Docker Captain – Lisboa, Portugal
 
Advanced Docker Operations: This two-day course is designed to help new and experienced systems administrators learn to use Docker, covering the Docker daemon, security, Docker Machine, Swarm, and Compose.
Oct 10-11: Advanced Docker Operations with Ben Wootton, Docker Captain – London, UK
Oct 26-27: Advanced Docker Operations with AKRA – Krakow, Poland
 
North America & Latin America
Oct 5th: DOCKER MEETUP AT MELTMEDIA – Tempe, AZ
The speaker, @leodotcloud, will discuss the background and present ecosystem of the Container Network Interface (CNI) for containers.
Oct 6th: DOCKER MEETUP AT RACKSPACE – Austin, TX
Jeff Lindsay will give a preview of his Container Days talk, covering the different components of a cluster manager and what you should pay attention to if you really want to build your own cluster management solution.
Oct 11th: DOCKER MEETUP AT REPLICATED – Los Angeles, CA
Marc Campbell will share some best practices of using Docker in production, starting with using Content Trust and signed images (including the internals of how Content Trust is built), and then discussing a Continuous Integration/Delivery workflow that can reliably and securely deliver and run Docker containers in any environment.
Oct 12th: DOCKER MEETUP IN BATON ROUGE – Baton Rouge, LA
This Docker meetup will be hosted by Brandon Willmott of the local VMware User Group.
Oct 12th: DOCKER MEETUP AT TUNE – Seattle, WA
Join this meetup to hear talks from Nick Thompson from TUNE, Avi Cavali from Shippable, and DJ Enriquez from OpenMail. Wes McNamee, a winner of the Docker 1.12 Hackathon, will also be presenting his project Swarm-CI. This is not to be missed!
Oct 13th: DOCKER MEETUP AT CAPITAL ALE HOUSE – Richmond, VA
Scott Cochran, Master Software Engineer at Capital One, will be talking about his journey in adopting Docker containers to solve business problems and the things he learned along the way.
Oct 17th: DOCKER MEETUP AT BRAINTREE – Chicago, IL
Tsvi Korren, director of technical services at Aqua, is going to present a talk entitled "Docker Container Application Security Deep Dive", where he will discuss how to integrate compliance and security checks into your pipeline and how to produce a secure, verifiable image.
Oct 18th: DOCKER MEETUP AT THE INNEVATION CENTER – Las Vegas, NV
Using the Docker volume plugin with external container storage allows data to be persisted, and enables per-container volume management and high availability for stateful apps. Join this informative meetup with Gou Rao, CTO and co-founder of Portworx, to discuss best practices for managing stateful containerized applications.
Oct 18th: DOCKER MEETUP AT WILDBIT – Philadelphia, PA
Ben Grissinger, Solutions Engineer at Docker, will discuss Docker Swarm! He will cover the requirements for using swarm mode and take a peek at what we can expect from Docker in the near future regarding swarm mode. Last but not least, he will give a demo of swarm mode in action, using a visualizer tool to display what is taking place in the swarm cluster.
Oct 18th: DOCKER MEETUP AT SANTANDER – Sao Paulo, Brazil
Join Docker São Paulo for their 8th meetup. Get in touch if you would like to submit a talk.
Oct 29th: DOCKER MEETUP AT CI&T – Campinas, Brazil
Save the date for the first Docker Campinas meetup. More details to follow soon.
 
Europe
Oct 4th: LINUXCON EUROPE / CONTAINERCON EU – Berlin, Germany
We had such a great time attending and speaking at LinuxCon and ContainerCon North America that we are doing it again next week in Berlin – only bigger and better this time! Make sure to come visit us at our booth and check out the awesome Docker sessions we have lined up.
Oct 4th: THE INCREDIBLE AUTOMATION DAY (TIAD) PARIS – Paris, France
Roberto Hashioka from Docker will share how to build a powerful real-time data processing pipeline and visualization solution using Docker Machine and Compose, Kafka, Cassandra, and Spark in 5 steps.
Oct 4th: DOCKER MEETUP IN COPENHAGEN – Copenhagen, Denmark
Learn to be a DevOps engineer – a workshop for beginners.
Oct 5th: WEERT SOFTWARE DEVELOPMENT MEETUP – Weert, Netherlands
Kabisa will host a Docker workshop. The workshop is intended for people who are interested in Docker. Over the last year you have heard and read a lot about Docker. "Our workshop is a next step for you to gain some hands-on experience."
Oct 6th: DOCKER MEETUP AT ZANOX – Berlin, Germany
Patrick Chanezon: What's new with Docker, covering Docker announcements from the past 6 months, with a demo of the latest and greatest Docker products for dev and ops.
Oct 6th: TECH UNPLUGGED – Amsterdam, The Netherlands
Docker Captain Nigel Poulton is presenting on container security at @techunplugged in Amsterdam.
Oct 11th: DOCKER MEETUP AT MONDAY CONSULTING GMBH – Hamburg, Germany
Tom Hutter has prepared material about: aliases and bash-completion, Dockerfile, docker-compose, bind mounts (accessing folders outside the build root), supervisord, firewalls (iptables), and housekeeping.
Oct 11th: London Dev Community Meetup – London, United Kingdom
Building Microservices with Docker.
Oct 12th: GOTO LONDON – London, United Kingdom
GOTO London will give you the opportunity to talk with people across all the different disciplines of software development! Join Docker Captain Adrian Mouat as he talks about Docker.
Oct 13th: DOCKER MEETUP AT YNOV BORDEAUX – Bordeaux, France
David Gageot from Docker will be presenting.
Oct 15th: DOCKER MEETUP AT BKM – Istanbul, Turkey
The event will be run by Derya SEZEN and Huseyin BABAL, and there will be cool topics about Docker with real-life best practices, plus some challenges for you. Do not forget to bring your laptops with you.
Oct 15th: DOCKER MEETUP AT BUCHAREST TECH HUB – Bucharest, Romania
Welcome to the second workshop of the free Docker 101 Workshop Meetups!
This is going to be a 5h+ workshop, so be prepared! This workshop is an introduction to the world of Docker containers. It provides an overview of what exactly Docker is and how it can benefit both developers looking to build applications quickly and IT teams looking to manage the IT environment.
Oct 17th: OSCON LONDON – London, UK
Hear the latest about the Docker project from Patrick Chanezon.
Oct 18th: DOCKER MEETUP AT TRADESHIFT – Copenhagen, Denmark
We are going to talk about Continuous Integration and Continuous Deployment. Why is that important, and why should you care? CI/CD, as it is abbreviated, is not only about the technology; it is also about how you can improve your team with new tools that help you deliver features faster with fewer errors.
Oct 18th: DOCKER MEETUP AT HORTONWORKS BUDAPEST – Budapest, Hungary
This Meetup will focus on the new features of Docker 1.12.
Oct 26th: DOCKER MEETUP AT DIE MOBILIAR – Zürich, Switzerland
We are happy to announce the 11th Docker Switzerland meetup. Talks include an introduction to SwarmKit by Michael Müller from Container Solutions.
Oct 26th: DOCKER MEETUP AT BENTOXBOX – Verona, Italy
Join us for our first meetup! Docker Captain Lorenzo Fontana, DevOps Expert at Kiratech, will be joining us!
 
APAC
Oct 18th: DOCKER MEETUP AT DIMENSION DATA – Sydney, Australia
"Docker inside out: reverse engineering Docker" by Anthony Shaw, Group Director, Innovation and Technical Development at Dimension Data. Summary: In this talk Anthony will explain how Docker works by reverse engineering the core concepts and illustrating the technology by building a Docker clone live during the talk.
Oct 18th: DOCKER MEETUP IN MELBOURNE – Melbourne, Australia
Continuous Integration & Deployment for Docker Workloads on Azure Container Services. Presenter: Ken Thompson (OSS TSP, Microsoft).
Oct 18th: DOCKER MEETUP IN SINGAPORE – Singapore
Docker for AWS (Vincent de Smet), with a demo on using Docker Machine with a remote host by Sergey Shishkin.
Oct 22nd: DOCKER CLUSTERING WITH TECH NEXT MEETUP – Pune, India
Dockerize a multi-container data crunching app.
 
The post Your Docker agenda for the month of October appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Your Guide to LinuxCon and ContainerCon Europe

Hey Dockers! We had such a great time attending and speaking at LinuxCon and ContainerCon North America that we are doing it again next week in Berlin – only bigger and better this time! Make sure to come visit us at our booth and check out the awesome Docker sessions we have lined up:
Keynote!
Solomon Hykes, Docker’s Founder and CTO, will kick off LinuxCon with the first keynote at 9:25. If you aren’t joining us in Berlin, you can live stream his and the other keynotes by registering here.
Sessions
Tuesday October 4th:
11:15 – 12:05 Docker Captain Adrian Mouat will deliver a comparison of orchestration tools including Docker Swarm, Mesos/Marathon and Kubernetes.
 
12:15 – 1:05 Patrick Chanezon and David Chung from Docker's technical team, along with Docker Captain and maintainer Phil Estes, will demonstrate how to build distributed systems without Docker, using Docker plumbing projects including RunC, containerd, swarmkit, hyperkit, vpnkit, and datakit.
 
2:30 – 3:20 Docker's Mike Goelzer will introduce the audience to Docker services in Getting Started with Docker Services, explaining what they are and how to use them to deploy multi-tier applications. Mike will also cover load balancing, service discovery, scaling, security, deployment models, and common network topologies.
 
3:30 – 4:20 Stephen Day, Docker Technical Staff, will introduce SwarmKit: Docker's Simplified Model for Complex Orchestration. Stephen will dive into the model-driven design and demonstrate how the components fit together to build a user-friendly orchestration system designed to handle modern applications.
 
3:30 – 4:20 Docker's Paul Novarese will dive into user namespace and seccomp support in Docker Engine, covering new features that respectively allow users to run containers without elevated privileges and provide different containment methods.
 
3:30 – 4:20 Docker Captain Laura Frank will show how to use Docker Engine, Registry, and Compose to quickly and efficiently test software in her session: Building Efficient Parallel Testing Platforms with Docker.
 
Wednesday October 5th:
2:30 – 3:20 Docker Captain Phil Estes goes into detail on why companies are choosing to use containers because of their security – not in spite of it. In How Secure is your Container? A Docker Engine Security Update, Phil will demonstrate recent additions to the Docker Engine in 2016, such as user namespaces and seccomp, and how they continue to enable better container security and isolation.
 
3:40 – 4:30 Aaron Lehmann, Docker Technical Staff, will cover Docker Orchestration: Beyond the Basics and discuss best practices for running a cluster using Docker Engine's orchestration features – from getting started to keeping a cluster performant, secure, and reliable.
 
4:40 – 5:30 Docker's Riyaz Faizullabhoy and Lily Guo will deliver When The Going Gets Tough, Get TUF Going! The Update Framework (TUF) helps developers secure new or existing software update systems. In this session, you will learn about the attacks that TUF protects against and how it does so in a usable manner.
 
Thursday October 6th:
10:50 – 11:40 Docker Technical Staff Drew Erny will explain the mechanisms used in the core Docker Engine orchestration platform to tolerate failures of services and machines, from cluster state replication and leader election to the container re-scheduling logic when a host goes down, in his session Orchestrating Linux Containers while Tolerating Failures.
11:50 – 12:40 Docker's Amir Chaudhry will explain Unikernels: When you Should and When you Shouldn't, to help you weigh the pros and cons of using unikernels and decide when it may be appropriate to consider a library OS for your next project.
18:45: Docker Berlin meetup: Patrick Chanezon: What's new with Docker, covering Docker announcements from the past 6 months, with a demo of the latest and greatest Docker products for dev and ops.
Friday October 7th:
9:00am – 12:00 pm Docker Captain Neependra Khare will lead a Tutorial on Comparing Container Orchestration Tools.
1:00 pm – 5:00 pm In this 3 hour tutorial, Jerome Petazzoni will teach attendees how to Orchestrate Containers in Production at Scale with Docker Swarm.
 
In addition to our Docker talks, we have an amazing Docker Berlin meetup lined up just for you on Thursday, October 6th. The meetup kicks off with Patrick Chanezon, a member of technical staff at Docker, who will cover Docker announcements from the past 6 months and demo the latest and greatest Docker products for dev and ops. Then Paul J. Adams, Engineering Manager at Crate.io, will demonstrate how easy it is to set up and manage a Crate database cluster using Docker Engine and swarm mode.
 
The post Your Guide to LinuxCon and ContainerCon Europe appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

OpenStack Developer Mailing List Digest September 24-30

Candidate Proposals for TC are now open

Candidate proposals for the Technical Committee (9 positions) are open and will remain open until 2016-10-01, 23:45 UTC.
Candidates must submit a text file to the openstack/election repository [1].
Candidates for the Technical Committee can be any foundation individual member, except the seven TC members who were elected for a one year seat in April [2].
The election will be held from October 3rd through 23:45 October 8th.
The electorate are foundation individual members that are committers to one of the official projects [3] over the Mitaka-Newton timeframe (September 5, 2015 00:00 UTC to September 4, 2016 23:59 UTC).
Current accepted candidates [4]
Full thread

Release countdown for week R-0, 3-7 October

Focus: Final release week. Most project teams should be preparing for the summit in Barcelona.
General notes:

Release management team will tag the final Newton release on October 6th.

Project teams don't have to do anything. The release management team will re-tag the commit used in the most recent release candidate listed in openstack/releases.

Projects not following the milestone model will not be re-tagged.
Cycle-trailing projects will be skipped until the trailing deadline.

Release actions

Projects not following the milestone-based release model that want stable/newton branches created should talk to the release team about their needs. Unbranched projects include:

cloudkitty
fuel
monasca
openstackansible
senlin
solum
tripleo

Important dates:

Newton final release: October 6th
Newton cycle-trailing deadline: October 20th
Ocata Design Summit: October 24-28

Full thread

Removal of Security and OpenStackSalt Project Teams From the Big Tent (cont.)

The change to remove Astara from the big tent was approved by the TC [4].
The TC has appointed Piet Kruithof as PTL of the UX team [5].
Based on the thread discussion [6] and engagements of the team, the Security project team will be kept as is and Rob Clark continuing as PTL [7].
The OpenStackSalt team did not produce any deliverable within the Newton cycle. The removal was approved by the current Salt team PTL and the TC [8].
Full thread

 
[1] – http://governance.openstack.org/election/-to-submit-your-candidacy
[2] – https://wiki.openstack.org/wiki/TC_Elections_April_2016#
[3] – http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml?id=sept-2016-elections
[4] – https://review.openstack.org/#/c/376609/
[5] – http://eavesdrop.openstack.org/meetings/tc/2016/tc.2016-09-27-20.01.html
[6] – http://lists.openstack.org/pipermail/openstack-dev/2016-September/thread.html#
[7] – http://eavesdrop.openstack.org/meetings/tc/2016/tc.2016-09-27-20.01.html
[8] – https://review.openstack.org/#/c/377906/
Source: openstack.org