Helping PTG attendees and other developers get to the OpenStack Summit

Although the OpenStack design events have changed, developers and operators still have a critical perspective to bring to the OpenStack Summits. At the PTG, a common whisper heard in the hallways was, "I really want to be at the Summit, but my [boss/HR/approver] doesn't understand why I should be there." To help you out, we took our original "Dear Boss" letter and made a few edits for the PTG crowd. If you're a contributor or developer who wasn't able to attend the PTG, with a few edits, this letter can also work for you. (Not great with words? Foundation wordsmith Anne can help you out: anne at openstack.org)
 
Dear [Boss],
 
I would like to attend the OpenStack Summit in Boston, May 8-11, 2017. At the Pike Project Team Gathering (PTG) in Atlanta, I was able to learn more about the new development event model for OpenStack. In the past I attended the Summit to participate in the Design Summit, which combined feedback and planning with the design and development work of creating OpenStack releases. One challenge was that the Design Summit did not leave enough time for “head down” work within upstream project teams (some teams ended up traveling to team-specific mid-cycle sprints to compensate for that). At the Pike PTG, we were able to kickstart the Pike cycle development, working heads down for a full week. We made great progress on both single-project and OpenStack-wide goals, which will improve the software for all users, including our organization.
 
Originally, I, like many other devs, was under the impression that we no longer needed to attend the OpenStack Summit. However, after a week at the PTG, I see that I have a valuable role to play at the Summit’s “Forum” component. The Forum is where I can gather direct feedback and requirements from operators and users, and express my opinion, and our organization’s, about OpenStack’s future direction. The Forum will let me engage with other groups facing similar challenges, project goals and solutions.
 
While our original intent may have been to send me only to the PTG, I would strongly encourage us to reconsider. The Summit is still an integral part of the OpenStack design process, and I think my attendance is beneficial to both my professional development and our organization. Because of my participation in the PTG, I received a free pass to the Summit, which I must redeem by March 14.
 
Thank you for considering my request.
[Your Name]
Source: openstack.org

InfraKit and Docker Swarm Mode: A Fault-Tolerant and Self-Healing Cluster

Back in October 2016, we released InfraKit, an open source toolkit for creating and managing declarative, self-healing infrastructure. This is the second in a two-part series that dives more deeply into the internals of InfraKit.
Introduction
In the first installment of this two-part series about the internals of InfraKit, we presented InfraKit’s design, architecture, and approach to high availability. We also discussed how it can be combined with other systems to give distributed computing clusters self-healing and self-managing properties. In this installment, we present an example of leveraging Docker Engine in Swarm Mode to achieve high availability for InfraKit, which in turn enhances the Docker Swarm cluster by making it self-healing.
Docker Swarm Mode and InfraKit
One of the key architectural features of Docker in Swarm Mode is the manager quorum powered by SwarmKit.  The manager quorum stores information about the cluster, and the consistency of information is achieved through consensus via the Raft consensus algorithm, which is also at the heart of other systems like Etcd. This guide gives an overview of the architecture of Docker Swarm Mode and how the manager quorum maintains the state of the cluster.
One aspect of the cluster state maintained by the quorum is node membership: which nodes are in the cluster, which are managers and which are workers, and their statuses. The Raft consensus algorithm gives us guarantees about the cluster’s behavior in the face of failure, and the fault tolerance of the cluster is related to the number of manager nodes in the quorum. For example, a Docker Swarm with three managers can tolerate one node outage, planned or unplanned, while a quorum of five managers can tolerate outages of up to two members, possibly one planned and one unplanned.
The Raft quorum makes the Docker Swarm cluster fault tolerant; however, it cannot fix itself.  When the quorum experiences outage of manager nodes, manual steps are needed to troubleshoot and restore the cluster.  These procedures require the operator to update or restore the quorum’s topology by demoting and removing old nodes from the quorum and joining new manager nodes when replacements are brought online.  
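For reference, that manual recovery usually amounts to a handful of Docker CLI commands run from a surviving manager. The node name below is a placeholder, and the exact sequence depends on the state the failed manager was left in:
docker node demote <failed-node>        # demote the lost manager if it is still listed as a manager
docker node rm --force <failed-node>    # drop it from the node list
docker swarm join-token manager         # print the join command for a replacement manager
# ...then run the printed 'docker swarm join' command on the new instance.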
While these administration tasks are easy via the Docker command line interface, InfraKit can automate this and make the cluster self-healing.  As described in our last post, InfraKit can be deployed in a highly available manner, with multiple replicas running and only one active master.  In this configuration, the InfraKit replicas can accept external input to determine which replica is the active master.  This makes it easy to integrate InfraKit with Docker in Swarm Mode: by running InfraKit on each manager node of the Swarm and by detecting the leadership changes in the Raft quorum via standard Docker API, InfraKit achieves the same fault-tolerance as the Swarm cluster. In turn, InfraKit’s monitoring and infrastructure orchestration capabilities, when there’s an outage, can automatically restore the quorum, making the cluster self-healing.
Example: A Docker Swarm with InfraKit on AWS
To illustrate this idea, we created a Cloudformation template that will bootstrap and create a cluster of Docker in Swarm Mode managed by InfraKit on AWS.  There are a couple of ways to run this: you can clone the InfraKit examples repo and upload the template, or you can use this URL to launch the stack in the Cloudformation console.
Please note that this Cloudformation script is for demonstration purposes only and may not represent best practices. However, technical users are encouraged to experiment and customize it to suit their purposes. A few things to note about this Cloudformation template:

As a demo, only a few regions are supported: us-west-1 (Northern California), us-west-2 (Oregon), us-east-1 (Northern Virginia), and eu-central-1 (Frankfurt).
It takes the cluster size (number of nodes), SSH key, and instance sizes as the primary user input when launching the stack.
There are options for installing the latest Docker Engine on a base Ubuntu 16.04 AMI or using images on which we have pre-installed Docker and published for this demonstration.
It bootstraps the networking environment by creating a VPC, a gateway and routes, a subnet, and a security group.
It creates an IAM role for InfraKit’s AWS instance plugin to describe and create EC2 instances.
It creates a single bootstrap EC2 instance and three EBS volumes (more on this later).  The bootstrap instance is attached to one of the volumes and will be the first leader of the Swarm.  The entire Swarm cluster will grow from this seed, as driven by InfraKit.

With the elements above, this Cloudformation script has everything needed to boot up an InfraKit-managed Docker in Swarm Mode cluster of N nodes (with 3 managers and N-3 workers).
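If you prefer the AWS CLI over the console, launching the stack looks roughly like the following sketch. The ClusterSize parameter matches the Ref used in the template excerpt later in this post; the template URL, key name, and KeyName parameter name are placeholders that you would replace with the values from the examples repo:
aws cloudformation create-stack \
  --stack-name mystack \
  --template-url https://s3.amazonaws.com/<bucket>/<infrakit-swarm-template>.json \
  --parameters ParameterKey=ClusterSize,ParameterValue=4 \
               ParameterKey=KeyName,ParameterValue=<your-ssh-key> \
  --capabilities CAPABILITY_IAM    # required because the template creates an IAM role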
About EBS Volumes and Auto-Scaling Groups
The use of EBS volumes in our example demonstrates an alternative approach to managing Docker Swarm Mode managers.  Instead of relying on manually updating the quorum topology by removing and then adding new manager nodes to replace crashed instances, we use EBS volumes attached to the manager instances and mounted at /var/lib/docker for durable state that survives past the life of an instance.  As soon as the volume of a terminated manager node is attached to a new replacement EC2 instance, we can carry the cluster state forward quickly because there are far fewer state changes to catch up on.  This approach is attractive for large clusters running many nodes and services, where the entirety of the cluster state may take a long time to replicate to a brand new manager that has just joined the Swarm.
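Conceptually, the volume hand-off that InfraKit automates here boils down to something like the following sketch (the volume ID, instance ID and device name are illustrative placeholders):
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
  --instance-id i-0abcdef1234567890 --device /dev/xvdf
# then, on the replacement instance, before starting Docker:
mount /dev/xvdf /var/lib/docker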
The use of persistent volumes in this example highlights InfraKit’s philosophy of running stateful services on immutable infrastructure:

Use compute instances for just the processing cores; they can come and go.
Keep state on persistent volumes that can survive when compute instances don’t.
The orchestrator has the responsibility to maintain members in a group identified by fixed logical IDs.  In this case these are the private IP addresses of the Swarm managers.
The pairing of logical ID (IP address) and state (on volume) needs to be maintained.

This brings up a related implementation detail: why not use the Auto-Scaling Group implementations that already exist?  First, auto-scaling group implementations vary from one cloud provider to the next, if they are available at all.  Second, most auto-scalers are designed to manage cattle, where individual instances in a group are identical to one another.  This is clearly not the case for the Swarm managers:

The managers have identity as resources (via their IP addresses).
As infrastructure resources, members of the group know about each other via membership in this stable set of IDs.
The managers identified by these IP addresses have state that needs to be detached and reattached across instance lifetimes.  The pairing must be maintained.

Current auto-scaling group implementations focus on managing identical instances in a group.  New instances are launched with assigned IP addresses that don’t match the expectations of the group, and volumes from failed instances in an auto-scaling group don’t carry over to the new instance.  It is possible to work around these limitations with sweat and conviction; InfraKit, through its support for allocation, logical IDs and attachments, supports this use case natively.
Bootstrapping InfraKit and the Swarm
So far, the Cloudformation template implements what we called ‘bootstrapping’, or the process of creating the minimal set of resources to jumpstart an InfraKit managed cluster.  With the creation of the networking environment and the first “seed” EC2 instance, InfraKit has the requisite resources to take over and complete provisioning of the cluster to match the user’s specification of N nodes (with 3 managers and N-3 workers).   Here is an outline of the process:
When the single “seed” EC2 instance boots up, a single line of code is executed in the UserData (aka cloudinit), in Cloudformation JSON:
"docker run --rm ",{"Ref":"InfrakitCore"}," infrakit template --url ",
{"Ref":"InfrakitConfigRoot"}, "/boot.sh",
" --global /cluster/name=", {"Ref":"AWS::StackName"},
" --global /cluster/swarm/size=", {"Ref":"ClusterSize"},
" --global /provider/image/hasDocker=yes",
" --global /infrakit/config/root=", {"Ref":"InfrakitConfigRoot"},
" --global /infrakit/docker/image=", {"Ref":"InfrakitCore"},
" --global /infrakit/instance/docker/image=", {"Ref":"InfrakitInstancePlugin"},
" --global /infrakit/metadata/docker/image=", {"Ref":"InfrakitMetadataPlugin"},
" --global /infrakit/metadata/configURL=", {"Ref":"MetadataExportTemplate"},
" | tee /var/lib/infrakit.boot | sh \n"
Here, we are running InfraKit packaged in a Docker image, and most of this Cloudformation statement references the Parameters (e.g. “InfrakitCore” and “ClusterSize”) defined at the beginning of the template.  Using the parameter values in the stack template, this translates to a single statement like the following, which executes during boot-up of the instance:
docker run --rm infrakit/devbundle:0.4.1 infrakit template
--url https://infrakit.github.io/examples/swarm/boot.sh
--global /cluster/name=mystack
--global /cluster/swarm/size=4 # many more …
| tee /var/lib/infrakit.boot | sh # tee just makes a copy on disk

This single statement marks the hand-off from Cloudformation to InfraKit.  When the seed instance starts up (and installs Docker, if it is not already part of the AMI), the InfraKit container is run to execute the InfraKit template command.  The template command takes a URL as the source of the template (e.g. https://infrakit.github.io/examples/swarm/boot.sh, or a local file with a URL like file://) and a set of pre-conditions (the --global variables) and renders it.  Through the --global flags, we are able to pass in the set of parameters entered by the user when launching the Cloudformation stack. This allows InfraKit to use Cloudformation as the authentication and user interface for configuring the cluster.
InfraKit uses templates to simplify complex scripting and configuration tasks.  The templates can be any text that uses {{ }} tags, aka “handlebars” syntax.  Here InfraKit is given a set of input parameters from the Cloudformation template and a URL referencing the boot script.  It then fetches the template and renders a script that is executed to perform the following during boot-up of the instance:
 

Formatting the EBS volume if it’s not already formatted
Stopping Docker if it is currently running, and mounting the volume at /var/lib/docker
Configuring the Docker Engine with the proper labels and restarting it
Starting up an InfraKit metadata plugin that can introspect its environment.  The AWS instance plugin, in v0.4.1, can introspect an environment formed by Cloudformation, as well as use the instance metadata service available on AWS.  InfraKit metadata plugins can export important parameters in a read-only namespace that can be referenced in templates as file-system paths.
Starting the InfraKit containers such as the manager, group, instance, and Swarm flavor plugins
Initializing the Swarm via docker swarm init
Generating a config JSON for InfraKit itself.  This JSON is also rendered by a template (https://github.com/infrakit/examples/blob/v0.4.1/swarm/groups.json) that references environmental parameters like region, availability zone, subnet IDs and security group IDs that are exported by the metadata plugins.
Performing an infrakit manager commit to tell InfraKit to begin managing the cluster

See https://github.com/infrakit/examples/blob/v0.4.1/swarm/boot.sh for details.
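In shell terms, the storage-related portion of that rendered boot script boils down to roughly the following. This is only an illustrative sketch; the device name is an assumption, and the boot.sh linked above is the authoritative version:
# format the EBS volume only if it does not already carry a filesystem
blkid /dev/xvdf || mkfs.ext4 /dev/xvdf
systemctl stop docker
mount /dev/xvdf /var/lib/docker      # keep Docker/Swarm state on the durable volume
systemctl start docker
docker swarm init --advertise-addr 172.31.16.101   # on the first (seed) manager only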
When the InfraKit replica begins running, it notices that the current infrastructure state (of only one node) does not match the user’s specification of 3 managers and N-3 worker nodes.  InfraKit will then drive the infrastructure state toward the user’s specification by creating the rest of the managers and workers to complete the Swarm.
The topics of metadata and templating in InfraKit will be the subjects of future blog posts.  In a nutshell, metadata is information exposed by compatible plugins, organized and accessible in a cluster-wide namespace.  Metadata can be accessed in the InfraKit CLI or in templates with file-like path names.  You can think of this as a cluster-wide read-only sysfs.  The InfraKit template engine, on the other hand, can make use of this data to render complex configuration script files or JSON documents. The template engine supports fetching a collection of templates from a local directory or from a remote site, like the example Github repo, which has been configured to serve up the templates like a static website or S3 bucket.
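A convenient way to get a feel for this templating step is to render the published boot script locally with test values instead of piping it straight into a shell. This is simply a variation of the bootstrap command shown earlier; depending on which globals the template expects, you may need to pass more of the --global flags listed above:
docker run --rm infrakit/devbundle:0.4.1 infrakit template \
  --url https://infrakit.github.io/examples/swarm/boot.sh \
  --global /cluster/name=teststack \
  --global /cluster/swarm/size=4 \
  > boot.rendered.sh    # inspect the rendered script before deciding to run it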
 
Running the Example
You can either fork the examples repo or use this URL to launch the stack in the AWS console.  Here we first bootstrap the Swarm with the Cloudformation template, and then InfraKit takes over and provisions the rest of the cluster.  Finally, we demonstrate fault tolerance and self-healing by terminating the leader manager node in the Swarm to induce a fault and force failover and recovery.
When you launch the stack, you have to answer a few questions:

The size of the cluster.  This script always starts a Swarm with 3 managers, so use a value greater than 3.

The SSH key.

There’s an option to install Docker or use an AMI with Docker pre-installed.  An AMI with Docker pre-installed gives a shorter startup time when InfraKit needs to spin up a replacement instance.

Once you agree and launch the stack, it takes a few minutes for the cluster to come up.  In this case, we start a 4-node cluster.  In the AWS console we can verify that the cluster is fully provisioned by InfraKit:

Note the private IP addresses 172.31.16.101, 172.31.16.102, and 172.31.16.103 are assigned to the Swarm managers, and they are the values in our configuration. In this example the public IP addresses are dynamically assigned: 35.156.207.156 is bound to the manager instance at 172.31.16.101.  
Also, we see that InfraKit has attached the 3 EBS volumes to the manager nodes:

Because InfraKit is configured with the Swarm Flavor plugin, it also made sure that the manager and worker instances successfully joined the Swarm.  To illustrate this, we can log into the manager instances and run docker node ls. As a means to visualize the Swarm membership in real-time, we log into all three manager instances and run
watch -d docker node ls  
The watch command will by default refresh docker node ls every 2 seconds.  This allows us to not only watch the Swarm membership changes in real-time but also check the availability of the Swarm as a whole.

Note that at this time, the leader of the Swarm is, just as we expected, the bootstrap instance, 172.31.16.101.
Let’s make a note of this instance’s public IP address (35.156.207.156), private IP address (172.31.16.101), and its Swarm Node cryptographic identity (qpglaj6egxvl20vuisdbq8klr).  Now, to test fault tolerance and self-healing, let’s terminate this very leader instance.  As soon as this instance is terminated, we would expect the quorum leadership to go to a new node, and consequently, the InfraKit replica running on that node will become the new master.
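If you want to script the failure injection rather than clicking through the EC2 console, terminating the instance from the AWS CLI looks like this (the instance ID is an illustrative placeholder):
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0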

Immediately the screen shows there is an outage:  In the top terminal, the connection to the remote host (172.31.16.101) is lost.  In the second and third terminals below, the Swarm node lists are being updated in real time:

When the 172.31.16.101 instance is terminated, the leadership of the quorum is transferred to another node at IP address 172.31.16.102. Docker Swarm Mode is able to tolerate this failure and continue to function (as seen by the continued functioning of docker node ls on the remaining managers).  However, the Swarm has noticed that the 172.31.16.101 instance is now Down and Unreachable.

As configured, a quorum of 3 managers can tolerate one instance outage.  At this point, the cluster continues operation without interruption.  All your apps running on the Swarm continue to work and you can deploy services as usual.  However, without any automation, the operator needs to intervene at some point and restore the cluster before another outage hits the remaining nodes.
Because this cluster is managed by InfraKit, the replica running on 172.31.16.102 now becomes the master when the same instance assumes leadership of the quorum.  Because InfraKit is tasked to maintain the specification of 3 manager instances with IP addresses 172.31.16.101, 172.31.16.102, and 172.31.16.103, it will take action when it notices 172.31.16.101 is missing.  In order to correct the situation, it will

Create a new instance with the private IP address 172.31.16.101
Attach the EBS volume that was previously associated with the downed instance
Restore the volume, so that Docker Engine and InfraKit start running on that new instance.
Join the new instance to the Swarm.

As seen above, the new instance at private IP 172.31.16.101 now has an ephemeral public IP address of 35.157.163.34, where it was previously 35.156.207.156.  We also see that the EBS volume has been re-attached:

Because the EBS volume is re-attached and mounted at /var/lib/docker on the new instance, and the same IP address is reused, the new instance appears exactly as though the downed instance had been resurrected and has rejoined the cluster.  So as far as the Swarm is concerned, 172.31.16.101 may as well have been subjected to a temporary network partition and has since recovered and rejoined the cluster:

At this point, the cluster has recovered without any manual intervention.  The managers are now showing as healthy, and the quorum lives on!
Conclusion
While this example is only a proof-of-concept, we hope it demonstrates the potential of InfraKit as an active infrastructure orchestrator which can make a distributed computing cluster both fault-tolerant and self-healing.  As these features and capabilities mature and harden, we will incorporate them into Docker products such as Docker Editions for AWS and Azure.
InfraKit is a young project and rapidly evolving, and we are actively testing and building ways to safeguard and automate the operations of large distributed computing clusters.   While this project is being developed in the open, your ideas and feedback can help guide us down the path toward making distributed computing resilient and easy to operate.
Check out the InfraKit repository README for more info and a quick tutorial, and start experimenting: from plain files to Terraform integration to building a ZooKeeper ensemble. Have a look, explore, and join us on Github or online at the Docker Community Slack Channel (infrakit).  Send us a PR, open an issue, or just say hello.  We look forward to hearing from you!
More Resources:

Check out all the Infrastructure Plumbing projects
The InfraKit examples GitHub repo
Sign up for Docker for AWS or Docker for Azure
Try Docker today 


Source: https://blog.docker.com/feed/

Introducing the Docker Certification Program for Infrastructure, Plugins and Containers

In conjunction with the introduction of Docker Enterprise Edition (EE), we are excited to announce the Docker Certification Program and the availability of partner technologies through Docker Store. A vibrant ecosystem is a sign of a healthy platform, and by providing a program that aligns Docker’s commercial platform with the innovation coming from our partners, we are collectively expanding choice for customers investing in the Docker platform.
The Docker Certification Program is designed for both technology partners and enterprise customers to recognize Containers and Plugins that excel in quality, collaborative support and compliance. Docker Certification is aligned to the available Docker EE infrastructure and gives enterprises a trusted way to run more technology in containers with support from both Docker and the publisher. Customers can quickly identify Certified Containers and Plugins by their visible badges and be confident that they were built with best practices and tested to operate smoothly on Docker EE.
Save Your Seat: Webinar on Docker Certified and Store on March 21st.
There are three categories of Docker Certified technology available:

Certified Infrastructure: Includes operating systems and cloud providers for which the Docker platform is integrated, optimized, and tested for certification. Through this, Docker provides a great user experience and preserves application portability.
Certified Container: Independent Software Vendors (ISV) are able to package and distribute their software as containers directly to the end user. These containers are tested, built with Docker recommended best practices, are scanned for vulnerabilities, and are reviewed before posting on Docker Store.
Certified Plugin: Networking and Volume plugins for Docker EE are now available to be packaged and distributed to end users as containers.  These plugin containers are built with Docker recommended best practices, are scanned for vulnerabilities, and must pass an additional suite of API compliance tests before they are reviewed and posted on Docker Store. Apps are portable across different network and storage infrastructure and work with new plugins without recoding.

Docker Certification presents an evolution of the Docker platform from Linux hackers to a broader community of developers and IT ops teams at businesses of all sizes looking to build and deploy apps on Docker for Linux and Windows on any infrastructure. Many components of their enterprise environment will come from third parties and Docker Certified accelerates the adoption of those technologies into Docker environments with assurances and support.
From the Ecosystem to Docker Certified Publisher
The Docker Certified badge is a great way for technology partners to differentiate their solutions to the millions of Docker users out there today. Upon completion of testing, review and posting to Docker Store, these certified listings will display the badge so customers can quickly understand which containers and plugins meet this extra criteria. Docker Store provides a marketplace for publishers to distribute, sell and manage their listings, and for customers to easily browse, evaluate and purchase 3rd party technology as containers.  Customers will be able to manage all subscriptions (Docker products and 3rd party Store content) from a single place.
 


Docker Store is the launch pad for all Docker container-based software, plugins and more. To kick off the program, we have the following Docker Certified technologies available starting today:

AVI Networks AviVantage
Cisco Contiv Network Plugin
Bleemeo Smart Agent
BlobCity DB
Blockbridge Volume Plugin
CodeCov Enterprise
Datadog
Gitlab Enterprise
Hedvig Docker Volume Plugin
HPE Sitescope
Hypergrid HyperCloud Block Storage Volume Plugin 
Kaazing Enterprise Gateway
Koekiebox Fluid
Microsoft winservercore, nanoserver, mssql-server-linux, mssql-server-windows-express, aspnet, dotnet core, iis
NetApp NDVP Volume Plugin
Nexenta Volume Plugin
Nimble Storage Volume Plugin
Nutanix Volume Plugin
Polyverse Microservice Firewall
Portworx PX-Developer
Sysdig Cloud Monitoring Agent
Weaveworks Weave Cloud Agent


Get started today with the latest Docker Community and Enterprise Edition platforms and browse the Docker Store for Certified Containers and Plugins for a great new Docker experience.

Register for the Webinar featuring Docker Certified and Store
Search and browse Certified Containers and Plugins on Docker Store.
Interested in publishing? Apply to be a Docker Publisher Partner.
Learn more about Docker CE and Docker EE

Source: https://blog.docker.com/feed/

Network Deployment Engineer

We are looking for a talented OpenStack Network Deployment Engineer who is willing to work at the intersection of IT and software engineering, is passionate about open source, and is able to design and deploy cloud network infrastructure built on top of open-source components.
Responsibilities:
Plan and deploy networks / SDNs for OpenStack and Kubernetes cloud solutions for our customers
Work with NFV components to deliver end-to-end network solutions for our customers
Extend functionality for OpenStack networking and support developers in a network architecture
Facilitate knowledge transfer to the customers during deployment projects
Work with geographically distributed international teams on technical challenges and process improvements
Contribute to Mirantis’ deployment knowledge base
Continuously improve the set of tooling and technologies
Minimum requirements:
At least 1 year of practical administration or monitoring experience with Linux (RHEL, CentOS, Ubuntu) as a server platform, including experience with the operating system itself as well as with production-level software and hardware; practical experience organizing highly available clusters is also required
At least 3 years of practical administration experience in legacy networks, at a minimum of CCNP level (certification NOT required)
At least 2 years of practical experience with Bash, the conventional Linux administrator's scripting language
Ability to understand and troubleshoot code written in Python
English at an intermediate level
Ability to travel abroad for 3-6 months if needed
Will be a plus:
Practical experience with Python programming
Practical experience with a configuration automation tool (Puppet, Ansible, Salt)
Knowledge and experience of SDN and NFV
CCNP or CCIE certifications (or similar)
Knowledge of OpenStack is a big plus
Knowledge of Juniper Contrail is a big plus
Knowledge of Linux Containers is a big plus
We offer:
A high-energy atmosphere at a young company
The chance to build large-scale, innovative systems for mission-critical use
Collaboration with exceptionally passionate, talented and engaging colleagues
A competitive compensation package with a strong benefits plan
Lots of freedom for creativity and personal growth
DON'T PANIC. JUST BUILD, OPERATE, TRANSFER and APPLY!
Source: Mirantis

What’s new at DockerCon 2017

If you’ve attended multiple DockerCons, you know that the team is always looking for new and exciting programs to improve on the previous editions. Last year, we introduced a ton of new DockerCon programs, including a new Black Belt track, DockerCon scholarships, workshops, and more. This year we’re excited to introduce more DockerCon goodness!
Using Docker and Docker Deep Dive Tracks
In past editions, we received a lot of attendee feedback requesting that we split the Docker, Docker, Docker track into two separate tracks. We’ve heard you, and as a result we are happy to introduce the Using Docker and Docker Deep Dive tracks.
The Using Docker track is for everyone who’s getting started with Docker or wants to better implement Docker in their workflow. Whether you’re a .NET, Java or Node.js developer looking to modernize your applications, or an IT pro who wants to learn about Docker orchestration and application troubleshooting, this track will have specific sessions to get you up to speed with Docker.
The Docker Deep Dive track focuses on the technical details associated with the different components of the Docker platform: advanced orchestration, networking, security, storage, management and plug-ins. The Docker engineering leads will walk you through the best way to build, ship and run distributed applications with Docker as well as give you a hint at what’s on the roadmap.
More Community Theater
Located in the Ecosystem Expo, the Community Theater features cool Docker hacks and lightning talks by various community members on a range of topics. Because this “expo track” was very popular last year and in order to showcase more cool projects and use cases from the community, we’ve decided to add a second community theater! Check out the talks and abstracts from the 30 extra speakers featured in that track.
Adding a third day to the conference!
Repeating top sessions
With all these tracks and awesome sessions, we know that it can be difficult to choose which ones to attend, especially if they are scheduled at the same time. This year, based on your session ratings during the conference, the top 8 sessions will be delivered again on Thursday!
Mentor Summit
Also new this year, we will host a summit for current and aspiring Docker Mentors on Thursday, April 20th. Mentorship can be a fun and rewarding experience, and you don’t need to be an expert in order to mentor someone. Come learn the ins and outs of being an awesome mentor, both in industry and in the Docker community!
Docker Internals Summit
Finally, we’re excited to host a Docker Internals Summit. This is a collaborative event for advanced Docker Operators who are actively maintaining, contributing or generally involved in the design and development of the following Docker open source projects: Infrakit, SwarmKit, Hyperkit, Notary, containerd, runC, libnetwork and underlying technologies TUF, IPVS, Raft, etc.
The goals of the summit are twofold:

Get everyone up to speed with each project’s mission, scope, insights into their architecture, roadmap and integration with other systems.
Drive architecture, design decisions and code contributions through collaboration with project maintainers during the hands-on sessions.

 


Source: https://blog.docker.com/feed/

User Group Newsletter February 2017

Welcome to 2017! We hope you all had a lovely festive season. Here is our first edition of the User Group newsletter for this year.

AMBASSADOR PROGRAM NEWS
2017 sees some new arrivals and departures to our Ambassador program. Read about them here.
 
WELCOME TO OUR NEW USER GROUPS
We have some new user groups which have joined the community.
Bangladesh
Ireland – Cork
Russia – St Petersburg
Phoenix – United States
Romania – Bucharest
We wish them all the best with their OpenStack journey and can’t wait to see what they will achieve!
Looking for your local group? Thinking of starting a user group? Head to the groups portal for more information.

MAY 2017 OPENSTACK SUMMIT
We’re going to Boston for our first summit of 2017!!
You can register and stay updated here.
Consider it your pocket guide for all things Boston summit. Find out about the featured speakers, make your hotel bookings, find your FAQ and read about our travel support program.
 
NEW BOARD OF DIRECTORS
The community has spoken! A new board of directors has been elected for 2017.
Read all about it here. 

MAKE YOUR VOICE HEARD!
Submit your response to the latest OpenStack User Survey!
All data is completely confidential. Submissions close on the 20th of February 2017.
You can complete it here. 

CONTRIBUTING TO UG NEWSLETTER
If you’d like to contribute a news item for next edition, please submit to this etherpad.
Items submitted may be edited down for length, style and suitability.
This newsletter is published on a monthly basis. 
Source: openstack.org

Planning for OpenStack Summit Boston begins

The next OpenStack summit will be held in Boston May 8 through May 11, 2017, and the agenda is in progress.  Mirantis folks, as well as some of our customers, have submitted talks, and we'd like to invite you to take a look, and perhaps to vote to show your support in this process.  The talks include:

From Point and Click to CI/CD: A real world look at accelerating OpenStack deployment, improving sustainability, and painless upgrades! (Bruce Mathews, Ryan Day, Amit Tank (AT&T))

Terraforming the OpenStack Landscape (Mykyta Gubenko)

Virtualized services delivery using SDN/NFV: from end-to-end in a brownfield MSO environment (Bill Coward (Cox Business Services))

Operational automation of elements, api calls, integrations, and other pieces of MSO SDN/NFV cloud (Bill Coward (Cox Business Services))

The final word on Availability Zones (Craig Anderson)

m1.Boaty.McBoatface: The joys of flavor planning by popular vote (Craig Anderson)

Proactive support and Customer care (Anton Tarasov)

OpenStack with SaltStack for complete deployment automation (Ales Komarek)

Resilient RabbitMQ cluster automation with Kubernetes (Alexey Lebedev)

How fast is fast enough? The science behind bottlenecks (Christian Huebner)

Approaches for cloud transformation of Big Data use case (Christian Huebner)

Workload Onboarding and Lifecycle Management with Heat (Florin Stingaciu)

Preventing Nightmares: Data Protection for OpenStack environments (Christian Huebner)

Deploy a Distributed Containerized OpenStack Control Plane Infrastructure (Rama Darbha (Cumulus), Stanley Karunditu)

Saving one cloud at a time with tenant care (Bryan Langston, Holly Bazemore (Comcast), Shilla Saebi (Comcast))

CI/CD in Documentation (Alexandra Settle (Rackspace), Olga Gusarenko)

Kuryr-Kubernetes: The seamless path to adding Pods to your datacenter networking (Antoni Segura Puimedon (RedHat), Irena Berezovsky (Huawei), Ilya Chukhnakov)

Cinder Stands Alone (Scott DAngelo (IBM), Ivan Kolodyazhny, Walter A. Boring IV (IBM))

NVMe-over-Fabrics and Openstack (Tushar Gohad (Intel), Michał Dulko (Intel), Ivan Kolodyazhny)

Episode 2: Log Book: VW Ops team’s adventurous journey to the land of OpenStack – Go Global (Gerd Pruessmann, Tilman Schulz (Volkswagen))

OpenStack: pushing to 5000 nodes and beyond (Dina Belova, Georgy Okrokvertskhov)

Turbo Charged VNFs at 40 gbit/s. Approaches to deliver fast, low latency networking using OpenStack (Greg Elkinbard)

Using Top of the Rack Switch as a fast L2 and L3 Gateway on OpenStack (Greg Elkinbard)

Deploy a Distributed Containerized OpenStack Control Plane Infrastructure (Stanley Karunditu)

While you're in Boston, consider taking a little extra time in Beantown to take advantage of Mirantis Training's special Summit training, which includes a bonus introduction module on the Mirantis Compute Platform (MCP).  You'll arrive at the summit up to speed with the technology, and even, if you pass the exam, with the OCM100 OpenStack certification.  Can't make it to Boston?  You can also take the class live from the comfort of your own home (or office).
Source: Mirantis

IBM and VMware extend their partnership to IBM Business Partners

At IBM InterConnect 2016, IBM and VMware announced a strategic partnership to accelerate enterprise hybrid cloud adoption. Since then, the two companies have jointly provided additional services and solutions for clients to optimize, integrate and extend their VMware environments to the cloud with ease and speed.
This initiative has grown to include more than 1,000 clients and 4,000 IBM service professionals.
And now, in an industry first, IBM and VMware are pleased to announce in conjunction with the PartnerWorld Leadership Conference that they’ve expanded their partnership to IBM Business Partners, providing even more reach to VMware and IBM Cloud users around the world with a portfolio of services that includes planning, architecture, migration, and end-to-end management.  This exclusive agreement enables IBM Business Partners to resell VMware Cloud Foundation and VMware software licenses on IBM Cloud.
VMware Cloud Foundation
This standardized software defined data center solution brings together IBM Bluemix infrastructure with VMware vSphere, Virtual SAN, NSX and SDDC Manager for a seamless hybrid cloud experience.
VMware software licenses
Organizations can deploy a custom VMware environment by combining IBM Bluemix infrastructure and VMware software licenses to fit their unique business requirements.
Another benefit is that IBM Business Partners and their clients can bring their own VMware licenses to the IBM Bluemix infrastructure.
VMware will be joining IBM at InterConnect 2017, 19 – 23 March. Check out the VMware on IBM Cloud sessions.

Source: Thoughts on Cloud

Introduction to Salt and SaltStack

The amazing world of configuration management software is really well populated these days. You may already have looked at Puppet, Chef or Ansible, but today we focus on SaltStack. Simplicity is at its core, without any compromise on speed or scalability. In fact, some users run 10,000 minions or more. In this article, we're going to give you a look at what Salt is and how it works.
Salt architecture
Salt remote execution is built on top of an event bus, which makes it unique. It uses a server-agent communication model where the server is called the salt master and the agents the salt minions.
Salt minions receive commands simultaneously from the master and contain everything required to execute commands locally and report back to the salt master. Communication between master and minions happens over a high-performance data pipe that uses ZeroMQ or raw TCP, and messages are serialized using MessagePack to enable fast and light network traffic. Salt uses public keys for authentication with the master daemon, then uses faster AES encryption for payload communication.
State description is done using YAML, remote execution is possible over a CLI, and programming or extending Salt isn’t a must.
Salt is heavily pluggable; each function can be replaced by a plugin implemented as a Python module. For example, you can replace the data store, the file server, authentication mechanism, even the state representation. So when I said state representation is done using YAML, I’m talking about the Salt default, which can be replaced by JSON, Jinja, Wempy, Mako, or Py Objects. But don’t freak out. Salt comes with default options for all these things, which enables you to jumpstart the system and customize it when the need arises.
Terminology
It's easy to be overwhelmed by the obscure vocabulary that Salt introduces, so here are the main Salt concepts which make it unique.

salt master – sends commands to minions
salt minions – receive commands from the master
execution modules – ad hoc commands
grains – static information about minions
pillar – secure user-defined variables stored on the master and assigned to minions (equivalent to data bags in Chef or Hiera in Puppet)
formulas (states) – representation of a system configuration; a grouping of one or more state files, possibly with pillar data and configuration files or anything else which defines a neat package for a particular application (see the example below)
mine – area on the master where results from minion-executed commands can be stored, such as the IP address of a backend webserver, which can then be used to configure a load balancer
top file – matches formulas and pillar data to minions
runners – modules executed on the master
returners – components that inject minion data into another system
renderers – components that run the template to produce the valid state of configuration files. The default renderer uses Jinja2 syntax and outputs YAML files.
reactor – component that triggers reactions on events
thorium – a new kind of reactor, which is still experimental
beacons – little pieces of code on the minion that listen for events such as server failure or file changes. When a beacon registers one of these events, it informs the master. Reactors are often used to do self-healing.
proxy minions – components that translate the Salt language into device-specific instructions in order to bring the device to the desired state using its API, or over SSH
salt cloud – command to bootstrap cloud nodes
salt ssh – command to run commands on systems without minions

You’ll find a great overview of all of this on the official docs.
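To make a few of these terms concrete, here is a minimal example, to try once you have a master and at least one minion installed as described below, that ties together a formula, the top file and the default Jinja2/YAML renderer: it installs and runs nginx on every minion. The file layout and the nginx package are illustrative choices, not requirements.
mkdir -p /srv/salt/nginx

cat > /srv/salt/nginx/init.sls <<'EOF'
# Jinja is evaluated first, then the result is parsed as YAML
{% set pkg_name = 'nginx' %}
{{ pkg_name }}:
  pkg.installed: []
  service.running:
    - enable: True
    - require:
      - pkg: {{ pkg_name }}
EOF

cat > /srv/salt/top.sls <<'EOF'
base:
  '*':
    - nginx
EOF

salt '*' state.apply    # apply the states mapped by the top file to every minion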
Installation
Salt is built on top of lots of Python modules. Msgpack, YAML, Jinja2, MarkupSafe, ZeroMQ, Tornado, PyCrypto and M2Crypto are all required. To keep your system clean, easily upgradable and to avoid conflicts, the easiest installation workflow is to use system packages.
The Salt packages are operating system specific; in the examples in this article, I'll be using Ubuntu 16.04 [Xenial Xerus]; for other operating systems, consult the salt repo page.  For simplicity's sake, you can install the master and the minion on a single machine, and that's what we'll be doing here.  Later, we'll talk about how you can add additional minions.

To install the master and the minion, execute the following commands:
$ sudo su
# apt-get update
# apt-get upgrade
# apt-get install curl wget
# echo "deb [arch=amd64] http://apt.tcpcloud.eu/nightly xenial tcp-salt" > /etc/apt/sources.list
# wget -O - http://apt.tcpcloud.eu/public.gpg | sudo apt-key add -
# apt-get clean
# apt-get update
# apt-get install -y salt-master salt-minion reclass

Finally, create the  directory where you’ll store your state files.
# mkdir -p /srv/salt

You should now have Salt installed on your system, so check to see if everything looks good:
# salt --version
You should see a result something like this:
salt 2016.3.4 (Boron)

Alternative installations
If you can’t find packages for your distribution, you can rely on Salt Bootstrap, which is an alternative installation method; see the Salt Bootstrap documentation for further details.
Configuration
To finish your configuration, you'll need to execute a few more steps:

If you have firewalls in the way, make sure you open up both port 4505 (the publish port) and 4506 (the return port) to the Salt master to let the minions talk to it.
Now you need to configure your minion to connect to your master. Edit the file /etc/salt/minion.d/minion.conf and change the following lines as indicated below:

# Set the location of the salt master server. If the master server cannot be
# resolved, then the minion will fail to start.
master: localhost

# If multiple masters are specified in the ‘master’ setting, the default behavior
# is to always try to connect to them in the order they are listed. If random_master is
# set to True, the order will be randomized instead. This can be helpful in distributing

# Explicitly declare the id for this minion to use, if left commented the id
# will be the hostname as returned by the python call: socket.getfqdn()
# Since salt uses detached ids it is possible to run multiple minions on the
# same machine but with different ids, this can be useful for salt compute
# clusters.
id: saltstack-m01

# Append a domain to a hostname in the event that it does not exist.  This is
# useful for systems where socket.getfqdn() does not actually result in a
# FQDN (for instance, Solaris).
#append_domain:

As you can see, we're telling the minion where to find the master so it can connect; in this case, it's just localhost, but if that's not the case for you, you'll want to change it.  We've also given this particular minion an id of saltstack-m01; that's a completely arbitrary name, so you can use whatever you want.  Just make sure to substitute it in the examples!
Before you can play around, you'll need to restart the Salt services so they pick up the changes:
# service salt-minion restart
# service salt-master restart

Make sure services are also started at boot time:
# systemctl enable salt-master.service
# systemctl enable salt-minion.service

Before the master can do anything on the minion, the master needs to trust it, so you'll need to accept the corresponding key for each of your minions. First, list the keys:
# salt-key
Accepted Keys:
Denied Keys:
Unaccepted Keys:
saltstack-m01
Rejected Keys:

Before accepting it, you can validate that it looks good. Inspect it on the master:
# salt-key -f saltstack-m01
Unaccepted Keys:
saltstack-m01:  98:f2:e1:9f:b2:b6:0e:fe:cb:70:cd:96:b0:37:51:d0

Then compare it with the minion key:
# salt-call --local key.finger
local:
98:f2:e1:9f:b2:b6:0e:fe:cb:70:cd:96:b0:37:51:d0

It looks the same, so go ahead and accept it:
# salt-key -a saltstack-m01

Repeat this process of installing salt-minion and accepting keys to add new minions to your environment. Consult the documentation for more details regarding the configuration of minions, or, more generally, this documentation for all Salt configuration options.
Remote execution
Now that everything's installed and configured, let's make sure it's actually working. The first, most obvious thing we could do with our master/minion infrastructure is to run a command remotely. For example, we can test whether the minion is alive by using the test.ping command:
# salt 'saltstack-m01' test.ping
saltstack-m01:
   True
As you can see here, we're calling salt, and we're feeding it a specific minion, and a command to run on that minion.  We could, if we wanted to, send this command to more than one minion. For example, we could send it to all minions:
# salt '*' test.ping
saltstack-m01:
   True
In this case, we have only one, but if there were more, salt would cycle through all of them, giving you the appropriate response.
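test.ping is just one of many execution modules. A few more standard modules you could try at this point; these examples are illustrative, and the package and service names are assumptions about what is installed on your minions:
salt '*' grains.items        # show static facts about each minion (OS, IP addresses, and so on)
salt '*' cmd.run 'uptime'    # run an arbitrary shell command on every minion
salt '*' pkg.install htop    # install a package using the minion's package manager
salt '*' service.status ssh  # check whether a service is running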
So that should get you started. Next time, we'll look at some of the more complicated things you can do with Salt.
Source: Mirantis

How to turn APM metrics into better apps

Digital applications are the lifeblood of business today. They are the primary means of interaction with customers. It’s imperative applications are always available, provide optimal performance and deliver exceptional customer experiences. If not, you can probably expect your customers to say goodbye.
Applications run on top of a complex web of software components. Some provide the platform and connectivity needed to deliver services and others move data between devices. IBM provides a robust set of components, including WebSphere Application Server, MQ, IBM Integration Bus, Datapower and more. Each helps deliver numerous applications.
To ensure your applications perform optimally, you should manage the health and welfare of the software components underneath them. IBM Application Performance Management (APM) monitors the performance and availability of your critical IBM applications to identify problems before they impact users, visualize performance bottlenecks and more.
Fine-tune your software with APM metrics
Many people think of monitoring as being alerted about a problem and guided to the source of the issue to fix it. But another motivation for monitoring is to proactively avoid those problems in the first place. By adopting a DevOps methodology, you can take information from monitoring to your developers for fine-tuning to improve application performance. You can gather metrics at short intervals, making it more likely that you’ll spot trends or anomalies that indicate bottlenecks.
APM allows you to monitor the key metrics of your IBM software environment for optimal behavior. But it’s also important to measure the CPU, memory and network utilization to ensure bottlenecks aren’t at the platform level.
To illustrate, here are some of the metrics you can use from APM to tune WebSphere Application Server:

Heap utilization and garbage collection statistics to determine if memory leaks are occurring
Database Connection pools to identify if they are too small to handle the load that is placed on them
Thread pools to determine if they are too small to handle the load
Web services – identify the most-used web services and performance problems, including whether the issue is in code or an underlying resource

Speed resolution with APM metrics
You can also use APM to identify problems and speed up resolution time. It works similarly to fine-tuning, but with more frequent metric gathering and alerting rather than reporting. This is also where IBM APM outshines other solutions, offering quicker troubleshooting and resolution.
Transaction tracking can dramatically improve problem diagnosis by isolating the source. This ensures the issue is routed to the right SME and does not involve others responsible for different areas of the application environment. IBM Operations Analytics – Predictive Insights automatically determines baselines for metrics and will alert you about deviations from that baseline. It can also identify related metrics, which helps pinpoint the cause of problems more quickly.
If you’re using IBM components to build applications, you should consider coupling them with IBM APM’s monitoring designed to help tune those components for optimal performance and quick problem resolution. Result? An optimal customer experience.
Want hands-on experience with IBM APM? Attend IBM InterConnect for countless sessions, labs, and educational opportunities.
Source: Thoughts on Cloud