Global Mentor Week: Thank you Docker Community!

Danke, рақмет сізге, tak, धन्यवाद, cảm ơn bạn, شكرا, mulțumesc, Gracias, merci, asante, ευχαριστώ, thank you community for an incredible Docker Global Mentor Week! From Tokyo to Sao Paulo, Kisumu to Copenhagen and Ottawa to Manila, it was so awesome to see the energy from the community coming together to celebrate and learn about Docker!

Over 7,500 people registered to attend one of the 110 mentor week events across 5 continents! A huge thank you to all the Docker meetup organizers who worked hard to make these special events happen and offer Docker beginners and intermediate users an opportunity to participate in Docker courses.
None of this would have been possible without the support (and expertise!) of the 500+ advanced Docker users who signed up as mentors to help newcomers.
Whether it was mentors helping attendees, newcomers pushing their first image to Docker Hub or attendees mingling and having a good time, everyone came together to make mentor week a success as you can see on social media and the Facebook photo album.
Here are some of our favorite tweets from the meetups:
 

@Docker LearnDocker at Grenoble France 17Nov2016 @HPE_FR pic.twitter.com/8RSxXUWa4k
— Stephane Bureau (@SBUCloud) November 18, 2016

Awesome turnout at tonight's @DockerNYC learndocker event! We will be hosting more of these – keep tabs on meetup: https://t.co/dT99EOs4C9 pic.twitter.com/9lZocCjMPb
— Luisa M. Morales (@luisamariethm) November 18, 2016

And finally… "Tada" Docker Mentor Week learndocker pic.twitter.com/6kzedIoGyB
— Károly Kass (@karolykassjr) November 17, 2016

 
Learn Docker
In case you weren’t able to attend a local event, the five courses are now available to everyone online here: https://training.docker.com/instructor-led-training
Docker for Developers Courses
Developer – Beginner Linux Containers
This tutorial will guide you through the steps involved in setting up your computer, running your first containers, deploying a web application with Docker and running a multi-container voting app with Docker Compose.
Developer – Beginner Windows Containers
This tutorial will walk you through setting up your environment, running basic containers and creating a Docker Compose multi-container application using Windows containers.
Developer – Intermediate (both Linux and Windows)
This tutorial teaches you how to network your containers, how you can manage data inside and between your containers and how to use Docker Cloud to build your image from source and use developer tools and programming languages with Docker.
Docker for Operations Courses
These courses are step-by-step guides where you will build your own Docker cluster and use it to deploy a sample application. We have two options for creating your own cluster.

Using play-with-docker

Play With Docker is a Docker playground that was built by two amazing Docker Captains, Marcos Nils and Jonathan Leibiusky, during the Docker Distributed Systems Summit in Berlin last October.
Play with Docker (aka PWD) gives you the experience of having a free Alpine Linux Virtual Machine in the cloud where you can build and run Docker containers and even create clusters with Docker features like Swarm Mode.
Under the hood, DIND (Docker-in-Docker) is used to give the effect of multiple VMs/PCs.
To get started, go to http://play-with-docker.com/ and click on ADD NEW INSTANCE five times. You will get five "docker-in-docker" containers, all on a private network. These are your five nodes for the workshop!
When the instructions in the slides tell you to "SSH on node X", just go to the tab corresponding to that node.
The nodes are not directly reachable from outside, so when the slides tell you to "connect to the IP address of your node on port XYZ" you will have to use a different method.
We suggest using "supergrok", a container offering an NGINX+ngrok combo to expose your services. To use it, just start the jpetazzo/supergrok image on any of your nodes. The image will output further instructions:
docker run --name supergrok -d jpetazzo/supergrok
docker logs --follow supergrok
The logs of the container will give you a tunnel address and explain how to connect to your exposed services. That's all you need to do!
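Once your five PWD nodes are up, forming the swarm itself follows the standard Engine 1.12 commands. A minimal sketch, assuming the private IP address shown is the one PWD assigned to your first node (it will differ on your instances):

```shell
# On node1: turn on swarm mode (use node1's private IP, shown in its terminal)
docker swarm init --advertise-addr 10.0.0.2

# The init command prints a "docker swarm join --token <token> <ip>:2377" line;
# paste that line into the terminals of node2 through node5 to add them as workers.

# Back on node1: verify that all five nodes have joined the cluster
docker node ls
```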
You can also view this excellent video by Docker Brussels Meetup organizer Nils de Moor who walks you through the steps to build a Docker Swarm cluster in a matter of seconds through the new play-with-docker tool.

 
Note that the instances provided by Play-With-Docker have a short lifespan (a few hours only), so if you want to do the workshop over multiple sessions, you will have to start over each time… or create your own cluster with the option below.

Using Docker Machine to create your own cluster

This method requires a bit more work to get started, but you get a permanent cluster, with less limitations.
You will need Docker Machine (if you have Docker for Mac, Docker for Windows, or the Docker Toolbox, you're all set already). You will also need:

credentials for a cloud provider (e.g. API keys or tokens),
or a local install of VirtualBox or VMware (or anything supported by Docker Machine).

Full instructions are in the prepare-machine subdirectory.
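As an illustration, creating a five-node cluster with the local VirtualBox driver looks roughly like this (the node names are arbitrary, and any driver supported by Docker Machine works the same way):

```shell
# Create five local VirtualBox VMs to act as cluster nodes
for N in 1 2 3 4 5; do
  docker-machine create --driver virtualbox node$N
done

# List the machines and their IP addresses
docker-machine ls

# Point your local Docker client at node1
eval $(docker-machine env node1)
```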
Once you have decided which option to use to create your swarm cluster, you're ready to get started with one of the operations courses below:
Operations – Beginner
The beginner part of the Ops tutorial will teach you how to set up a swarm, how to use it to host your own registry, how to build your app container images and how to deploy and scale a distributed application called Dockercoins.
Operations – Intermediate
From global container scheduling and overlay network troubleshooting to dealing with stateful services and node management, this tutorial will show you how to operate your swarm cluster at scale and take you on a swarm mode deep dive.

Danke, Gracias, Merci, Asante, ευχαριστώ, thank you Docker community for an amazing…Click To Tweet

The post Global Mentor Week: Thank you Docker Community! appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

OpenStack Developer Mailing List Digest November 26 – December 2

Updates

Nova Resource Providers update [2]
Nova blueprints update [16]
OpenStack-Ansible deploy guide live! [6]

The Future of OpenStack Needs You [1]

Need more mentors to help run Upstream Trainings at the summits
Interested in doing an abridged version at smaller more local events
Contact ildikov or diablo_rojo on IRC if interested

New project: Nimble [3]

Interesting chat about bare metal management
The project name is likely to change
(Will this lead to some discussions about whether or not allow some parallel experiments in the OpenStack Big Tent?)

Community goals for Pike [4]

As Ocata is a short cycle it’s time to think about goals for Pike [7]
Or give feedback on what’s already started [8]

Exposing project team's metadata in README files (Cont.) [9]

Amrith agrees with the value of Flavio’s proposal that a short summary would be good for new contributors
Will need a small API that will generate the list of badges

Done- as a part of governance
Just a graphical representation of what’s in the governance repo
Do what you want with the badges in README files

Patches have been pushed to the projects initiating this change

Allowing Teams Based on Vendor-specific Drivers [10]

Option 1: https://review.openstack.org/403834 – Proprietary driver dev is unlevel
Option 2: https://review.openstack.org/403836 – Driver development can be level
Option 3: https://review.openstack.org/403839 – Level playing fields, except drivers
Option 4: https://review.openstack.org/403829 – Establish a new "driver team" concept
Option 5: https://review.openstack.org/403830 – Add resolution requiring teams to accept driver contributions

Thierry prefers this option
One of Flavio’s preferred options

Option 6: https://review.openstack.org/403826 – Add a resolution allowing teams based on vendor-specific drivers

Flavio’s other preferred option

Cirros Images to Change Default Password [11]

New password: gocubsgo
Not ‘cubswin:)’ anymore

Destructive/HA/Fail-over scenarios

Discussion started about adding end-user focused test suites to test OpenStack clusters beyond what's already available in Tempest [12]
Feedback is needed from users and operators on what preferred scenarios they would like to see in the test suite [5]
You can read more in the spec for High Availability testing [13] and the user story describing destructive testing [14], which are both under review

Events discussion [15]

Efforts to remove duplicated functionality from OpenStack in the sense of providing event information to end-users (Zaqar, Aodh)
It is also pointed out that the information in events can be sensitive which needs to be handled carefully

 
[1] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108084.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2016-November/107982.html
[3] http://lists.openstack.org/pipermail/openstack-dev/2016-November/107961.html
[4] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108167.html
[5] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108062.html
[6] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108200.html
[7] https://etherpad.openstack.org/p/community-goals
[8] https://etherpad.openstack.org/p/community-goals-ocata-feedback
[9] http://lists.openstack.org/pipermail/openstack-dev/2016-November/107966.html
[10] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108074.html
[11] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108118.html
[12] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108062.html
[13] https://review.openstack.org/#/c/399618/
[14] https://review.openstack.org/#/c/396142
[15] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108070.html
[16] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108089.html
Quelle: openstack.org

Your Docker Agenda for December 2016

Thank you community for your amazing Global Mentor Week Events last month! In November, the community organized over 110 Docker Global Mentor Week events and more than 8,000 people enrolled in at least one of the courses for 1000+ course completions and counting! The five self-paced courses are now available for everyone free online. Check them out here!
As you gear up for the holidays, make sure to check out all the great events that are scheduled this month in Docker communities all over the world! From webinars to workshops, to conference talks, check out our list of events that are coming up in December.
Official Docker Training Courses
View the full schedule of instructor led training courses here!
 
Introduction to Docker:
This is a two-day, on-site or classroom-based training course which introduces you to the Docker platform and takes you through installing, integrating, and running it in your working environment.
Dec 7-8: Introduction to Docker with AKRA Hamburg City, Germany
 
Docker Administration and Operations:
The Docker Administration and Operations course consists of both the Introduction to Docker course, followed by the Advanced Docker Topics course, held over four consecutive days.
Dec 5-8: Docker Administration and Operations with Amazic – London, United Kingdom
Dec 6-9: Docker Administration and Operations with Vizuri – Atlanta, GA
Dec 12-15: Docker Administration and Operations with Docker Captain, Luis Herrera – Madrid, Spain
Dec 12-15: Docker Administration and Operations with Kiratech – Milan, Italy
Dec 13-16: Docker Administration and Operations with TREEPTIK – Aix en Provence, France
Dec 19-22: Docker Administration and Operations with TREEPTIK – Paris, France
 
Advanced Docker Operations:
This two day course is designed to help new and experienced systems administrators learn to use Docker to control the Docker daemon, security, Docker Machine, Swarm Mode, and Compose.
Dec 7-8: Advanced Docker Operations with Amazic – London, United Kingdom
Dec 15-16: Advanced Docker Operations with Docker Captain, Benjamin Wootton – London, United Kingdom
North America 
Dec 3rd: DOCKER MEETUP AT VISA – Reston, VA
Visa is hosting this month's meetup! A talk entitled "Docker UCP 2.0 and DTR 2.1 GA" by Ben Grissinger (from Docker), followed by "Docker security" by Paul Novarese (from Docker).
Dec 3rd: DOCKER MEETUP IN HAVANA – Havana, Cuba
Join Docker Havana for their 1st ever meetup! Work through the training materials from Docker's Global Mentor Week series!
Dec 4th: GDG DEVFEST 2016 – Los Angeles, CA
Docker's Mano Marks will be keynoting DevFest LA.
Dec 7th: DOCKER MEETUP AT MELTMEDIA – Phoenix, AZ
Join Docker Phoenix for a 'Year in Review and Usage Roundtable'. 2016 was a big year for Docker, let's talk about it!
Dec 13th: DOCKER MEETUP AT TORCHED HOP BREWING – Atlanta, GA
This month we're going to have a social event without a presentation, in combination with the Go and Kubernetes meetups at Torched Hop Brewing. Come hang out and have a drink or food with us!
Dec 13th: DOCKER MEETUP AT GOOGLE – Seattle, WA
Tiffany Jernigan will give a talk on Docker orchestration (Docker Swarm Mode) and metrics collection, and then Tsvi Korren will follow with a talk on securing your container environment.
Dec 14th: DOCKER MEETUP AT PUPPET LABS – Portland, OR
A talk by Nan Liu from Intel entitled 'Trust but verify. Testing Docker containers.'
Dec 14th: DOCKER MEETUP AT DOCKER HQ – San Francisco, CA
Docker is joining forces with the Prometheus meetup group for a holiday mega-meetup with talks on using Docker with Prometheus and OpenTracing. As a special holiday gift we will be giving away a free DockerCon 2017 ticket to one lucky attendee! Don't miss out – RSVP now!
 
Dec 15th: DOCKER MEETUP AT GOGO – Chicago, IL
We will be welcoming Loris Degioanni of Sysdig as he takes us through monitoring containers: the good, the bad, and best practices!
 
Europe
Dec 5th: DEVOPSCON MUNICH – Munich, Germany
Docker Captains Philipp Garbe, Gianluca Arbezzano, Viktor Farcic and Dieter Reuter will all be speaking at DevOpsCon.
Dec 6th: DOCKER MEETUP AT FOO CAFE STOCKHOLM – Stockholm, Sweden
In this session, you’ll learn about the container technology built natively into Windows Server 2016 and how you can reuse your knowledge, skills and tools from Docker on Linux. This session will be a mix of presentations, giving you an overview of the technology, and hands-on experiences, so make sure to bring your laptop.
Dec 6th: D cubed: Decision Trees, Docker and Data Science in the Cloud – London, United Kingdom
Steve Poole, DevOps practitioner (leading a team of engineers on cutting edge DevOps exploration) and a long time IBM Java developer, leader and evangelist, will explain what Docker is, and how it works.
Dec 8th: Docker Meetup at Pentalog Romania – Brasov, Romania
Come for a full overview of DockerCon 2016!
Dec 8th: DOCKER FOR .NET DEVELOPERS AND AZURE MACHINE LEARNING – Copenhagen, Denmark
For this meetup we get a visit from Ben Hall who will talk about Docker for .NET applications, and Barbara Fusińska who will talk about Azure Machine Learning.
Dec 8th: Introduction to Docker for Java Developers – Brussels, Belgium
Join us for the last session of 2016 and discover what Docker has to offer you!
Dec 14th: DOCKER MEETUP AT LA CANTINE NUMERIQUE – Tours, France
What's new in the Docker ecosystem, plus a few more talks on Docker Compose and Swarm Mode.
Dec 15th: Docker Meetup at Stylight HQ – Munich, Germany
Join us for our end of the year holiday meetup! Check event page for more details.
Dec 15th: Docker Meetup at ENSEIRB – Bordeaux, France
Jeremiah Monsinjob and Florian Garcia will talk about Docker on dynamic platforms and microservices.
Dec 16th: Thessaloniki .NET Meetup about Docker – Thessaloniki, Greece
Byron Papadopoulos will cover: what Docker technology is and where it is used; security, scaling and monitoring; the tools we use with Docker (Docker Engine and Docker Compose); container orchestrator engines; Docker in Azure (showing Docker Swarm Mode); and Docker for DevOps and for developers.
Dec 19th: Modern Microservices Architecture using Docker – Herzliyya, Israel
Microservices are all the rage these days. Docker is a tool which makes managing Microservices a whole lot easier. But what do Microservices really mean? What are the best practices of composing your application with Microservices? How can you leverage Docker and the public cloud to help you build a more agile DevOps process? How does the Azure Container Service fit in? Join us in order to find out the answers.
Dec 21st: Docker Meetup at Campus Madrid – Madrid, Spain
Two talks. First talk by Diego Martínez Gil: Dockerized apps running on Windows. Diego will present the new features available in Windows 10 and Windows Server 2016 to run dockerized applications. Second talk by Pablo Chico de Guzmán: Docker 1.13. Pablo will demo some of the features available in Docker 1.13.
 
Asia
Dec 10th: DOCKER MEETUP AT MANGALORE INFOTECH – Mangaluru, India
We are hosting the Mangalore edition of "The Docker Global Mentor Week." Our goal is to provide easy-paced self-learning courses that will take you through the basics of Docker and make you well acquainted with most aspects of application delivery using Docker.
Dec 10th: BIMONTHLY MEETUP 2016 – DOCKER FOR PHP DEVELOPERS – Pune, India
If you are aching to get started with Docker but not sure how, this meetup is the right platform. We will first explain basic Docker concepts: what Docker is, its benefits, images, registries, containers, Dockerfiles, etc., followed by an optional workshop for some hands-on practice.
Dec 12th: DOCKER MEETUP AT MICROSOFT – Singapore, Singapore
Join us for our next meetup event!
Dec 20th: DOCKER MEETUP AT MICROSOFT – Riyadh, Saudi Arabia
Join us for a deep dive into Docker technology and how Microsoft and Docker work together. Learn about Azure IaaS and how to run Docker on Microsoft Azure.
Oceania
Dec 5th: DOCKER MEETUP AT CATALYST IT – Wellington, New Zealand
Join us for our next meetup!
Dec 5th: DOCKER MEETUP AT VERSENT PTY LTD – Melbourne, Australia
Yoav Landman, the CTO of JFrog, will talk to us about how new tools often introduce new paradigms. Yoav will examine the patterns and the anti-patterns for Docker image management, and what impact the new tools have on the battle-proven paradigms of the software development lifecycle.
Dec 13th: Action Cable & Docker – Wellington, New Zealand
Come check out a live demo of adding Docker to a rails app.
Africa
Dec 16th: Docker Meetup at Skylabase Inc. – Buea, Cameroon
Join us for a Docker Study Jam!

Check out the list of Docker events, meetups, workshops and trainings for the month of December!Click To Tweet

The post Your Docker Agenda for December 2016 appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Solutions Delivery Executive

The post Solutions Delivery Executive appeared first on Mirantis | The Pure Play OpenStack Company.
Mirantis, Inc. is looking for an experienced Solutions Delivery Executive to help lead our clients on their journey to the cloud. This highly-visible, senior leadership role within the Mirantis Services organization is a functional peer to the Enterprise Sales executives aligned to our most strategic global accounts.
Your top-level responsibilities will include: overall ownership of the end-to-end service delivery experience, building and executing multi-year client account plans, establishing and maintaining corporate governance, driving cross-functional collaboration and communications with client executives and business stakeholders, and ensuring operational excellence and successful business outcomes for the client.
Candidates considered for this role must have a good mix of strong operational and business skills combined with strategic thought-leadership and a mind for tactical execution. Acting as a liaison between the client and Mirantis worldwide, you should be a strong advocate for the client, but with the goals of sound business judgment and mutually assured success for both parties.
Primary Responsibilities
Lead the global service delivery experience; single point of ownership and accountability for all client service delivery related activities
Build and maintain trusted advisor relationships with influential client decision-makers for the successful adoption and deployment of cloud services and technologies
Work in collaboration with the client Sales team to create and execute multi-year business plans to accelerate the adoption of cloud across the client's business units, exceed revenue goals, and drive client referrals and references
Manage client-level P&L – drive revenue recognition; achieve and/or exceed quarterly PS revenue, cost, utilization and profitability objectives
Ensure client-specific operational, change management and compliance practices are implemented and adhered to; continually seek to improve processes, reduce complexity and drive predictability for clients
Act as escalation lead for all service delivery-related issues that could impact the client relationship
Participate in contract and financial negotiations (MSAs, SOWs, ELAs, T&Cs)
Qualifications
10+ years of experience in an infrastructure/cloud solutions services company, ideally as an executive within a large enterprise IT organization, consulting firm or global systems integration company
Bachelor's degree (Business, Science, Technology, Engineering, Math) or equivalent experience
Analytical decision-making and detail-oriented thinking combined with strong management skills
Demonstrated experience managing large, cross-functional teams within matrix organizations
Superior interpersonal, written, verbal, listening and presentation skills – ability to communicate cross-functionally with the most senior-level executives
Highly organized, able to track multiple concurrent tasks and activities simultaneously; first-hand Change Management and Business Process Mapping experience
History of leading successful business transformations using cloud and related technologies
In-depth knowledge of OpenStack or similar cloud technologies (AWS, Azure, CloudStack)
Ability to travel freely between client sites and Mirantis HQ as needed
What We Offer
Partner with exceptionally passionate, talented and engaging colleagues.
Implement cloud solutions for some of the best-known brands in the industry for use in mission-critical applications.
High-energy atmosphere of a young company, competitive compensation package with strong benefits plan and stock options.
Environment that fosters creativity and personal growth.
Quelle: Mirantis

Senior CI Engineer

The post Senior CI Engineer appeared first on Mirantis | The Pure Play OpenStack Company.
Mirantis is the leading global provider of software and services for OpenStack™, a massively scalable and feature-rich open source cloud operating system. OpenStack is used by hundreds of companies, including AT&T, Cisco, Symantec, NASA, Dell, PayPal and many more.
Mirantis has more experience delivering OpenStack clouds to more customers than any other company in the world. We build the infrastructure that makes OpenStack work. We are proud to serve on the OpenStack Foundation Board and to be one of the top contributors to OpenStack.
Mirantis is looking for a qualified candidate with experience in continuous integration, release engineering, or quality assurance to join our CI Services team, which designs and implements CI/CD pipelines to build and test product artifacts and deliverables of the Mirantis OpenStack distribution.
Responsibilities
Design and implement CI/CD pipelines
Develop a unified CI framework based on existing tools (Zuul, Jenkins Job Builder, Fabric, Gerrit, etc.)
Define and manage test environments required for different types of automated tests
Drive cross-team communications to streamline and unify build and test processes
Track and optimize hardware utilization by CI/CD pipelines
Provide and maintain specifications and documentation for CI systems
Provide support for users of CI systems (developers and QA engineers)
Produce and deliver technical presentations at internal knowledge transfer sessions, public workshops and conferences
Participate in the upstream OpenStack community, working together with the OpenStack Infra team on common CI/CD tools and processes
Required Skills
Linux system administration – package management, services administration, networking, KVM-based virtualization
Scripting with Bash and Python
Experience with DevOps configuration management methodology and tools (Puppet, Ansible)
Ability to describe and document systems design decisions
Familiarity with development workflows – feature design, release cycle, code-review practices
English, both written and spoken
Will Be a Plus
Knowledge of CI tools and frameworks (Jenkins, Buildbot, etc.)
Release engineering experience – branching, versioning, managing security updates
Understanding of release engineering and QA practices of major Linux distributions
Experience in test design and automation
Experience in project management
Involvement in major Open Source communities (developer, package maintainer, etc.)
What We Offer
Challenging tasks, providing room for creativity and initiative
Work in a highly-distributed international team
Work in the Open Source community, contributing patches upstream
Opportunities for career growth and relocation
Business trips for meetups and conferences, including OpenStack Summits
Strong benefits plan
Medical insurance
Quelle: Mirantis

Your Agenda for HPE Discover London 2016

Next week HPE will host more than 10,000 top IT executives, architects, engineers, partners and thought-leaders from across Europe at Discover 2016 London, November 29th – December 1st in London.
Come visit Docker at our booth to learn how Docker's Containers-as-a-Service platform is transforming modern application infrastructures, allowing businesses to benefit from a more agile development environment.
Docker experts will be on hand for in-booth demos, hands-on labs, breakout sessions and Transformation Zone sessions to demonstrate how Docker's infrastructure platform provides businesses with a unifying framework to embrace hybrid infrastructures and optimize resource utilization across legacy and modern Linux and Windows applications.
Not attending Discover London? Don’t miss a thing and “Save the Date” for the live streaming of keynotes and top sessions beginning November 29th at 11:00 GMT and through the duration of the event.

Save the date – General Session Day 1
Save the date – General Session Day 2

Be sure to add these key Docker sessions to your HPE Discover London agenda:
Ongoing: Transformation Zone Hours Show Floor
DEMO315: HPE IT Docker success stories
Supercharge your container deployments on bare metal and VMs by orchestrating large workloads using simple Docker mechanisms. See how the HPE team automated hosting applications using HPE OneView, running Docker containers on bare metal and VMs for deployment and management of traditional R&D tools for build and test.
 
Tuesday, November 29, 2016
10:30 – 11:00 Theater 1
T10749: Pick up the pace with infrastructure optimized for Docker and DevOps
Docker and DevOps can accelerate app development, but what are you doing to accelerate your Docker platform? Improving software release velocity and efficiency requires infrastructure that can keep pace with Docker. During this session, you will receive practical tips on how to quickly spin up and manage Docker DevOps environments. Take advantage of our development experiences and reference architecture best practices to leverage the HPE Hyper Converged platform so that you will have more time to focus on developing your apps.
11:30 – 12:00 Discussion Forum 6:
DF11870: Meet the expert, tips to accelerate your IT with composable infrastructure, containers, virtualization and microservices
Spend time with a Hewlett Packard Enterprise infrastructure automation expert to explore new ways to accelerate delivery of applications and IT services. Learn how to bring infrastructure as code to bare metal with HPE OneView and composable infrastructure. Find out how containers can provide an ideal environment for service deployment. Get best-practice guidance for using a microservices architecture to create small services with light use of resources, coupled with fast deployment and easy portability.
12:30 – 13:30 Capital Suite, Rm 16:
BB11866: Developer-friendly IT accelerates adoption of continuous integration and delivery to drive greater value
Are your marching orders, “Everything as code and automate everything?” If your answer is, “Yes,” then come to this Breakout Session to hear Hewlett Packard Enterprise experts share real-world use cases that address compliance at velocity, configuration drift and bare-metal provisioning. During this session, you’ll also gain best-practice insight on patch management, containers and workflow optimization strategies.

Tuesday, November 29, 2016 12:30 – 13:00 Theater 11
T11827: HPE and Docker, accelerating modern application architectures in the hybrid IT world
Businesses require a hybrid infrastructure that supports continuous delivery of new applications and services. With HPE and Docker, businesses are now able to build and run distributed applications in a hybrid IT environment faster and more cost-effectively. This partnership provides the flexibility of a true hybrid solution, with your own container and Docker apps that can run in a public or private cloud. Join us to see how HPE and Docker provide a comprehensive solution that spans the app lifecycle, and helps cut cost and reduce complexity.
 
Wednesday, November 30, 2016 11:00 – 12:00 Innovation Theater 10
SL11392: The future belongs to the fast, transform your business with IT Operations Management
Join Tony Sumpster, Senior Vice President and General Manager of Hewlett Packard Enterprise Software, along with a panel of customers, to discuss the challenges and opportunities in digital transformation. You’ll also hear about how IT operations can accelerate your transition to the digital enterprise. Transformation is driven by business needs, and innovations in hybrid cloud, machine learning and collaboration can help you realize rapid time to value and time to market, while also managing risk.
 
Wednesday, November 30, 2016 11:30 – 12:00 Connect Community
DF12121: Connect Tech Forum, from automation to Docker and Azure, a practical guide to build your cloud journey
Businesses of all sizes are feeling the need for infrastructure that’s faster and lighter on its feet. The C-Suite is looking for IT to be a catalyst for change, not a constraint. Your business is looking for public-cloud-like convenience and speed, things you, as IT Director, will be hard-pressed to provide with incremental changes. Through a company assessment, you will learn how to start your cloud journey and discover the route to Hybrid IT through practical use cases.
Read more about Docker for the Virtualization Admin in our eBook by Docker Technical Evangelist Mike Coleman, and to learn more about Docker's enterprise platform, Docker Datacenter, watch the on-demand webinar What's New in Docker Datacenter with Engine 1.12.
To start learning more about Docker and HPE, check out these additional resources:

Go to: www.docker.com/hpe
Sign up for a free 30 day trial
Read the Containers as a Service white paper


The post Your Agenda for HPE Discover London 2016 appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

What’s New in Docker Datacenter with Engine 1.12 – Demo Q&A

Last week we announced the latest release of Docker Datacenter (DDC) with Engine 1.12 integration, which includes Universal Control Plane (UCP) 2.0 and Docker Trusted Registry (DTR) 2.1. Now, IT operations teams can manage and secure their environment more effectively, and developers can self-service, selecting from an even more secure image base. Docker Datacenter with Engine 1.12 boasts improvements in orchestration and operations, provides end-to-end security (image signing, policy enforcement, mutual TLS encryption for clusters), enables Docker service deployments and includes an enhanced UI. Customers also have backwards compatibility for Swarm 1.x and Compose.

 
To showcase some of these new features, we hosted a webinar where we provided an overview of Docker Datacenter, talked through some of the new features and showed a live demo of the solution. Watch the recording of the webinar below:
 

 
We hosted a Q&A session at the end of the webinar and have included some of the most common audience questions we received.
Audience Q&A
Can I still run and deploy my applications built with a previous Docker Engine version?
Yes. UCP 2.0 automatically sets up and manages a Swarm cluster alongside the native built-in swarm-mode cluster from Engine 1.12 on the same set of nodes. This means that when you use “docker run” commands, they are handled by the Swarm 1.x part of the UCP cluster, which ensures full backwards compatibility with your existing Docker applications. The best part is, no additional product installation or configuration is required by the admin to make this work. In addition to this, previous versions of the Docker Engine (1.10 and 1.11) will still be supported as part of Docker Datacenter.
 
Will Docker Compose continue to work in Docker Datacenter? I.e., deploy containers to multiple hosts in a DDC cluster, as opposed to only on a single host?
In UCP, “docker-compose up” will deploy to multiple hosts on the cluster. This is different from an open-source Engine 1.12 swarm-mode, where it will only deploy on a single node, because UCP offers full backwards compatibility (using the parallel Swarm 1.x cluster, as described above). Note that you will have to use Compose v2 in order to deploy across multiple hosts, as Compose v1 format does not support multi-host deployment.
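For illustration, a minimal Compose v2 file that UCP can schedule across multiple hosts might look like this (service and image names are placeholders, not from the original post):

```yaml
version: "2"
services:
  vote:
    image: example/voting-app:latest   # hypothetical application image
    ports:
      - "5000:80"
  redis:
    image: redis:3.2
```

Running `docker-compose up -d` with a UCP client bundle sourced would then place these services on the cluster rather than on a single host.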
 
For the built-in HTTP routing mesh, which external LBs are supported? Nginx, HAProxy, AWS EC2 Elastic LB? Does this work similar to what Interlock was doing?
The experimental HTTP routing mesh (HRM) feature is focused on providing correct routing between hostnames and services, so it will work across any of the above load balancers, as long as you configure them appropriately for this purpose.
The HRM and Interlock LB/SD feature sets provide similar capabilities but for different application architectures. HRM is used for swarm-mode based services, while Interlock is used for non-swarm-mode “docker run” containers.
For more information on these features, check out our blog post on DDC networking updates and the updated reference architecture linked within that post.
 
Will the HTTP routing mesh feature be available also in the open source free version of the docker engine?
Docker Engine 1.12 (open-source) contains the TCP-based routing mesh, which allows you to route based on ports. Docker Datacenter also provides the HTTP routing mesh feature which extends the open-source feature to allow you to route based on hostnames.
 
What is “docker service” used for and why?
A Docker service is a construct within swarm-mode that consists of a group of containers (“tasks”) from the same image. Services follow a declarative model that allows you to specify the desired state of your application: you specify how many instances of the container image you want, and swarm-mode ensures that those instances are deployed on the cluster. If any of those instances go down (e.g. because a host is lost), swarm-mode automatically reschedules them elsewhere on the cluster. The service also provides integrated load balancing and service discovery for its container instances.
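As a sketch of that declarative model, a service might be created and rescaled from the CLI like so (the name, image, and replica counts are illustrative):

```
# ask swarm-mode for three replicas of the nginx image
docker service create --name web --replicas 3 -p 8080:80 nginx:1.10

# change the desired state; swarm-mode reconciles the cluster to five replicas
docker service scale web=5
```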
 
What type of monitoring of host health is built in?
The new swarm-mode in Docker Engine 1.12 uses a RAFT-based consensus algorithm to determine the health of nodes in the cluster. Each swarm manager sends regular pings to workers (and to other managers) in order to determine their current status. If the pings return an unhealthy response or do not meet the latency minimums for the cluster (configurable in the settings), then that node might be declared unhealthy and containers will be scheduled elsewhere in the cluster. In Universal Control Plane (UCP), the status of nodes is described in detail in the web UI on the dashboard and Nodes pages.
 
What kind of role based access controls (RBAC) are available for networks and load balancing features?
The previous version of UCP (1.1) had the ability to provide granular label-based access control for containers. We’ve since expanded that granular access control to include both services and networks, so you can use labels to define which networks a team of users has access to, and what level of access that team has. The load balancing features make use of both services and networks so will be access controlled through those resources.
 
Is it possible to enforce a policy so that production runs only containers from DTR that are signed?
Yes, you can accomplish this using a combination of features in the new version of Docker Datacenter. DTR 2.1 contains a Notary server (Docker Content Trust), which allows you to provide your users cryptographic keys to sign images. UCP 2.0 has the ability to run only signed images on the cluster. Furthermore, you can use “delegations” to define which teams must sign an image prior to it being deployed; for example, in a low-security cluster you could allow any UCP user to sign, whereas in production you might require signatures from the Release Management, Security, and Developer teams. Learn more about running images with Docker Content Trust here.
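As a hedged client-side sketch, enabling Docker Content Trust and pushing a signed image could look like the following (the registry and repository names are hypothetical):

```
# sign everything pushed from this shell
export DOCKER_CONTENT_TRUST=1

# the first push prompts for root and repository signing keys
docker push dtr.example.com/engineering/app:1.0
```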
 
As a very large enterprise doing various POCs for Docker, one of the big questions is vulnerabilities in the open source code that can be part of the base images. Is there anything that Docker is developing to counter this?
Earlier this year, we announced Docker Security Scanning, which provides a detailed security profile of Docker images for risk management and software compliance purposes. Docker Security Scanning is currently available for private repositories in Docker Cloud and coming soon to Docker Datacenter.
 
Is there any possibility to trace which user is accessing a container?
Yes, you can use audit logging. To provide auditing of your cluster, you can utilize UCP’s Remote Log Server feature. This allows you to send system debug information to a syslog server of your choice, including a full list of all commands run against the UCP cluster. This would include information such as which user attempted to deploy or access a container.
 
What checks does the new DDC have for potential noisy neighbor container scenarios, or for rogue containers that can potentially hog the underlying infrastructure?
One of the ways you can provide a check against noisy neighbor scenarios is through the use of runtime resource constraints. These allow you to set limits on how much system resources (e.g. cpu, memory) that any given container is allowed to use. These are configurable within the UI.
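For example, the same constraints can be applied from the CLI when starting a container (the image name and limit values are illustrative):

```
# cap the container at 512 MB of RAM and a reduced relative CPU share
docker run -d --memory 512m --cpu-shares 512 example/worker:latest
```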
 
Do you have a trial license for Docker Datacenter?
We offer a free 30-day trial of Docker Datacenter. Trial software can be accessed at www.docker.com/trial.
 
For pricing, is a node defined as a host machine or a container?
The subscription is licensed and priced on a per node per year basis. A node is anything with the Docker Commercially Supported (CS) Engine installed on it. It could be a bare metal server, cloud instance or within a virtual machine. More pricing details are available here.
 
More Resources:

Request a demo of the latest Docker Datacenter
See What’s New in Docker Datacenter
Learn more by visiting the Docker Datacenter webpage
Sign up for a free 30 day trial


The post What’s New in Docker Datacenter with Engine 1.12 – Demo Q&A appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Best practices for running RabbitMQ in OpenStack

The post Best practices for running RabbitMQ in OpenStack appeared first on Mirantis | The Pure Play OpenStack Company.
OpenStack is dependent on message queues, so it’s crucial that you have the best possible setup. Most deployments include RabbitMQ, so let’s take a few minutes to look at best practices for making certain it runs as efficiently as possible.
Deploy RabbitMQ on dedicated nodes
With dedicated nodes, RabbitMQ is isolated from other CPU-hungry processes, and hence can sustain more stress.
This isolation option is available in Mirantis OpenStack starting from version 8.0. For more information, do a search for ‘Detach RabbitMQ’ on the validated plugins page.
Run RabbitMQ with HiPE
HiPE stands for High Performance Erlang. When HiPE is enabled, the Erlang application is pre-compiled into machine code before being executed. Our benchmark showed that this gives RabbitMQ a performance boost of up to 30%. (If you’re into that sort of thing, you can find the benchmark details here and the results here.)
The drawback is that the initial application start time increases considerably while the Erlang application is compiled. With HiPE, the first RabbitMQ start takes around 2 minutes.
Another subtle drawback we have discovered is that if HiPE is enabled, debugging RabbitMQ might be hard, as HiPE can spoil error tracebacks, rendering them unreadable.
HiPE is enabled in Mirantis OpenStack starting with version 9.0.
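For reference, HiPE pre-compilation is a single flag in rabbitmq.config; this is a minimal sketch, not a complete configuration:

```erlang
[
  {rabbit, [
    %% pre-compile RabbitMQ into native code on first startup
    {hipe_compile, true}
  ]}
].
```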
Do not use queue mirroring for RPC queues
Our research shows that enabling queue mirroring on a 3-node cluster cuts message throughput in half. You can see this effect in the publicly available test reports produced by the Mirantis Scale team.
On the other hand, RPC messages become obsolete pretty quickly (within a minute), and if messages are lost, only the operations currently in progress fail, so overall, RPC queues without mirroring seem to be a good tradeoff.
At Mirantis, we generally enable queue mirroring only for Ceilometer queues, where messages must be preserved. You can see how we define such a RabbitMQ policy here.
The option to turn off queue mirroring is available in MOS starting in Mirantis OpenStack 8.0 and is enabled by default for RPC queues starting in version 9.0.
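A policy along those lines can be set with rabbitmqctl; the queue-name pattern below is illustrative and should be adjusted to match your Ceilometer queue naming:

```
rabbitmqctl set_policy --apply-to queues ha-ceilometer \
  "^(metering|notifications)\." '{"ha-mode": "all"}'
```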
Use a separate RabbitMQ cluster for Ceilometer
In general, Ceilometer doesn’t send many messages through RabbitMQ. But if Ceilometer gets stuck, its queues overflow. That leads to RabbitMQ crashing, which in turn causes outages for other OpenStack services.
The ability to use a separate RabbitMQ cluster for notifications is available starting with OpenStack Mitaka (MOS 9.0) and is not supported in MOS out of the box. The feature is not documented yet, but you can find the implementation here.
Reduce Ceilometer metrics volume
Another best practice when it comes to running RabbitMQ beneath OpenStack is to reduce the number of metrics sent and/or their frequency. Obviously that reduces stress put on RabbitMQ, Ceilometer and MongoDB, but it also reduces the chance of messages piling up in RabbitMQ if Ceilometer/MongoDB can’t cope with their volume. In turn, messages piling up in a queue reduce overall RabbitMQ performance.
You can also mitigate the effect of messages piling up by using RabbitMQ’s lazy queues feature (available starting with RabbitMQ 3.6.0), but as of this writing, MOS does not make use of lazy queues.
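If you do want to experiment with lazy queues (RabbitMQ 3.6.0+), they are also enabled via a policy; the queue-name pattern here is illustrative:

```
rabbitmqctl set_policy --apply-to queues lazy-notifications \
  "^notifications\." '{"queue-mode": "lazy"}'
```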
(Carefully) consider disabling queue mirroring for Ceilometer queues
In the Mirantis OpenStack architecture, queue mirroring is the only ‘persistence’ measure used. We do not use durable queues, so do not disable queue mirroring if losing Ceilometer notifications will hurt you. For example, if notification data is used for billing, you can’t afford to lose those notifications.
The ability to disable mirroring for Ceilometer queues is available in Mirantis OpenStack starting with version 8.0, but it is disabled by default.
So what do you think?  Did we leave out any of your favorite tips? Let us know in the comments!
The post Best practices for running RabbitMQ in OpenStack appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis

OpenStack Developer Mailing List Digest November 5-18

SuccessBot Says

mriedem: We’re now running neutron by default in Ocata CI jobs [1].
stevemar: fernet token format is now the default format in keystone! thanks lbragstad samueldmq and dolphm for making this happen!
Ajaegar: developer.openstack.org is now hosted by OpenStack infra.
Tonyb: OpenStack requirements on pypi [2] is now a thing!
All

Registration Open For the Project Teams Gathering

The first OpenStack Project Teams Gathering event is geared toward existing upstream team members, providing a venue for those project teams to meet, discuss and organize the development work for the Pike release.
Where: Atlanta, GA
When: The week of February 20, 2017
Register and get more info [3]
Read the FAQ for any questions. If you still have questions, contact Thierry (ttx) over IRC on Freenode, or email foundation staff at ptg@openstack.org.
Full thread

Follow up on Barcelona Review Cadence Discussions

Summary of concerns: Nova is a complex beast. Very few people know even most of it well.
There are areas in Nova where mistakes are costly and hard to rectify later.
Large amount of code does not merge quickly.
Barrier of entry for Nova core is very high.
Subsystem maintainer model has been pitched [4].
Some believe this is still worth trying again in an attempt to merge good code quickly.
Nova today uses a list of experts [5] to sign off on various changes today.
Nova PTL Matt Riedemann’s take:

Dislikes the constant comparison of Nova and the Linux kernel. Let’s instead say all of OpenStack is the Linux kernel, and the subsystems are Nova, Cinder, Glance, etc.
The bar for Nova core isn’t as high as some people make it out to be:

Involvement
Maintenance
Willingness to own and fix problems.
Helpful code reviews.

Good code is subjective. A worthwhile and useful change might actually break some other part of the system.

Nova core Jay Pipes is supportive of the proposal of subsystems, but with a commitment to gathering data about total review load, merge velocity, and some kind of metric to assess code quality impact.
Full thread

Embracing New Languages in OpenStack

Technical Committee member Flavio Percoco proposes a list of what the community should know/do before accepting a new language:

Define a way to share code/libraries for projects using the language

A very important piece is feature parity on the operator.
Oslo.config, for example; our config files shouldn’t change because of a different implementation language.
Keystone auth to drive more service-service interactions through the catalog to reduce the number of things an operator needs to configure directly.
oslo.log so the logging is routed to the same places and same format as other things.
oslo.messaging and oslo.db as well

Work on a basic set of libraries for OpenStack base services
Define how the deliverables are distributed
Define how stable maintenance will work
Setup the CI pipelines for the new language

Requirements management and caching/mirroring for the gate.

Longer version of this [6].

Previous notes when the Golang discussion was started to work out questions [7].
TC member Thierry Carrez says the most important thing in introducing Go should not be another way for some of our community to be different, but another way for our community to be one.
TC member Flavio Percoco sees part of the community wide concerns that were raised originated from the lack of an actual process of this evaluation to be done and the lack of up front work, which is something trying to be addressed in this thread.
TC member Doug Hellmann’s request has been to demonstrate not just that Swift needs Go, but that Swift is willing to help the rest of the community in the adoption.

Signs of that is happening, for example discussion about how oslo.config can be used in the current version of Swift.

Flavio has started a patch that documents his post and the feedback from the thread [8]
Full thread

API Working Group News

Guidelines that have been recently merged:

Clarify why CRUD is not a great descriptor [9]
Add guidelines for complex queries [10]
Specify time intervals based filtering queries [11]

Guidelines currently under review:

Define pagination guidelines [12]
WIP add API capabilities discovery guideline [13]
Add the operator for “not in” to the filter guideline [14]

Full thread

OakTree – A Friendly End-user Oriented API Layer

The OpenStack Summit results of the Interop Challenge shown on stage were awesome. 17 different people from 17 different clouds ran the same workload!
One of the reasons it worked is because they all used the Ansible modules we wrote based on the Shade library.

Shade contains business logic needed to hide vendor difference in clouds.
This means that there is a fantastic OpenStack interoperability story, but only if you program in Python.

OakTree is a gRPC-based API service for OpenStack that is based on the Shade library.
Basing OakTree on Shade gets you not only the business logic; Shade also understands:

Multi-cloud world
Caching
Batching
Thundering-herd protection, to handle very high loads efficiently.

The barrier to deployers adding it to their clouds needs to be as low as humanly possible.
Exists in two repositories:

openstack/oaktree [15]
openstack/oaktreemodel [16]

OakTree model contains the Protobuf definitions and build scripts to produce Python, C++ and Go code from them.
OakTree itself depends on python OakTree model and Shade.

It can currently list and search for flavors, images, and floating ips.
A few major things that need good community design listed in the todo.rst [17]

Full thread

 
Source: openstack.org

Three Considerations for Planning your Docker Datacenter Deployment

Congratulations! You’ve decided to change your application environment with Docker Datacenter. You’re now on your way to greater agility, portability and control within your environment. But what do you need to get started? In this blog, we will cover things you need to consider (strategy, infrastructure, migration) to ensure a smooth POC and migration to production.
1. Strategy
Strategy involves doing a little work up-front to get everyone on the same page. This stage is critical to align expectations and set clear success criteria for exiting the project. The key focus areas are determining your objective, planning how to achieve it, and knowing who should be involved.
Set the objective – This is a critical step as it helps to set clear expectations, define a use case and outline the success criteria for exiting a POC. A common objective is to enable developer productivity by implementing a Continuous Integration environment with Docker Datacenter.
Plan how to achieve it – With a clear use case and outcome identified, the next step is to look at what is required to complete this project. For a CI pipeline, Docker is able to standardize the development environment, provide isolation of the applications and their dependencies and eliminate any “works on my machine” issues to facilitate the CI automation. When outlining the plan, make sure to select the pilot application. The work involved will vary depending on whether it is a legacy application refactoring or new application development.
Integration between source control and CI allows Docker image builds to be automatically triggered from a standard Git workflow.  This will drive the automated building of Docker images. After Docker images are built they are shipped to the secure Docker registry to store them (Docker Trusted Registry) and role based access controls enable secure collaboration. Images can then be pulled and deployed across a secure cluster as running applications via the management layer of Docker Datacenter (Universal Control Plane).
Know who should be involved – The solution will involve multiple teams, and it is important to include the correct people early to avoid any potential barriers later on. Depending on the initial project, these can include the development, middleware, security, architecture, networking, database, and operations teams. Understand their requirements, address them early and gain consensus through collaboration.
PRO TIP – Most first successes tend to be web applications with some sort of data tier that can either utilize traditional databases or be containerized with persistent data being stored in volumes.
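The build-ship-run flow described in the plan above can be sketched as the commands a CI job might run; the registry, repository, and variable names are placeholders:

```
# build an image for the commit under test
docker build -t dtr.example.com/ci/myapp:$GIT_COMMIT .

# ship it to Docker Trusted Registry
docker push dtr.example.com/ci/myapp:$GIT_COMMIT

# run it on the UCP-managed cluster (UCP client bundle sourced beforehand)
docker service create --name myapp dtr.example.com/ci/myapp:$GIT_COMMIT
```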
 
2. Infrastructure
Now that you understand the basics of building a strategy for your deployment, it’s time to think about infrastructure. In order to install Docker Datacenter (DDC) in a highly available (HA) deployment, the minimum base infrastructure is six nodes. This will allow for the installation of three UCP managers and three DTR replicas on worker nodes, in addition to the worker nodes where the workloads will be deployed. An HA setup is not required for an evaluation, but we recommend a minimum of 3 replicas and managers for production deployments so your system can handle failures.
PRO TIP – A best practice is not to deploy and run any container workloads on the UCP managers and DTR replicas. These nodes perform critical functions within DDC and are best if they only run the UCP or DTR services.
Nodes are defined as cloud, virtual or physical servers with Commercially Supported (CS) Docker Engine installed as a base configuration.
Each node should consist of a minimum of:

4GB of RAM
16GB storage space
For RHEL/CentOS with devicemapper: separate block device OR additional free space on the root volume group should be available for Docker storage.
Unrestricted network connectivity between nodes
OPTIONAL Internet access to Docker Hub to ease the initial downloads of the UCP/DTR and base content images
Installed with a Docker-supported operating system
Sudo access credentials to each node

Other nodes may be required for related CI tooling. For a POC built around DDC in a HA deployment with CI/CD, ten nodes are recommended. For a POC built around DDC in a non-HA deployment with CI/CD, five nodes are recommended.
Below are specific requirements for the individual components of the DDC platform:
Universal Control Plane

Commercially Supported (CS) Docker Engine must be used in conjunction with DDC.
TCP Load balancer should be available for UCP in an HA configuration.
A valid DNS entry should be created for the load balancer VIP.
SSL certificate from a trusted root CA should be created (a self-signed certificate is created for UCP and may be used but additional configuration is required).
DDC License for 30 day trial or annual subscription must be obtained or purchased for the POC.

Docker Trusted Registry

Commercially Supported (CS) Docker Engine must be used in conjunction with DDC.
TCP Load balancer should be available for DTR in an HA configuration.
A valid DNS entry should be created for the load balancer VIP.
Image Storage options include a clustered filesystem for HA or blob storage (AWS S3, Azure, S3 compatible storage, or OpenStack Swift)
SSL certificate from a trusted root CA should be created (a self-signed certificate is created for DTR and may be used but additional configuration is required).
LDAP/AD is available for authentication; managed built-in authentication can also be used but requires additional configuration
DDC License for 30 day trial or annual subscription must be obtained or purchased for the POC.

The POC design phase is the ideal time to assess how Docker Datacenter will integrate into your existing IT infrastructure, from CI/CD, networking/load balancing, volumes for persistent data, configuration management, monitoring, and logging systems. During this phase, understand how the existing tools fit and discover any gaps in your tooling. With the strategy and infrastructure prepared, begin the POC installation and testing. Installation docs can be found here.
 
3. Moving from POC Into Production
Once you have built out your POC environment, how do you know if it’s ready for production use? Here are some suggested methods to handle the migration.

Perform the switchover from the non-Dockerized apps to Docker Datacenter in pre-production environments. If you have Dev, Test, and Prod environments, switch over Dev and/or Test and run through a set burn-in cycle to allow for proper testing of the environment and look for any unexpected or missing functionality. Once non-production environments are stable, switch over the production environment.

Start integrating Docker Datacenter alongside your existing application deployments. This method requires that the application can run with multiple instances running at the same time. For example, if your application is fronted by a load balancer, add the Dockerized application to the existing load balancer pool and begin sending traffic to the application running in Docker Datacenter. Should issues arise, remove the Dockerized application from the load balancer pool until they can be resolved.

Completely cut over to a Dockerized environment all in one go. As additional applications begin to utilize Docker Datacenter, continue to use a tested pattern that works best for you to provide a standard path to production for your applications.
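As a hypothetical NGINX example of the load-balancer method described above, the Dockerized instance joins the existing pool and can be removed again if problems appear (hostnames and ports are placeholders):

```nginx
upstream app_pool {
    server legacy-app-01.example.com:8080;
    server legacy-app-02.example.com:8080;
    # Dockerized instance on a Docker Datacenter worker; remove to roll back
    server ddc-worker-01.example.com:8080;
}
```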

We hope these tips, learned from first-hand experience with our customers, help you in planning for your deployment. From standardizing your application environment to adding more flexibility for your application teams, Docker Datacenter gives you a foundation to build, ship and run containerized applications anywhere.


Enjoy your Docker Datacenter POC

Get started with your Docker Datacenter POC
See What’s New in Docker Datacenter
Learn more by visiting the Docker Datacenter webpage
Sign up for a free 30 day trial

The post Three Considerations for Planning your Docker Datacenter Deployment appeared first on Docker Blog.
Source: https://blog.docker.com/feed/