OpenStack Developer Mailing List Digest November 26 – December 2

Updates

Nova Resource Providers update [2]
Nova blueprints update [16]
OpenStack-Ansible deploy guide live! [6]

The Future of OpenStack Needs You [1]

Need more mentors to help run Upstream Trainings at the summits
Interested in doing an abridged version at smaller, more local events
Contact ildikov or diablo_rojo on IRC if interested

New project: Nimble [3]

Interesting chat about bare metal management
The project name is likely to change
(Will this lead to some discussions about whether or not to allow some parallel experiments in the OpenStack Big Tent?)

Community goals for Pike [4]

As Ocata is a short cycle it’s time to think about goals for Pike [7]
Or give feedback on what’s already started [8]

Exposing project team's metadata in README files (Cont.) [9]

Amrith agrees with Flavio's proposal that a short summary would be good for new contributors
Will need a small API that will generate the list of badges

Done – as a part of governance
Just a graphical representation of what’s in the governance repo
Do what you want with the badges in README files

Patches have been pushed to the projects initiating this change

Allowing Teams Based on Vendor-specific Drivers [10]

Option 1: https://review.openstack.org/403834 – Proprietary driver dev is unlevel
Option 2: https://review.openstack.org/403836 – Driver development can be level
Option 3: https://review.openstack.org/403839 – Level playing fields, except drivers
Option 4: https://review.openstack.org/403829 – Establish a new "driver team" concept
Option 5: https://review.openstack.org/403830 – Add resolution requiring teams to accept driver contributions

Thierry prefers this option
One of Flavio’s preferred options

Option 6: https://review.openstack.org/403826 – Add a resolution allowing teams based on vendor-specific drivers

Flavio’s other preferred option

Cirros Images to Change Default Password [11]

New password: gocubsgo
Not ‘cubswin:)’ anymore

Destructive/HA/Fail-over scenarios

Discussion started about adding end-user focused test suites to test OpenStack clusters beyond what's already available in Tempest [12]
Feedback is needed from users and operators on what preferred scenarios they would like to see in the test suite [5]
You can read more in the spec for High Availability testing [13] and the user story describing destructive testing [14], which are both under review

Events discussion [15]

Efforts to remove duplicated functionality in OpenStack for providing event information to end users (Zaqar, Aodh)
It is also pointed out that the information in events can be sensitive which needs to be handled carefully

 
[1] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108084.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2016-November/107982.html
[3] http://lists.openstack.org/pipermail/openstack-dev/2016-November/107961.html
[4] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108167.html
[5] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108062.html
[6] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108200.html
[7] https://etherpad.openstack.org/p/community-goals
[8] https://etherpad.openstack.org/p/community-goals-ocata-feedback
[9] http://lists.openstack.org/pipermail/openstack-dev/2016-November/107966.html
[10] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108074.html
[11] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108118.html
[12] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108062.html
[13] https://review.openstack.org/#/c/399618/
[14] https://review.openstack.org/#/c/396142
[15] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108070.html
[16] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108089.html
Source: openstack.org

Enterprise cloud strategy: Applications and data in a multi-cloud environment

Say you've decided to hedge your IT bets with a multi-cloud environment. That's excellent, except: what's your applications and data strategy?
That's not an idle question. The hard reality is that if you don't coordinate your cloud environments, innovative applications will struggle to integrate with traditional systems. Cost management, security and compliance — like organizational swords of Damocles — will hover over your entire operation.
In working with clients who effectively manage multiple clouds, I see five key elements of applications and data strategy:
Data residency and locality
Data residency (sometimes called data sovereignty) defines where a company's data physically resides, with rules for how it's handled and transferred, including backup and disaster recovery scenarios. It's often governed by countries or regions such as the European Union.
Data locality, on the other hand, determines how and where data should be stored for processing.
Taken together, data residency and locality affect your applications and your efforts to globally digitize more than anything else. Different cloud providers allow various levels of control over data placement. They also provide the tools to verify and ensure compliance with residency laws. In this regard, it's crucial to have a common set of tools and processes.
Data backup and restoration across clouds are necessities. Your cloud services provider (CSP) must be able to handle this, telling you exactly where it places the data in its cloud. Likewise, you should know where the CSP stores copies of the data so you can replicate them to another location in case of a disaster or audit.
Security and compliance
You need a common set of security policies and implementations across your multi-cloud environment. This includes rules for identity management, authentication, vulnerability assessment, intrusion detection and other security areas.
In an environment with high compliance requirements, customer-managed encryption keys are also essential. You should pay attention to how and where they're stored, as well as who has access to decrypted data, particularly CSP personnel.
Additionally, your CSP's platform capabilities must securely manage infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), software-as-a-service (SaaS), business-process-as-a-service (BPaaS) and database-as-a-service (DBaaS) deployment models.
Also, the CSP's cloud will invariably house multiple tenants. Data should be segregated from them with top-level access policies, segmentation and isolation.
Integration: APIs and API management
APIs are the connective tissue of your applications. They need effective lifecycle management across traditional, private cloud and public cloud applications.
Your CSP should provide an API lifecycle solution that includes "create, run, secure, manage" actions in a single offering. That solution should also have flexible deployment options — multi-cloud and on premises — managed from a single control pane. That gives you the ability to manage APIs, products, policies and users through one view across your cloud environments.
In assessing a CSP, it’s also worth knowing whether it can integrate PaaS services through a common gateway. Its API platform should be a distributed model to implement security and traffic policies, as well as proactively monitor APIs.
Portability and migration
When taking applications to a multi-cloud environment, you must choose among three migration models. You can "lift and shift" with a direct port of the application to the cloud, perform a full refactor that completely customizes the application, or choose a partial refactor in which only parts of the application are customized.
A lot rides on your CSP's ability to support these models. Since legacy applications depend on infrastructure resiliency to satisfy uptime, you may not be able to fit them to the CSP's deployment standards. In fact, such modifications may delay cloud benefits. To address this problem, consider containers for new applications deployed across your different clouds.
Some enterprises tackle migration by installing a physical appliance on the CSP's premises or in co-located facilities, then integrating it with their applications. If you go this route, understand what the options are, particularly the technical limits with respect to data volumes, scale and latency.
New applications and tooling
To ensure efficiency, operations such as building, testing, and deploying applications should be linked together to create a continuous integration/continuous deployment (CI/CD) pipeline. These tool chains often require customizations when part of a multi-cloud environment. One common error: new applications are designed for scalability in public cloud IaaS or PaaS scenarios, but their performance service-level agreements (SLAs) are not addressed early enough in the cycle. Understanding your CSP's SLAs, along with designing and testing for performance, is crucial for successful global deployment.
For more information, read IBM Optimizes Multicloud Strategies for Enterprise Digital Transformation.
The post Enterprise cloud strategy: Applications and data in a multi-cloud environment appeared first on news.
Source: Thoughts on Cloud

Your Docker Agenda for December 2016

Thank you community for your amazing Global Mentor Week Events last month! In November, the community organized over 110 Docker Global Mentor Week events and more than 8,000 people enrolled in at least one of the courses for 1000+ course completions and counting! The five self-paced courses are now available for everyone free online. Check them out here!
As you gear up for the holidays, make sure to check out all the great events that are scheduled this month in Docker communities all over the world! From webinars to workshops, to conference talks, check out our list of events that are coming up in December.
Official Docker Training Courses
View the full schedule of instructor led training courses here!
 
Introduction to Docker:
This is a two-day, on-site or classroom-based training course which introduces you to the Docker platform and takes you through installing, integrating, and running it in your working environment.
Dec 7-8: Introduction to Docker with AKRA Hamburg City, Germany
 
Docker Administration and Operations:
The Docker Administration and Operations course consists of both the Introduction to Docker course, followed by the Advanced Docker Topics course, held over four consecutive days.
Dec 5-8: Docker Administration and Operations with Amazic – London, United Kingdom
Dec 6-9: Docker Administration and Operations with Vizuri – Atlanta, GA
Dec 12-15: Docker Administration and Operations with Docker Captain, Luis Herrera – Madrid, Spain
Dec 12-15: Docker Administration and Operations with Kiratech – Milan, Italy
Dec 13-16: Docker Administration and Operations with TREEPTIK – Aix en Provence, France
Dec 19-22: Docker Administration and Operations with TREEPTIK – Paris, France
 
Advanced Docker Operations:
This two day course is designed to help new and experienced systems administrators learn to use Docker to control the Docker daemon, security, Docker Machine, Swarm Mode, and Compose.
Dec 7-8: Advanced Docker Operations with Amazic – London, United Kingdom
Dec 15-16: Advanced Docker Operations with Docker Captain, Benjamin Wootton – London, United Kingdom
North America 
Dec 3rd: DOCKER MEETUP AT VISA – Reston, VA
Visa is hosting this month's meetup! A talk entitled 'Docker UCP 2.0 and DTR 2.1 GA' by Ben Grissinger (from Docker), followed by 'Docker security' by Paul Novarese (from Docker).
Dec 3rd: DOCKER MEETUP IN HAVANA – Havana, Cuba
Join Docker Havana for their 1st ever meetup! Work through the training materials from Docker's Global Mentor Week series!
Dec 4th: GDG DEVFEST 2016 – Los Angeles, CA
Docker's Mano Marks will be keynoting DevFest LA.
Dec 7th: DOCKER MEETUP AT MELTMEDIA – Phoenix, AZ
Join Docker Phoenix for a 'Year in Review and Usage Roundtable'. 2016 was a big year for Docker, let's talk about it!
Dec 13th: DOCKER MEETUP AT TORCHED HOP BREWING – Atlanta, GA
This month we're going to have a social event without a presentation, in combination with the Go and Kubernetes Meetups at Torched Hop Brewing. Come hang out and have a drink or food with us!
Dec 13th: DOCKER MEETUP AT GOOGLE – Seattle, WA
Tiffany Jernigan will give a talk on Docker Orchestration (Docker Swarm Mode) and Metrics Collection, and then Tsvi Korren will follow with a talk on securing your container environment.
Dec 14th: DOCKER MEETUP AT PUPPET LABS – Portland, OR
A talk by Nan Liu from Intel entitled 'Trust but verify. Testing Docker containers.'
Dec 14th: DOCKER MEETUP AT DOCKER HQ – San Francisco, CA
Docker is joining forces with the Prometheus meetup group for a holiday mega-meetup with talks on using Docker with Prometheus and OpenTracing. As a special holiday gift we will be giving away a free DockerCon 2017 ticket to one lucky attendee! Don't miss out – RSVP now!
 
Dec 15th: DOCKER MEETUP AT GOGO – Chicago, IL
We will be welcoming Loris Degioanni of sysdig as he takes us through monitoring containers. The good, the bad... and best practices!
 
Europe
Dec 5th: DEVOPSCON MUNICH – Munich, Germany
Docker Captains Philipp Garbe, Gianluca Arbezzano, Viktor Farcic and Dieter Reuter will all be speaking at DevOpsCon.
Dec 6th: DOCKER MEETUP AT FOO CAFE STOCKHOLM – Stockholm, Sweden
In this session, you’ll learn about the container technology built natively into Windows Server 2016 and how you can reuse your knowledge, skills and tools from Docker on Linux. This session will be a mix of presentations, giving you an overview of the technology, and hands-on experiences, so make sure to bring your laptop.
Dec 6th: D cubed: Decision Trees, Docker and Data Science in the Cloud – London, United Kingdom
Steve Poole, DevOps practitioner (leading a team of engineers on cutting edge DevOps exploration) and a long time IBM Java developer, leader and evangelist, will explain what Docker is, and how it works.
Dec 8th: Docker Meetup at Pentalog Romania – Brasov, Romania
Come for a full overview of DockerCon 2016!
Dec 8th: DOCKER FOR .NET DEVELOPERS AND AZURE MACHINE LEARNING – Copenhagen, Denmark
For this meetup we get a visit from Ben Hall who will talk about Docker for .NET applications, and Barbara Fusińska who will talk about Azure Machine Learning.
Dec 8th: Introduction to Docker for Java Developers – Brussels, Belgium
Join us for the last session of 2016 and discover what Docker has to offer you!
Dec 14th: DOCKER MEETUP AT LA CANTINE NUMERIQUE – Tours, France
What's new in the Docker ecosystem, plus a few more talks on Docker Compose and Swarm Mode.
Dec 15th: Docker Meetup at Stylight HQ – Munich, Germany
Join us for our end of the year holiday meetup! Check event page for more details.
Dec 15th: Docker Meetup at ENSEIRB – Bordeaux, France
Jeremiah Monsinjob and Florian Garcia will talk about Docker for dynamic platforms and microservices.
Dec 16th: Thessaloniki .NET Meetup about Docker – Thessaloniki, Greece
Byron Papadopoulos will cover what Docker is and where it is used, security, scaling, and monitoring; the tools used with Docker (Docker Engine and Docker Compose); container orchestration engines; Docker in Azure (showing Docker Swarm Mode); and Docker for DevOps and for developers.
Dec 19th: Modern Microservices Architecture using Docker – Herzliyya, Israel
Microservices are all the rage these days. Docker is a tool which makes managing Microservices a whole lot easier. But what do Microservices really mean? What are the best practices of composing your application with Microservices? How can you leverage Docker and the public cloud to help you build a more agile DevOps process? How does the Azure Container Service fit in? Join us in order to find out the answers.
Dec 21st: Docker Meetup at Campus Madrid – Madrid, Spain
Two talks. First talk by Diego Martínez Gil: Dockerized apps running on Windows.
Diego will present the new features available in Windows 10 and Windows Server 2016 to run dockerized applications. Second talk is by Pablo Chico de Guzmán: Docker 1.13. Pablo will demo some of the features available in Docker 1.13.
 
Asia
Dec 10th: DOCKER MEETUP AT MANGALORE INFOTECH – Mangaluru, India
We are hosting the Mangalore edition of "The Docker Global Mentor Week." Our goal is to provide easy-paced self-learning courses that will take you through the basics of Docker and make you well acquainted with most aspects of application delivery using Docker.
Dec 10th: BIMONTHLY MEETUP 2016 – DOCKER FOR PHP DEVELOPERS – Pune, India
If you are aching to get started with Docker but not sure how to, this meetup is the right platform. We will start by explaining basic Docker concepts: what Docker is, its benefits, images, registries, containers, Dockerfiles, etc., followed by an optional hands-on workshop.
Dec 12th: DOCKER MEETUP AT MICROSOFT – Singapore, Singapore
Join us for our next meetup event!
Dec 20th: DOCKER MEETUP AT MICROSOFT – Riyadh, Saudi Arabia
Join us for a deep dive into Docker technology and how Microsoft and Docker work together. Learn about Azure IaaS and how to run Docker on Microsoft Azure.
Oceania
Dec 5th: DOCKER MEETUP AT CATALYST IT – Wellington, New Zealand
Join us for our next meetup!
Dec 5th: DOCKER MEETUP AT VERSENT PTY LTD – Melbourne, Australia
Yoav Landman, the CTO of JFrog, will talk to us about how new tools often introduce new paradigms. Yoav will examine the patterns and the anti-patterns for Docker image management, and what impact the new tools have on the battle-proven paradigms of the software development lifecycle.
Dec 13th: Action Cable & Docker – Wellington, New Zealand
Come check out a live demo of adding Docker to a rails app.
Africa
Dec 16th: Docker Meetup at Skylabase Inc. – Buea, Cameroon
Join us for a Docker Study Jam!


The post Your Docker Agenda for December 2016 appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

American Airlines soars into cloud with IBM

American Airlines, the largest passenger air carrier in North America, announced this week that it has chosen IBM as a cloud provider.
Specifically, the airline intends to move some of its applications to the cloud and make use of IBM infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS) tools. The agreement also means American will have access to the 50 IBM data centers around the world, the Bluemix development platform and analytics capabilities.
Patrick Grubbs, vice president of travel and transportation at IBM Cloud, said the work between IBM and American will include "customer facing systems as well as back office."
American has been looking to integrate and streamline its systems since merging with US Airways in 2013.
Robert LeBlanc, Senior Vice President of IBM Cloud, said, "This partnership is about delivering the flexibility to American Airlines' business, allowing them to enhance their customer relationships and further gain a competitive advantage."
The new IBM agreement with American Airlines extends a longstanding relationship. In the 1950s, the two companies collaborated to create the first-ever electronic reservation and ticketing system in the air travel industry.
For more, check out ZDNet's full story.
(Image via Wikimedia Commons)
The post American Airlines soars into cloud with IBM appeared first on news.
Source: Thoughts on Cloud

Automate bare metal server provisioning using Ironic (bifrost) and the ansible deploy driver

On our team, we mostly conduct various research in OpenStack, so we use bare metal machines extensively. To make our lives somewhat easier, we've developed a set of simple scripts that enables us to back up and restore the current state of the file system on the server. It also enables us to switch between different backups very easily. The set of scripts is called multi-root (https://github.com/vnogin/multi-root).
Unfortunately, we had a problem; in order to use this tool, we had to have our servers configured in a particular way, and we faced different issues with manual provisioning:

It is not possible to set up more than one bare metal server at a time using a Java-based IPMI application
The Java-based IPMI application does not properly handle disconnection from the remote host due to connectivity problems (you have to start installation from the very beginning)
The bare metal server provisioning procedure was really time consuming
For our particular case, in order to use multi-root functionality we needed to create software RAID and make required LVM configurations prior to operating system installation

To solve these problems, we decided to automate bare metal node setup, and since we are part of the OpenStack community, we decided to use bifrost instead of other provisioning tools. Bifrost was a good choice for us as it does not require other OpenStack components.
Lab structure
This is how we manage disk partitions and how we use software RAID on our machines:

As you can see here, we have the example of a bare metal server, which includes two physical disks.  Those disks are combined using RAID1, then partitioned by the operating system.  The LVM partition then gets further partitioned, with each copy of an operating system image assigned to its own partition.
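To make this concrete, here is a minimal sketch of how such a layout could be prepared by hand, assuming two disks named /dev/sda and /dev/sdb and arbitrary volume sizes; this is roughly the disk layout our custom deploy role sets up later:
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
# pvcreate /dev/md0
# vgcreate vg0 /dev/md0
# lvcreate -L 50G -n root1 vg0
# lvcreate -L 50G -n root2 vg0
Each logical volume can then hold its own copy of the root file system, which is what the multi-root scripts switch between.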
This is our network diagram:

In this case we have one network to which our bare metal nodes are attached. Also attached to that network is the Ironic server. A DHCP server assigns IP addresses for the various instances as they're provisioned on the bare metal nodes, or prior to the deployment procedure (so that we can bootstrap the destination server).
Now let's look at how to make this work.
How to set up bifrost with ironic-ansible-driver
So let's get started.

First, add the following line to the /root/.bashrc file:
# export LC_ALL="en_US.UTF-8"

Ensure the operating system is up to date:
# apt-get -y update && apt-get -y upgrade

To avoid issues related to MySQL, we decided to install it prior to bifrost and set the MySQL password to "secret":
# apt-get install git python-setuptools mysql-server -y

Using the following guideline, install and configure bifrost:
# mkdir -p /opt/stack
# cd /opt/stack
# git clone https://git.openstack.org/openstack/bifrost.git
# cd bifrost

We need to configure a few parameters related to localhost prior to the bifrost installation. Below, you can find an example of an /opt/stack/bifrost/playbooks/inventory/group_vars/localhost file:
# echo '---
ironic_url: "http://localhost:6385/"
network_interface: "p1p1"
ironic_db_password: aSecretPassword473z
mysql_username: root
mysql_password: secret
ssh_public_key_path: "/root/.ssh/id_rsa.pub"
deploy_image_filename: "user_image.qcow2"
create_image_via_dib: false
transform_boot_image: false
create_ipa_image: false
dnsmasq_dns_servers: 8.8.8.8,8.8.4.4
dnsmasq_router: 172.16.166.14
dhcp_pool_start: 172.16.166.20
dhcp_pool_end: 172.16.166.50
dhcp_lease_time: 12h
dhcp_static_mask: 255.255.255.0' > /opt/stack/bifrost/playbooks/inventory/group_vars/localhost
As you can see, we're telling Ansible where to find Ironic and how to access it, as well as the authentication information for the database so state information can be retrieved and saved. We're specifying the image to use, and the networking information.
Notice that there's no default gateway for DHCP in the configuration above, so I'm going to fix it manually after the install.yaml playbook execution.
Install ansible and all of bifrost's dependencies:
# bash ./scripts/env-setup.sh
# source /opt/stack/bifrost/env-vars
# source /opt/stack/ansible/hacking/env-setup
# cd playbooks

After that, let's install all packages that we need for bifrost (Ironic, MySQL, rabbitmq, and so on) …
# ansible-playbook -v -i inventory/localhost install.yaml

… and the Ironic staging drivers with already merged patches for enabling Ironic ansible driver functionality:
# cd /opt/stack/
# git clone git://git.openstack.org/openstack/ironic-staging-drivers
# cd ironic-staging-drivers/

Now you're ready to do the actual installation.
# pip install -e .
# pip install "ansible>=2.1.0"
You should see typical "installation" output.
In the /etc/ironic/ironic.conf configuration file, add the "pxe_ipmitool_ansible" value to the list of enabled drivers. In our case, it's the only driver we need, so let's remove the other drivers:
# sed -i '/enabled_drivers =*/cenabled_drivers = pxe_ipmitool_ansible' /etc/ironic/ironic.conf

If you want to enable cleaning and disable disk shredding during the cleaning procedure, add these options to /etc/ironic/ironic.conf:
automated_clean = true
erase_devices_priority = 0

Finally, restart the Ironic conductor service:
# service ironic-conductor restart

To check that everything was installed properly, execute the following command:
# ironic driver-list | grep ansible
| pxe_ipmitool_ansible | test |
You should see the pxe_ipmitool_ansible driver in the output.
Finally, add the default gateway to /etc/dnsmasq.conf (be sure to use the IP address for your own gateway).
# sed -i '/dhcp-option=3,*/cdhcp-option=3,172.16.166.1' /etc/dnsmasq.conf

Now that everything's set up, let's look at actually doing the provisioning.
How to use ironic-ansible-driver to provision bare-metal servers with custom configurations
Now let's look at actually provisioning the servers. Normally, we'd use a custom ansible deployment role that satisfies Ansible's requirements regarding idempotency to prevent issues that can arise if a role is executed more than once, but because this is essentially a spike solution for us to use in the lab, we've relaxed that requirement. (We've also hard-coded a number of values that you certainly wouldn't in production.) Still, by walking through the process you can see how it works.

Download the custom ansible deployment role:
curl -Lk https://github.com/vnogin/Ansible-role-for-baremetal-node-provision/archive/master.tar.gz | tar xz -C /opt/stack/ironic-staging-drivers/ironic_staging_drivers/ansible/playbooks/ --strip-components 1

Next, create an inventory file for the bare metal server(s) that need to be provisioned:
# echo '---
server1:
  ipa_kernel_url: "http://172.16.166.14:8080/ansible_ubuntu.vmlinuz"
  ipa_ramdisk_url: "http://172.16.166.14:8080/ansible_ubuntu.initramfs"
  uuid: 00000000-0000-0000-0000-000000000001
  driver_info:
    power:
      ipmi_username: IPMI_USERNAME
      ipmi_address: IPMI_IP_ADDRESS
      ipmi_password: IPMI_PASSWORD
      ansible_deploy_playbook: deploy_custom.yaml
  nics:
    - mac: 00:25:90:a6:13:ea
  driver: pxe_ipmitool_ansible
  ipv4_address: 172.16.166.22
  properties:
    cpu_arch: x86_64
    ram: 16000
    disk_size: 60
    cpus: 8
  name: server1
  instance_info:
    image_source: "http://172.16.166.14:8080/user_image.qcow2"' > /opt/stack/bifrost/playbooks/inventory/baremetal.yml

# export BIFROST_INVENTORY_SOURCE=/opt/stack/bifrost/playbooks/inventory/baremetal.yml
As you can see above, we have added all of the information required for bare metal node provisioning using IPMI. If needed, you can add any number of bare metal servers here, and all of them will be enrolled and deployed later.
Finally, you'll need to build a ramdisk for the Ironic ansible deploy driver and create a deploy image using DIB (disk image builder). Start by creating an RSA key that will be used for connectivity from the Ironic ansible driver to the provisioning bare metal host:
# su - ironic
# ssh-keygen
# exit

Next set environment variables for DIB:
# export ELEMENTS_PATH=/opt/stack/ironic-staging-drivers/imagebuild
# export DIB_DEV_USER_USERNAME=ansible
# export DIB_DEV_USER_AUTHORIZED_KEYS=/home/ironic/.ssh/id_rsa.pub
# export DIB_DEV_USER_PASSWORD=secret
# export DIB_DEV_USER_PWDLESS_SUDO=yes

Install DIB:
# cd /opt/stack/diskimage-builder/
# pip install .

Create the bootstrap and deployment images using DIB, and move them to the web folder:
# disk-image-create -a amd64 -t qcow2 ubuntu baremetal grub2 ironic-ansible -o ansible_ubuntu
# mv ansible_ubuntu.vmlinuz ansible_ubuntu.initramfs /httpboot/
# disk-image-create -a amd64 -t qcow2 ubuntu baremetal grub2 devuser cloud-init-nocloud -o user_image
# mv user_image.qcow2 /httpboot/

Fix file permissions:
# cd /httpboot/
# chown ironic:ironic *

Now we can enroll and deploy our bare metal node using ansible:
# cd /opt/stack/bifrost/playbooks/
# ansible-playbook -vvvv -i inventory/bifrost_inventory.py enroll-dynamic.yaml
Wait for the provisioning state to read "available", as a bare metal server needs to cycle through a few states and could be cleaned, if needed. During the enrollment procedure, the node can be cleaned by the shred command. This process takes a significant amount of time, so you can disable or fine-tune it in the Ironic configuration (as you saw above where we enabled it).
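While you wait, you can keep an eye on the node from the bifrost host with the standard Ironic client, for example:
# ironic node-list
# ironic node-show server1 | grep provision_state
The node name (server1) matches the inventory file we created above.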
Now we can start the actual deployment procedure:
# ansible-playbook -vvvv -i inventory/bifrost_inventory.py deploy-dynamic.yaml
If deployment completes properly, you will see the provisioning state for your server as "active" in the Ironic node-list.
+--------------------------------------+---------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name    | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+---------+---------------+-------------+--------------------+-------------+
| 00000000-0000-0000-0000-000000000001 | server1 | None          | power on    | active             | False       |
+--------------------------------------+---------+---------------+-------------+--------------------+-------------+

Now you can log in to the deployed server via SSH using the login and password that we defined above during image creation (ansible/secret) and then, because the infrastructure to use it has now been created, clone the multi-root tool from GitHub.
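For example, using the IP address from the inventory and the devuser credentials baked into the image above (ansible/secret), the login and clone steps look like this:
# ssh ansible@172.16.166.22
# git clone https://github.com/vnogin/multi-root.git
The second command is run on the deployed server itself.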
Conclusion
As you can see, bare metal server provisioning isn't such a complicated procedure. Using the Ironic standalone server (bifrost) with the Ironic ansible driver, you can easily develop a custom ansible role for your specific deployment case and simultaneously deploy any number of bare metal servers in automation mode.
I want to say thank you to Pavlo Shchelokovskyy and Ihor Pukha for your help and support throughout the entire process. I am very grateful to you guys.
The post Automate bare metal server provisioning using Ironic (bifrost) and the ansible deploy driver appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis

Snapchat's Spectacles Are Overhyped – But Amazing

I waited for five hours to buy Snapchat's $129 camera glasses. I don't regret it.

If you’ve ever shared a self-destructing photo or video, you probably did so on Snapchat. Two months ago, the company re-branded itself as Snap Inc., “a camera company” (though the app is still called Snapchat).

But what’s a camera company without a camera? Enter Spectacles.

Snap's new camera/sunglasses hybrid is like a GoPro for hipsters, or maybe like a cuter and less conspicuous Google Glass. While wearing them, you can take photos that automatically upload to your phone, ready for you to add to your Snapchat story. They cost $129 and come in three colors (black, teal and coral), all in a rounded, slightly cat-eye shape.

And their hype is real, thanks in no small part to a genius rollout that's led to artificially scarce supply, super-long lines, and a media story in and of itself. Unless you live in New York City or LA, Spectacles are only available via so-called Snapbots — a cyclops/vending machine hybrid that's trackable on this map and that has been popping up in places like Big Sur, the Grand Canyon, and Tulsa, Oklahoma (but curiously enough, not bigger cities like Chicago or Philadelphia). Some pairs are already going for two or three times retail price on eBay, and Lumoid is charging $20 to rent a pair for a day.

Xavier Harding / BuzzFeed News



Source: BuzzFeed

Introducing Decapod, an easier way to manage Ceph

Ceph is a de-facto standard in building robust distributed storage systems. It enables users to get a reliable, highly available, and easily scalable storage cluster using commodity hardware. Also, Ceph is becoming a storage basis for production OpenStack clusters.
There are several ways of managing Ceph clusters, including:

Using the ceph-deploy tool
Using custom in-house or open source manifests for configuration management software such as Puppet or Ansible
Using standalone solutions such as 01.org VSM or Fuel

Another solution in that third bucket is Decapod, a standalone solution that simplifies deployment of clusters and management of their lifecycles.
In this article, we'll compare the different means for deploying Ceph.
Deployment using ceph-deploy
The ceph-deploy tool is available with Ceph itself. According to the official documentation:
The ceph-deploy tool is a way to deploy Ceph relying only upon SSH access to the servers, sudo, and some Python. It runs on your workstation, and does not require servers, databases, or any other tools. If you set up and tear down Ceph clusters a lot, and want minimal extra bureaucracy, ceph-deploy is an ideal tool. The ceph-deploy tool is not a generic deployment system. It was designed exclusively for Ceph users who want to get Ceph up and running quickly with sensible initial configuration settings without the overhead of installing Chef, Puppet or Juju. Users who want fine-control over security settings, partitions or directory locations should use a tool such as Juju, Puppet, Chef or Crowbar.
As described, ceph-deploy is mostly limited to quick cluster deployment. This is perfectly applicable for deploying a test environment, but production deployment still requires a lot of thorough configuration using external tools.
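For reference, a quick test deployment with ceph-deploy boils down to just a few commands, roughly like the following (the hostnames and the OSD disk are purely illustrative):
$ ceph-deploy new mon1
$ ceph-deploy install mon1 osd1 osd2
$ ceph-deploy mon create-initial
$ ceph-deploy osd create osd1:/dev/sdb osd2:/dev/sdb
$ ceph-deploy admin mon1 osd1 osd2
Everything beyond that initial bring-up (tuning, pools, day-two operations) is left to you, which is the gap the tools below try to fill.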
Deployment using manifests for configuration management tools
Configuration management tools enable you to deploy Ceph clusters while maintaining great flexibility to tune the cluster. It is also possible to scale or shrink these clusters using the same code base.
The only problem here is the high learning curve of such solutions: you need to know, in detail, every configuration option, and you need to read the source code of manifests/playbooks/formulas to understand in detail how they work.
Also, in most cases these manifests focus on a single use case: cluster deployment. They do not provide enough possibilities to manage the cluster after it is up and running. When you operate the cluster, if you need to extend it with new machines, disable existing machines to do maintenance, reconfigure hosts to add new storage pools or hardware, and so on, you will need to create and debug new manifests by yourself.
Standalone solutions
Decapod and 01.org VSM are examples of standalone configuration tools. They provide you with a unified view of the whole storage system, eliminating the need to understand low level details of cluster management. They integrate with a monitoring system, and they simplify operations on the cluster. They both have a low learning curve, providing best management practices with a simple interface.
Unfortunately, VSM has some flaws, including the following:

It has tightly coupled business and automation logic, which makes it hard to extend the tool, or even customize some deployment steps
By design, it is limited in scale. It works great for small clusters, but at a bigger scale the software itself becomes a bottleneck
It lacks community support
It has an overcomplicated design

Decapod takes a slightly different approach: it separates provisioning and management logic from the start, using an official community project, ceph-ansible. Decapod uses Ansible to do all remote management work, and uses its proven ability to create scalable deployments.
The Decapod architecture
Since Decapod uses Ansible to manage remote nodes, it does not need a complex architecture. Moreover, we’ve been trying to keep it as simple as possible. The architecture looks like this:

As you can see, Decapod has two main services: API and controller.
The API service is responsible for managing entities and handling HTTP requests. If you request execution of an action on a Ceph node, the API service creates a task in the database for the controller. Each request for that task returns its status.
The Controller listens for new tasks in the database, prepares Ansible for execution (generates Ansible inventory, injects variables for playbooks) and tracks the progress of execution. Every step of the execution is trackable in the UI. You can also download the whole log afterwards.
Decapod performs every management action using a plugin, including cluster deployment and purging object storage daemons from hosts. Basically, a plugin is a playbook to execute, and a Python class used to generate the correct variables and dynamic inventory for Ansible based on the incoming class. Installation is dynamically extendable, so there is no need to redeploy Decapod with another set of plugins. Also, each plugin provides a set of sensible settings for your current setup, but if you want, you may modify every aspect and each setting.
Decapod usage
Decapod has rich CLI and UI interfaces, which enable you to manage clusters. We gave a lot of attention to the UI because we believe that a good interface can help users to accomplish  their goals without paying a lot of attention to low level details. If you want to do some operation work on a cluster, Decapod will try to help you with the most sensible settings possible.
Also, another important feature of Decapod is its ability to audit changes. Every action or operation on the cluster is trackable, and it is always possible to check the history of modifications for every entity, from its history of execution on a certain cluster to changes in the name of a user.
The Decapod workflow is rather simple, and involves a traditional user/role permission-based model of access. To deploy a cluster, you need to create it, providing a name for the deployed cluster. After that, you select the management action you want to perform and the required servers, and Decapod will generate sensible defaults for that action. If you're satisfied, you can execute this action in a single click. If not, you can tune these defaults.
You may find more information about using Decapod in our demo:

So what do you think? What are you using to deploy Ceph now, and how do you think Decapod will affect your workflow? Leave us a comment and let us know.
The post Introducing Decapod, an easier way to manage Ceph appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis

OpenStack Developer Mailing List Digest November 18-25th


Updates:

Nova placement/resource provider work [4]
New release-announce list and other changes to openstack-announce [5]
Formal Discussion of Documenting Upgrades[6]
Stewardship Working Group description/update [7]
OpenStack Liberty has reached EOL [8]
Switching test jobs from Ubuntu Trusty to Xenial on the gate is happening on December 6th [9]

A Continuously Changing Environment:

We have core developers who’ve been around for a long while stepping down and giving the opportunity to the “next generation” to take on the responsibility of leadership
Thank you for your presence, for teaching and for showing other contributors a good example by embracing open source and OpenStack

Andrew Laski (Nova): "As I've told people many times when they ask me what it's like to work on an open source project like this: working on proprietary software exposes you to smart people but you're limited to the small set of people within an organization, working on a project like this exposed me to smart people from many companies and many parts of the world. I have learned a lot working with you all. Thanks."
Carl Baldwin (Neutron): "This is a great community and I've had a great time participating and learning with you all."
Marek Denis (Keystone): "It's been a great journey, I surely learned a lot and improved both my technical and soft skills."

Thank you for all your hard work!

Community goals for Ocata:

Starting with Newton, our community commits to release goals in order to provide a minimum level of consistency and user experience and to improve certain areas OpenStack-wide [1]
The goal is to remove all remaining incubated Oslo code in Ocata [2][3]

Unit Test Setup Changes [10]:

Attempt to remove DB dependency from the unit test jobs

Special DB jobs still exist to provide a workaround where needed, along with a script in 'tools/test-setup.sh'

The long-term goal is for projects to not use the -db jobs anymore; new changes for them should not be accepted.

Project Info in README Files [11]

Increase visibility of fundamental project information that is already available on the governance web site [12]
Badges are automatically generated as part of the governance CI [13]
Every project is strongly recommended to use this new system to provide information about

The project’s state (in Big Tent or not, etc.)
Project tags
Project capabilities

[1] http://governance.openstack.org/goals/index.html
[2] http://governance.openstack.org/goals/ocata/remove-incubated-oslo-code.html
[3] https://www.youtube.com/watch?v=tW0mJZe6Jiw
[4] http://lists.openstack.org/pipermail/openstack-dev/2016-November/107600.html
[5] http://lists.openstack.org/pipermail/openstack-dev/2016-November/107629.html
[6] http://lists.openstack.org/pipermail/openstack-dev/2016-November/107570.html
[7] http://lists.openstack.org/pipermail/openstack-dev/2016-November/107712.html
[8] http://lists.openstack.org/pipermail/openstack-dev/2016-November/107184.html
[9] http://lists.openstack.org/pipermail/openstack-dev/2016-November/106906.html
[10] http://lists.openstack.org/pipermail/openstack-dev/2016-November/107784.html
[11] http://lists.openstack.org/pipermail/openstack-dev/2016-November/107966.html
[12] http://governance.openstack.org/reference/projects/index.html
[13] https://review.openstack.org/#/c/391588/
Source: openstack.org

Your Agenda for HPE Discover London 2016

Next week HPE will host more than 10,000 top IT executives, architects, engineers, partners and thought-leaders from across Europe at Discover 2016 London, November 29th – December 1st in London.
Come visit the Docker booth to learn how Docker's Containers-as-a-Service platform is transforming modern application infrastructures, allowing businesses to benefit from a more agile development environment.
Docker experts will be on hand for in-booth demos, hands-on labs, breakout sessions and Transformation Zone sessions to demonstrate how Docker's infrastructure platform provides businesses with a unifying framework to embrace hybrid infrastructures and optimize resource utilization across legacy and modern Linux and Windows applications.
Not attending Discover London? Don’t miss a thing and “Save the Date” for the live streaming of keynotes and top sessions beginning November 29th at 11:00 GMT and through the duration of the event.

Save the date – General Session Day 1
Save the date – General Session Day 2

Be sure to add these key Docker sessions to your HPE Discover London agenda:
Ongoing: Transformation Zone Hours Show Floor
DEMO315: HPE IT Docker success stories
Supercharge your container deployments on bare metal and VMs by orchestrating large workloads using simple Docker mechanisms. See how the HPE team automated hosting applications using HPE OneView, running Docker containers on bare metal and VMs for deployment and management of traditional R&D tools for build and test.
 
Tuesday,   November 29, 2016
10:30 – 11:00 Theater 1
T10749: Pick up the pace with infrastructure optimized for Docker and DevOps
Docker and DevOps can accelerate app development, but what are you doing to accelerate your Docker platform? Improving software release velocity and efficiency requires infrastructure that can keep pace with Docker. During this session, you will receive practical tips on how to quickly spin up and manage Docker DevOps environments. Take advantage of our development experiences and reference architecture best practices to leverage the HPE Hyper Converged platform so that you will have more time to focus on developing your apps.
11:30 – 12:00 Discussion Forum 6:
DF11870: Meet the expert, tips to accelerate your IT with composable infrastructure, containers, virtualization and microservices
Spend time with a Hewlett Packard Enterprise infrastructure automation expert to explore new ways to accelerate delivery of applications and IT services. Learn how to bring infrastructure as code to bare metal with HPE OneView and composable infrastructure. Find out how containers can provide an ideal environment for service deployment. Get best-practice guidance for using a microservices architecture to create small services with light use of resources, coupled with fast deployment and easy portability.
12:30 – 13:30 Capital Suite, Rm 16:
BB11866: Developer-friendly IT accelerates adoption of continuous integration and delivery to drive greater value
Are your marching orders, “Everything as code and automate everything?” If your answer is, “Yes,” then come to this Breakout Session to hear Hewlett Packard Enterprise experts share real-world use cases that address compliance at velocity, configuration drift and bare-metal provisioning. During this session, you’ll also gain best-practice insight on patch management, containers and workflow optimization strategies.

Tuesday, November 29, 2016 12:30 – 13:00 Theater 11
T11827: HPE and Docker, accelerating modern application architectures in the hybrid IT world
Businesses require a hybrid infrastructure that supports continuous delivery of new applications and services. With HPE and Docker, businesses are now able to build and run distributed applications in a hybrid IT environment faster and more cost-effectively. This partnership provides the flexibility of a true hybrid solution, with your own container and Docker apps that can run in a public or private cloud. Join us to see how HPE and Docker provide a comprehensive solution that spans the app lifecycle, and helps cut cost and reduce complexity.
 
Wednesday, November 30, 2016 11:00 – 12:00 Innovation Theater 10
SL11392: The future belongs to the fast, transform your business with IT Operations Management
Join Tony Sumpster, Senior Vice President and General Manager of Hewlett Packard Enterprise Software, along with a panel of customers, to discuss the challenges and opportunities in digital transformation. You’ll also hear about how IT operations can accelerate your transition to the digital enterprise. Transformation is driven by business needs, and innovations in hybrid cloud, machine learning and collaboration can help you realize rapid time to value and time to market, while also managing risk.
 
Wednesday, November 30, 2016 11:30 – 12:00 Connect Community
DF12121: Connect Tech Forum, from automation to Docker and Azure, a practical guide to build your cloud journey
Businesses of all sizes are feeling the need for infrastructure that’s faster and lighter on its feet. The C-Suite is looking for IT to be a catalyst for change, not a constraint. Your business is looking for public-cloud-like convenience and speed, things you, as IT Director, will be hard-pressed to provide with incremental changes. Through a company assessment, you will learn how to start your cloud journey and discover the route to Hybrid IT through practical use cases.
Read more about Docker for the Virtualization Admin in our eBook by Docker Technical Evangelist Mike Coleman, and to learn more about Docker's enterprise platform, Docker Datacenter, watch the on-demand webinar What's New in Docker Datacenter with Engine 1.12.
To start learning more about Docker and HPE, check out these additional resources:

Go to: www.docker.com/hpe
Sign up for a free 30 day trial
Read the Containers as a Service white paper


The post Your Agenda for HPE Discover London 2016 appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

What’s New in Docker Datacenter with Engine 1.12 – Demo Q&A

Last week we announced the latest release of Docker Datacenter (DDC) with Engine 1.12 integration, which includes Universal Control Plane (UCP) 2.0 and Docker Trusted Registry (DTR) 2.1. Now, IT operations teams can manage and secure their environment more effectively and developers can self-service select from an even more secure image base. Docker Datacenter with Engine 1.12 boasts improvements in orchestration and operations, end to end security (image signing, policy enforcement, mutual TLS encryption for clusters), enables Docker service deployments and includes an enhanced UI. Customers also have backwards compatibility for Swarm 1.x and Compose.

 
To showcase some of these new features we hosted a webinar where we provided an overview of Docker Datacenter, talked through some of the new features and showed a live demo of the solution. Watch the recording of the webinar below:
 

 
We hosted a Q&A session at the end of the webinar and have included some of the most common audience questions we received.
Audience Q&A
Can I still deploy run and deploy my applications built with a previous Docker Engine version?
Yes. UCP 2.0 automatically sets up and manages a Swarm cluster alongside the native built-in swarm-mode cluster from Engine 1.12 on the same set of nodes. This means that when you use "docker run" commands, they are handled by the Swarm 1.x part of the UCP cluster, which ensures full backwards compatibility with your existing Docker applications. The best part is, no additional product installation or configuration is required by the admin to make this work. In addition to this, previous versions of the Docker Engine (1.10 and 1.11) will still be supported as part of Docker Datacenter.
 
Will Docker Compose continue to work in Docker Datacenter? I.e., deploy containers to multiple hosts in a DDC cluster, as opposed to only on a single host?
In UCP, “docker-compose up” will deploy to multiple hosts on the cluster. This is different from an open-source Engine 1.12 swarm-mode, where it will only deploy on a single node, because UCP offers full backwards compatibility (using the parallel Swarm 1.x cluster, as described above). Note that you will have to use Compose v2 in order to deploy across multiple hosts, as Compose v1 format does not support multi-host deployment.
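As a rough sketch of what that looks like in practice: you download a client certificate bundle from UCP, point your Docker and Compose clients at the UCP controller, and run Compose as usual (the bundle file name here is illustrative):
$ unzip ucp-bundle-admin.zip -d ucp-bundle && cd ucp-bundle
$ source env.sh          # sets DOCKER_HOST and TLS variables to target the UCP controller
$ docker-compose up -d   # services from the v2 compose file are scheduled across the cluster
$ docker ps              # the containers may be running on different nodes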
 
For the built-in HTTP routing mesh, which external LBs are supported? Nginx, HAProxy, AWS EC2 Elastic LB? Does this work similarly to what Interlock was doing?
The experimental HTTP routing mesh (HRM) feature is focused on providing correct routing between hostnames and services, so it will  work across any of the above load balancers, as long as you configure them appropriately for this purpose.
The HRM and Interlock LB/SD feature sets provide similar capabilities but for different application architectures. HRM is used for swarm-mode based services, while Interlock is used for non-swarm-mode “docker run” containers.
For more information on these features, check out our blog post on DDC networking updates and the updated reference architecture linked within that post.
 
Will the HTTP routing mesh feature be available also in the open source free version of the docker engine?
Docker Engine 1.12 (open-source) contains the TCP-based routing mesh, which allows you to route based on ports. Docker Datacenter also provides the HTTP routing mesh feature which extends the open-source feature to allow you to route based on hostnames.
 
What is “docker service” used for and why?
A Docker service is a construct within swarm-mode that consists of a group of containers (“tasks”) from the same image. Services follow a declarative model that allows you to specify the desired state of your application: you specify how many instances of the container image you want, and swarm-mode ensures that those instances are deployed on the cluster. If any of those instances go down (e.g. because a host is lost), swarm-mode automatically reschedules them elsewhere on the cluster. The service also provides integrated load balancing and service discovery for its container instances.
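For example, with the Engine 1.12 CLI (the image and service name here are arbitrary), the declarative model looks like this:
$ docker service create --name web --replicas 3 -p 80:80 nginx
$ docker service ls            # shows desired vs. running replica counts
$ docker service scale web=5   # raise the desired state; swarm-mode converges to it
$ docker service ps web        # lists the individual tasks and the nodes they run on
The published port (-p 80:80) is served through the routing mesh mentioned above, so the service is reachable on that port on every node in the cluster.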
 
What type of monitoring of host health is built in?
The new swarm-mode in Docker Engine 1.12 uses a RAFT-based consensus algorithm to determine the health of nodes in the cluster. Each swarm manager sends regular pings to workers (and to other managers) in order to determine their current status. If the pings return an unhealthy response or do not meet the latency minimums for the cluster (configurable in the settings), then that node might be declared unhealthy and containers will be scheduled elsewhere in the cluster. In Universal Control Plane (UCP), the status of nodes is described in detail in the web UI on the dashboard and Nodes pages.
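From the command line, the same node status is summarized by the built-in swarm-mode commands, for example:
$ docker node ls                        # STATUS (Ready/Down) and AVAILABILITY (Active/Pause/Drain) per node
$ docker node inspect --pretty node-1   # detailed state for a single node; "node-1" is a placeholder name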
 
What kind of role based access controls (RBAC) are available for networks and load balancing features?
The previous version of UCP (1.1) had the ability to provide granular label-based access control for containers. We’ve since expanded that granular access control to include both services and networks, so you can use labels to define which networks a team of users has access to, and what level of access that team has. The load balancing features make use of both services and networks so will be access controlled through those resources.
 
Is it possible to enforce a criteria that only allows production DTR run only containers that are signed?
Yes, you can accomplish this using a combination of features in the new version of Docker Datacenter. DTR 2.1 contains a Notary server (Docker Content Trust), which allows you to provide your users cryptographic keys to sign images. UCP 2.0 has the ability to run only signed images on the cluster. Furthermore, you can use "delegations" to define which teams must sign the image prior to it being deployed; for example, in a low security cluster you could allow any UCP user to sign, whereas in production, you might require signatures from Release Management, Security, and Developer teams. Learn more about running images with Docker Content Trust here.
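On the developer side, signing is largely transparent. As a minimal sketch, assuming a DTR instance at dtr.example.com (a hypothetical hostname) and a repository the user may push to:
$ export DOCKER_CONTENT_TRUST=1
$ docker push dtr.example.com/engineering/web:1.0   # the pushed tag is signed with the user's content trust keys
With signed-image enforcement turned on in UCP, images that lack the required signatures are rejected at deployment time.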
 
As a very large enterprise doing various POCs for Docker, one of the big questions is vulnerabilities in the open source code that can be part of base images. Is there anything that Docker is developing to counter this?
Earlier this year, we announced Docker Security Scanning, which provides a detailed security profile of Docker images for risk management and software compliance purposes. Docker Security Scanning is currently available for private repositories in Docker Cloud private and coming soon to Docker Datacenter.
 
Is there any possibility to trace which user is accessing a container?
Yes, you can use audit logging. To provide auditing of your cluster, you can utilize UCP’s Remote Log Server feature. This allows you to send system debug information to a syslog server of your choice, including a full list of all commands run against the UCP cluster. This would include information such as which user attempted to deploy or access a container.
 
What checks does the new DDC have for potential noisy neighbor container scenarios, or for rogue containers that can potentially hog the underlying infrastructure?
One of the ways you can provide a check against noisy neighbor scenarios is through the use of runtime resource constraints. These allow you to set limits on the system resources (e.g., CPU, memory) that any given container is allowed to use. These are configurable within the UI.
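For example, limits can be applied both to classic containers and to swarm-mode services (the values and image are arbitrary):
$ docker run -d --memory 512m --cpu-shares 512 nginx                          # per-container limits
$ docker service create --name api --limit-memory 512m --limit-cpu 0.5 nginx  # per-service limits applied to each task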
 
Do you have a trial license for Docker Datacenter ?
We offer a free 30-day trial of Docker Datacenter. Trial software can be accessed by visiting the Docker Store – www.docker.com/trial
 
For pricing, is a node defined as a host machine or a container?
The subscription is licensed and priced on a per node per year basis. A node is anything with the Docker Commercially Supported (CS) Engine installed on it. It could be a bare metal server, cloud instance or within a virtual machine. More pricing details are available here.
 
More Resources:

Request a demo of the latest Docker Datacenter
See What’s New in Docker Datacenter
Learn more by visiting the Docker Datacenter webpage
Sign up for a free 30 day trial


The post What’s New in Docker Datacenter with Engine 1.12 &8211; Demo Q&;A appeared first on Docker Blog.
Source: https://blog.docker.com/feed/