“Dear Boss, I want to attend the OpenStack Summit”

Want to attend the OpenStack Summit Boston but need help with the right words for getting your trip approved? While we won't write the whole thing for you, here's a template to get you going. It's up to you to decide how the Summit will help your team, but with free workshops and trainings, technical sessions, strategy talks and the opportunity to meet thousands of like-minded Stackers, we don't think you'll have a hard time finding an answer.
 
Dear [Boss],
I would like to attend the OpenStack Summit in Boston, May 8-11, 2017. The OpenStack Summit is the largest open source conference in North America, and the only one where I can get free OpenStack training, learn how to contribute code upstream to the project, and meet with other users to learn how they’ve been using OpenStack in production. The Summit is an opportunity for me to bring back knowledge about [Why you want to attend! What are you hoping to learn? What would benefit your team?] and share it with our team, while helping us get to know similar OpenStack-minded teams around the world (think 60+ countries and nearly 1,200 companies represented).
If I register before mid-March, I get early bird pricing: $600 USD for 4 days (plus an optional day of training). Early registration also allows me to RSVP for trainings and workshops as soon as they open (they always sell out!), or sign up to take the Certified OpenStack Administrator exam onsite.
At the OpenStack Summit Austin last year, over 7,800 attendees heard case studies from Superusers like AT&T and China Mobile, learned how teams are using containers and container orchestration like Kubernetes with OpenStack, and gave feedback to Project Teams about user needs for the upcoming software release. You can browse past Summit content at openstack.org/videos to see a sample of the conference talks.
The OpenStack Summit is the opportunity for me to expand my OpenStack knowledge, network and skills. Thanks for considering my request.
[Your Name]
Source: openstack.org

OpenStack Developer Mailing List Digest December 17-23

SuccessBot Says

AJaeger: We've now got the first Deployment guide published for Newton, see http://docs.openstack.org/project-deploy-guide/newton/. Congrats to the OpenStack Ansible team!
clarkb: OpenStack CI has moved off of Ubuntu Trusty and onto Ubuntu Xenial for testing Newton and master.
ihrachys: first oslo.privsep patch landed in Neutron.
dulek: Cinder now supports ZeroMQ messaging!
All

Release Countdown for Week R-8, 26-30 December

Feature work and major refactoring should be well under way as we pass the second milestone.
Focus:

Deadline for non-client library releases is R-5 (19 Jan).

Feature freeze exceptions are not granted for libraries.

General Notes:

Project teams should identify contributors who have had a significant impact this cycle but who do not otherwise qualify for ATC status.
Those names should be added to the governance repository for consideration as extra ATCs.
The list needs to be approved by the TC by 20 January to qualify for contributor discount codes for the event.
Submit these by 5 January.

Important Dates:

Extra ATCs deadline: 5 January
Final release of non-client libraries: 19 January
Ocata 3 Milestone, with Feature and Requirements freezes: 26 January

Ocata release schedule [1]
Full thread

StoryBoard Lives

There is still momentum behind moving to StoryBoard as our task tracker.
To spread awareness, some blog posts have been written about it and its capabilities:

A general overview and the decision to move from Launchpad [2].
The next post will compare and contrast Launchpad and StoryBoard.

If you want to hear about something in particular in the blog posts, let the team know in the StoryBoard IRC channel on Freenode.
Attend their weekly meeting [3].
Try out Storyboard in the sandbox [4].
Storyboard documentation [5]
Full thread

 
[1] http://releases.openstack.org/ocata/schedule.html
[2] https://storyboard-blog.sotk.co.uk/why-storyboard-for-openstack.html
[3] https://wiki.openstack.org/wiki/StoryBoard
[4] https://storyboard-dev.openstack.org/
[5] http://docs.openstack.org/infra/storyboard/
Source: openstack.org

OpenStack Developer Mailing List Digest December 31 – January 6

SuccessBot Says

Dims: Keystone now has a Devstack-based functional test with everything running under Python 3.5.
Tell us yours via the OpenStack IRC channels with the message "#success <message>".
All

Time To Retire Nova-docker

nova-docker has lagged behind Nova development for the last 6 months.
It no longer passes simple CI unit tests.

There are patches to at least get the unit tests working [1].

If the core team no longer has time for it, perhaps we should just archive it.
People ask about it on openstack-nova about once or twice a year, but it’s not recommended as it’s not maintained.
It’s believed some people are running and hacking on it outside of the community.
The Zun project provides a lifecycle management interface for containers that are started in container orchestration engines provisioned with Magnum.
The nova-lxc driver provides the ability to treat containers like virtual machines [2].

It is not recommended for production use, though it is still better maintained than nova-docker [3].

Nova-lxd also provides the ability to treat containers like virtual machines.
Virtuozzo, which is supported in Nova via libvirt, provides both virtual machines and OS containers similar to LXC.

These containers have been in production for more than 10 years already.
Virtuozzo is well maintained and actually has CI testing.

A proposal to remove it has been posted [4].
Full thread

Community Goals For Pike

A few months ago the community started identifying work for OpenStack-wide goals to “achieve visible common changes, push for basic levels of consistency and user experience, and efficiently improve certain areas where technical debt payments have become too high, across all OpenStack projects.”
The first goal defined [5] is to remove copies of incubated Oslo code.
Moving forward in Pike:

Collect feedback on our first iteration. What went well and what was challenging?
Etherpad for feedback [6]

Goals backlog [7]

New goals welcome
Each goal should be achievable in one cycle. If not, it should be broken up.
Some goals might require documentation for how they could be achieved.

Choose goals for Pike

What is really urgent? What can wait for six months?
Who is available and interested in contributing to the goal?

Feedback was also collected at the Barcelona Summit [8]
Digest of feedback:

Most projects achieved the goal for Ocata, and there was interest in doing it on time.
There was some confusion between acknowledging a goal and doing the work.
Some projects were slow on the uptake and in reviewing the patches.
Each goal should document where the “guides” are and how to find them for help.
Achieving multiple goals in a single cycle wouldn't be possible for all teams.

The OpenStack Product Working Group is also collecting feedback for goals [9]
Goals set for Pike:

Split out Tempest plugins [10]
Python 3 [11]

TC agreements from the last meeting:

2 goals might be enough for the Pike cycle.
The deadline to define Pike goals would be Ocata-3 (Jan 23-27 week).

Full thread

POST /api-wg/news

Guidelines currently under review:

Add guidelines on usage of state vs. status [12]
Add guidelines for boolean names [13]
Clarify the status values in versions [14]
Define pagination guidelines [15]
Add API capabilities discovery guideline [16]

Full thread

 
Source: openstack.org

Introduction to YAML: Creating a Kubernetes deployment

In previous articles, we've been talking about how to use Kubernetes to spin up resources. So far, we've been working exclusively on the command line, but there's an easier and more useful way to do it: creating configuration files using YAML. In this article, we'll look at how YAML works and use it to define first a Kubernetes Pod, and then a Kubernetes Deployment.
YAML Basics
It's difficult to escape YAML if you're doing anything related to many software fields, particularly Kubernetes, SDN, and OpenStack. YAML, which stands for Yet Another Markup Language, or YAML Ain't Markup Language (depending on who you ask), is a human-readable, text-based format for specifying configuration-type information. For example, in this article, we'll pick apart the YAML definitions for creating first a Pod, and then a Deployment.
Using YAML for K8s definitions gives you a number of advantages, including:

Convenience: You'll no longer have to add all of your parameters to the command line
Maintenance: YAML files can be added to source control, so you can track changes
Flexibility: You'll be able to create much more complex structures using YAML than you can on the command line

YAML is a superset of JSON, which means that any valid JSON file is also a valid YAML file. So on the one hand, if you know JSON and you're only ever going to write your own YAML (as opposed to reading other people's), you're all set. On the other hand, that's not very likely, unfortunately. Even if you're only trying to find examples on the web, they're most likely in (non-JSON) YAML, so we might as well get used to it. Still, there may be situations where the JSON format is more convenient, so it's good to know that it's available to you.
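To see what that means in practice, here's a quick illustration (a minimal sketch using keys we'll meet again shortly): the JSON form and the block form below describe exactly the same data, and both are valid YAML.
# JSON-style "flow" syntax -- valid YAML as-is
{"apiVersion": "v1", "kind": "Pod"}

# The same data in YAML's usual block style
apiVersion: v1
kind: Pod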
Fortunately, there are only two types of structures you need to know about in YAML:

Lists
Maps

That's it. You might have maps of lists and lists of maps, and so on, but if you've got those two structures down, you're all set. That's not to say there aren't more complex things you can do, but in general, this is all you need to get started.
YAML Maps
Let's start by looking at YAML maps. Maps let you associate name-value pairs, which of course is convenient when you're trying to set up configuration information. For example, you might have a config file that starts like this:

---
apiVersion: v1
kind: Pod
The first line is a separator, and is optional unless you're trying to define multiple structures in a single file. From there, as you can see, we have two values, v1 and Pod, mapped to two keys, apiVersion and kind.
This kind of thing is pretty simple, of course, and you can think of it in terms of its JSON equivalent:
{
  "apiVersion": "v1",
  "kind": "Pod"
}
Notice that in our YAML version, the quotation marks are optional; the processor can tell that you're looking at a string based on the formatting.
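One caveat worth adding here (a small aside, not part of the original example, and the keys are purely illustrative): quotes do matter when you want something that looks like a number or a boolean to be treated as a string.
replicas: 2        # unquoted, so it's parsed as an integer
version: "1.0"     # quoted, so it stays the string "1.0" rather than the float 1.0
debug: "false"     # the string "false", not the boolean false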
You can also specify more complicated structures by creating a key that maps to another map, rather than a string, as in:

---
apiVersion: v1
kind: Pod
metadata:
  name: rss-site
  labels:
    app: web
In this case, we have a key, metadata, that has as its value a map with 2 more keys, name and labels. The labels key itself has a map as its value. You can nest these as far as you want to.
The YAML processor knows how all of these pieces relate to each other because we've indented the lines. In this example I've used 2 spaces for readability, but the number of spaces doesn't matter, as long as it's at least 1, and as long as you're CONSISTENT. For example, name and labels are at the same indentation level, so the processor knows they're both part of the same map; it knows that app is a value for labels because it's indented further.
Quick note: NEVER use tabs in a YAML file.
So if we were to translate this to JSON, it would look like this:
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "rss-site",
    "labels": {
      "app": "web"
    }
  }
}
Now let's look at lists.
YAML lists
YAML lists are literally a sequence of objects.  For example:
args:
  - sleep
  - "1000"
  - message
  - "Bring back Firefly!"
As you can see here, you can have virtually any number of items in a list, which is defined as items that start with a dash (-) indented from the parent.  So in JSON, this would be:
{
  "args": ["sleep", "1000", "message", "Bring back Firefly!"]
}
And of course, members of the list can also be maps:

---
apiVersion: v1
kind: Pod
metadata:
  name: rss-site
  labels:
    app: web
spec:
  containers:
    - name: front-end
      image: nginx
      ports:
        - containerPort: 80
    - name: rss-reader
      image: nickchase/rss-php-nginx:v1
      ports:
        - containerPort: 88
So as you can see here, we have a list of container "objects", each of which consists of a name, an image, and a list of ports. Each list item under ports is itself a map that lists the containerPort and its value.
For completeness, let's quickly look at the JSON equivalent:
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "rss-site",
    "labels": {
      "app": "web"
    }
  },
  "spec": {
    "containers": [{
      "name": "front-end",
      "image": "nginx",
      "ports": [{
        "containerPort": 80
      }]
    },
    {
      "name": "rss-reader",
      "image": "nickchase/rss-php-nginx:v1",
      "ports": [{
        "containerPort": 88
      }]
    }]
  }
}
As you can see, we're starting to get pretty complex, and we haven't even gotten into anything particularly complicated! No wonder YAML is replacing JSON so fast.
So let's review. We have:

maps, which are groups of name-value pairs
lists, which are sequences of items
maps of maps
maps of lists
lists of lists
lists of maps

Basically, whatever structure you want to put together, you can do it with those two structures.  
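For instance, here's a small, purely hypothetical snippet (not part of any Kubernetes object) that combines the two: a map whose value is another map, which in turn holds a list of maps:
# hypothetical config: a map containing a list of maps
notifications:
  channels:
    - type: email
      address: ops@example.com
    - type: irc
      channel: "#alerts"   # quoted so the # isn't read as a comment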
Creating a Pod using YAML
OK, so now that we've got the basics out of the way, let's look at putting this to use. We're going to first create a Pod, then a Deployment, using YAML.
If you haven't set up your cluster and kubectl, go ahead and check out this article series on setting up Kubernetes before you go on. It's OK, we'll wait…

Back already?  Great!  Let&8217;s start with a Pod.
Creating the pod file
In our previous example, we described a simple Pod using YAML:
---
apiVersion: v1
kind: Pod
metadata:
  name: rss-site
  labels:
    app: web
spec:
  containers:
    - name: front-end
      image: nginx
      ports:
        - containerPort: 80
    - name: rss-reader
      image: nickchase/rss-php-nginx:v1
      ports:
        - containerPort: 88
Taking it apart one piece at a time, we start with the API version; here it's just v1. (When we get to Deployments, we'll have to specify a different version because Deployments don't exist in v1.)
Next, we're specifying that we want to create a Pod; we might specify instead a Deployment, Job, Service, and so on, depending on what we're trying to achieve.
Next we specify the metadata. Here we're specifying the name of the Pod, as well as the label we'll use to identify the pod to Kubernetes.
Finally, we'll specify the actual objects that make up the pod. The spec property includes any containers, storage volumes, or other pieces that Kubernetes needs to know about, as well as properties such as whether to restart the container if it fails. You can find a complete list of Kubernetes Pod properties in the Kubernetes API specification, but let's take a closer look at a typical container definition:
...
spec:
  containers:
    - name: front-end
      image: nginx
      ports:
        - containerPort: 80
    - name: rss-reader
...
In this case, we have a simple, fairly minimal definition: a name (front-end), the image on which it's based (nginx), and one port on which the container will listen internally (80). Of these, only the name is really required, but in general, if you want it to do anything useful, you'll need more information.
You can also specify more complex properties, such as a command to run when the container starts, arguments it should use, a working directory, or whether to pull a new copy of the image every time it's instantiated. You can also specify even deeper information, such as the location of the container's exit log. Here are the properties you can set for a Container (a short sketch using a few of them follows the list):

name
image
command
args
workingDir
ports
env
resources
volumeMounts
livenessProbe
readinessProbe
lifecycle
terminationMessagePath
imagePullPolicy
securityContext
stdin
stdinOnce
tty
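As an example, a container definition that fills in a few of these optional properties might look something like the following. This is only a sketch: the command, arguments, working directory and pull policy shown here are illustrative, not something our rss-site Pod actually needs.
spec:
  containers:
    - name: front-end
      image: nginx
      # run nginx in the foreground so the container doesn't exit immediately
      command: ["nginx"]
      args: ["-g", "daemon off;"]
      workingDir: /usr/share/nginx/html
      # always pull a fresh copy of the image when the container starts
      imagePullPolicy: Always
      ports:
        - containerPort: 80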

Now let's go ahead and actually create the pod.
Creating the pod using the YAML file
The first step, of course, is to go ahead and create a text file.   Call it pod.yaml and add the following text, just as we specified it earlier:
---
apiVersion: v1
kind: Pod
metadata:
  name: rss-site
  labels:
    app: web
spec:
  containers:
    - name: front-end
      image: nginx
      ports:
        - containerPort: 80
    - name: rss-reader
      image: nickchase/rss-php-nginx:v1
      ports:
        - containerPort: 88
Save the file, and tell Kubernetes to create its contents:
> kubectl create -f pod.yaml
pod "rss-site" created
As you can see, K8s references the name we gave the Pod.  You can see that if you ask for a list of the pods:
> kubectl get pods
NAME       READY     STATUS              RESTARTS   AGE
rss-site   0/2       ContainerCreating   0          6s
If you check early enough, you can see that the pod is still being created.  After a few seconds, you should see the containers running:
> kubectl get pods
NAME       READY     STATUS    RESTARTS   AGE
rss-site   2/2       Running   0          14s
From here, you can test out the Pod (just as we did in the previous article), but ultimately we want to create a Deployment, so let's go ahead and delete it so there aren't any name conflicts:
> kubectl delete pod rss-site
pod "rss-site" deleted
Troubleshooting pod creation
Sometimes, of course, things don't go as you expect. Maybe you've got a networking issue, or you've mistyped something in your YAML file. You might see an error like this:
> kubectl get pods
NAME       READY     STATUS         RESTARTS   AGE
rss-site   1/2       ErrImagePull   0          9s
In this case, we can see that one of our containers started up just fine, but there was a problem with the other.  To track down the problem, we can ask Kubernetes for more information on the Pod:
> kubectl describe pod rss-site
Name:           rss-site
Namespace:      default
Node:           10.0.10.7/10.0.10.7
Start Time:     Sun, 08 Jan 2017 08:36:47 +0000
Labels:         app=web
Status:         Pending
IP:             10.200.18.2
Controllers:    <none>
Containers:
 front-end:
   Container ID:               docker://a42edaa6dfbfdf161f3df5bc6af05e740b97fd9ac3d35317a6dcda77b0310759
   Image:                      nginx
   Image ID:                   docker://sha256:01f818af747d88b4ebca7cdabd0c581e406e0e790be72678d257735fad84a15f
   Port:                       80/TCP
   State:                      Running
     Started:                  Sun, 08 Jan 2017 08:36:49 +0000
   Ready:                      True
   Restart Count:              0
   Environment Variables:      <none>
 rss-reader:
   Container ID:
   Image:                      nickchase/rss-php-nginx
   Image ID:
   Port:                       88/TCP
   State:                      Waiting
    Reason:                   ErrImagePull
   Ready:                      False
   Restart Count:              0
   Environment Variables:      <none>
Conditions:
 Type          Status
 Initialized   True
 Ready         False
 PodScheduled  True
No volumes.
QoS Tier:       BestEffort
Events:
 FirstSeen     LastSeen        Count   From                    SubobjectPath  Type             Reason                  Message
 ---------     --------        -----   ----                    -------------  --------         ------                  -------
 45s           45s             1       {default-scheduler }                   Normal           Scheduled               Successfully assigned rss-site to 10.0.10.7
 44s           44s             1       {kubelet 10.0.10.7}     spec.containers{front-end}      Normal          Pulling                 pulling image "nginx"
 45s           43s             2       {kubelet 10.0.10.7}                    Warning          MissingClusterDNS       kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
 43s           43s             1       {kubelet 10.0.10.7}     spec.containers{front-end}      Normal          Pulled                  Successfully pulled image "nginx"
 43s           43s             1       {kubelet 10.0.10.7}     spec.containers{front-end}      Normal          Created                 Created container with docker id a42edaa6dfbf
 43s           43s             1       {kubelet 10.0.10.7}     spec.containers{front-end}      Normal          Started                 Started container with docker id a42edaa6dfbf
 43s           29s             2       {kubelet 10.0.10.7}     spec.containers{rss-reader}     Normal          Pulling                 pulling image "nickchase/rss-php-nginx"
 42s           26s             2       {kubelet 10.0.10.7}     spec.containers{rss-reader}     Warning         Failed                  Failed to pull image "nickchase/rss-php-nginx": Tag latest not found in repository docker.io/nickchase/rss-php-nginx
 42s           26s             2       {kubelet 10.0.10.7}                    Warning          FailedSync              Error syncing pod, skipping: failed to "StartContainer" for "rss-reader" with ErrImagePull: "Tag latest not found in repository docker.io/nickchase/rss-php-nginx"

 41s   12s     2       {kubelet 10.0.10.7}     spec.containers{rss-reader}    Normal   BackOff         Back-off pulling image "nickchase/rss-php-nginx"
 41s   12s     2       {kubelet 10.0.10.7}                                    Warning  FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "rss-reader" with ImagePullBackOff: "Back-off pulling image "nickchase/rss-php-nginx""
As you can see, there's a lot of information here, but we're most interested in the Events, specifically the point where the warnings and errors start showing up. From here I was able to quickly see that I'd forgotten to add the :v1 tag to my image, so it was looking for the :latest tag, which didn't exist.
To fix the problem, I first deleted the Pod, then fixed the YAML file and started again. Alternatively, I could have fixed the repo so that Kubernetes could find what it was looking for, and the Pod would have continued on as though nothing had happened.
Now that we've successfully gotten a Pod running, let's look at doing the same for a Deployment.
Creating a Deployment using YAML
Finally, we're down to creating the actual Deployment. Before we do that, though, it's worth understanding what it is we're actually doing.
K8s, remember, manages container-based resources. In the case of a Deployment, you're creating a set of resources to be managed. For example, where we created a single instance of the Pod in the previous example, we might create a Deployment to tell Kubernetes to manage a set of replicas of that Pod (literally, a ReplicaSet) to make sure that a certain number of them are always available. So we might start our Deployment definition like this:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rss-site
spec:
  replicas: 2
Here we're specifying the apiVersion as extensions/v1beta1 (remember, Deployments aren't in v1, as Pods were), and that we want a Deployment. Next we specify the name. We can also specify any other metadata we want, but let's keep things simple for now.
Finally, we get into the spec. In the Pod spec, we gave information about what actually went into the Pod; we'll do the same thing here with the Deployment. We'll start, in this case, by saying that whatever Pods we deploy, we always want to have 2 replicas. You can set this number however you like, of course, and you can also set properties such as the selector that defines the Pods affected by this Deployment, or the minimum number of seconds a pod must be up without any errors before it's considered "ready". You can find a full list of the Deployment specification properties in the Kubernetes v1beta1 API reference.
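For example, a spec that pins down the selector and requires Pods to run error-free for ten seconds before they count as available might look like this. This is just a sketch against the extensions/v1beta1 schema; we don't actually add these fields to the Deployment we build below.
spec:
  replicas: 2
  # a Pod must run for 10 seconds without errors before it's considered available
  minReadySeconds: 10
  # manage only the Pods whose labels match this selector
  selector:
    matchLabels:
      app: web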
OK, so now that we know we want 2 replicas, we need to answer the question: "Replicas of what?" They're defined by templates:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rss-site
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: front-end
          image: nginx
          ports:
            - containerPort: 80
        - name: rss-reader
          image: nickchase/rss-php-nginx:v1
          ports:
            - containerPort: 88
Look familiar? It should; it's virtually identical to the Pod definition in the previous section, and that's by design. Templates are simply definitions of objects to be replicated, objects that might, in other circumstances, be created on their own.
Now let's go ahead and create the deployment. Add the YAML to a file called deployment.yaml and point Kubernetes at it:
> kubectl create -f deployment.yaml
deployment "rss-site" created
To see how it's doing, we can check on the deployments list:
> kubectl get deployments
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
rss-site   2         2         2            1           7s
As you can see, Kubernetes has started both replicas, but only one is available. You can check the event log by describing the Deployment, as before:
> kubectl describe deployment rss-site
Name:                   rss-site
Namespace:              default
CreationTimestamp:      Mon, 09 Jan 2017 17:42:14 +0000
Labels:                 app=web
Selector:               app=web
Replicas:               2 updated | 2 total | 1 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:         <none>
NewReplicaSet:          rss-site-4056856218 (2/2 replicas created)
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type            Reason                  Message
 ---------     --------        -----   ----                            -------------   --------        ------                  -------
 46s           46s             1       {deployment-controller }               Normal           ScalingReplicaSet       Scaled up replica set rss-site-4056856218 to 2
As you can see here, there's no problem; it just hasn't finished scaling up yet. Another few seconds, and we can see that both Pods are running:
> kubectl get deployments
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
rss-site   2         2         2            2           1m
What we've seen so far
OK, so let's review. We've basically covered three topics:

YAML is a human-readable, text-based format that lets you easily specify configuration-type information by using a combination of maps of name-value pairs and lists of items (and nested versions of each).
YAML is the most convenient way to work with Kubernetes objects, and in this article we looked at creating Pods and Deployments.
You can get more information on running (or should-be-running) objects by asking Kubernetes to describe them.

So that's our basic YAML tutorial. We're going to be tackling a great deal of Kubernetes-related content in the coming months, so if there's something specific you want to hear about, let us know in the comments, or tweet us at @MirantisIT.
Source: Mirantis

DockerCon workshops: Which one will you be attending?

Following last year's major success, we are excited to bring back and expand the paid workshops at DockerCon 2017. The pre-conference workshops will focus on a range of subjects, from Docker 101 to deep dives into networking, Docker for Java and advanced orchestration. Each workshop is designed to give you hands-on instruction and insight on key Docker topics, taught by Docker Engineers and Docker Captains. The workshops are a great opportunity to get better acquainted and excited about Docker technology to start off DockerCon week.

Take advantage of the lowest DockerCon pricing and get your Early Bird Ticket + Workshop now! Early Bird Tickets are limited and will sell out in the next two weeks!
Here are the basics of the DockerCon workshops:
Date: Monday, April 17, 2017
Time: 2:00pm to 5:00pm
Where: Austin Convention Center, 500 E. Cesar Chavez Street, Austin, TX
Cost: $150
Class size: Classes will remain small and are limited to 50 attendees per class.
Registration: The workshops are only open to DockerCon attendees. You can register for the workshops as an add-on package through the registration site here.

Below are overviews of each workshop. To learn more about each topic head over to the DockerCon 2017 registration site.
Learn Docker
If you are just getting started learning about Docker and want to get up to speed, this is the workshop for you. Come learn Docker basics including running containers, building images and basics on networking, orchestration, security and  volumes.
Orchestration Workshop: Beginner
You've installed Docker, you know how to run containers, you've written Dockerfiles to build container images for your applications (or parts of your applications), and perhaps you're even using Compose to describe your application stack as an assemblage of multiple containers.
But how do you go to production? What modifications are necessary in your code to allow it to run on a cluster? (Spoiler alert: very little, if any.) How does one set up such a cluster, anyway? Then how can we use it to deploy and scale applications with high availability requirements?
In this workshop, we will answer those questions using tools from the Docker ecosystem, with a strong focus on the native orchestration capabilities available since Docker Engine 1.12, aka "Swarm Mode."
Orchestration Workshop: Advanced
Already using Docker and recently started using Swarm Mode in 1.12? Let's start where previous Orchestration workshops may have left off, and dive into monitoring, logging, troubleshooting, and security of Docker Engine and Docker services (Swarm Mode) for production workloads. Pulled from real-world deployments, we'll cover centralized logging with ELK, SaaS, and others, monitoring/alerting with cAdvisor and Prometheus, backups of persistent storage, optional security features (namespaces, seccomp and AppArmor profiles, Notary), and a few CLI tools for troubleshooting. Come away ready to take your Swarm to the next level!
Stay tuned as more workshop topics will be announced in the coming weeks! The workshops will sell out, so act fast and add the pre-conference workshops to your DockerCon 2017 registration!
Docker Networking
In this 3-hour, instructor-led training, you will get an in-depth look into Docker Networking. We will cover all the networking features natively available in Docker and take you through hands-on exercises designed to help you learn the skills you need to deploy and maintain Docker containers in your existing network environment.
Docker Store for Publishers
This workshop is designed to help potential Docker Store Publishers to understand the process, the best practices and the workflow of creating and publishing great content. You will get to interact with the members of the Docker Store’s engineering team. Whether you are an established ISV, a startup trying to distribute your software creation using Docker Containers or an independent developer, just trying to reach as many users as possible, you will benefit from this workshop by learning how to create and distribute trusted and Enterprise-ready content for the Docker Store.
Docker for Java Developers
Docker provides PODA (Package Once Deploy Anywhere) and complements WORA (Write Once Run Anywhere) provided by Java. It also helps you reduce the impedance mismatch between dev, test, and production environments, and simplifies Java application deployment.
This workshop will explain how to:

Run your first Java application with Docker
Package your Java application with Docker
Share your Java application using Docker Hub
Deploy your Java application using Maven
Deploy your application using Docker for AWS
Scale Java services with Docker Engine swarm mode
Package your multi-container application and use service discovery
Monitor your Docker + Java applications
Build a deployment pipeline using common tools

Hands-On Docker for Raspberry Pi
Take part in our first-of-a-kind hands-on Raspberry Pi and Docker workshop where you will be given all the hardware you need to start creating and deploying containers with Docker including an 8-LED RGB add-on from Pimoroni. You will learn the subtleties of working with an ARM processor and how to control physical hardware through the GPIO interface. Programming experience is not required but a basic understanding of Python is helpful.
Microservices Lifecycle Explained Through Docker and Continuous Deployment
The workshop will go through the whole microservices development lifecycle. We’ll start from the very beginning and define and design architecture. From there on we’ll do some coding and testing all the way until the final deployment to production. Once our new services are up and running we’ll see how to maintain them, scale them, and recover them in case of failures. The goal will be to design a fully automated continuous deployment (CDP) pipeline with Docker containers.
During the workshop we'll explore tools like Docker Engine with built-in orchestration via swarm mode, Docker Compose, Jenkins, HAProxy, and a few others.
Modernizing Monolithic ASP.NET Applications with Docker
Learn how to use Docker to run traditional ASP.NET applications in Windows containers without an application re-write. We'll use Docker tools to containerize a monolithic ASP.NET app, then see how the platform helps us iterate quickly, pulling high-value features out of the app and running them in separate containers. This workshop gives you a roadmap for modernizing your own ASP.NET workloads.


Source: https://blog.docker.com/feed/

containerd livestream recap

In case you missed it last month, we announced that Docker is extracting a key component of its platform, a part of the engine plumbing called containerd, a core container runtime, and committed to donating it to an open foundation.
You can find the up-to-date roadmap, architecture and API definitions in the GitHub repository, and more details about the project in our engineering team's blog post.

You can also watch the following video recording of the containerd online meetup, for a summary and Q&A with Arnaud Porterie, Michael Crosby, Stephen Day, Patrick Chanezon and Solomon Hykes from the Docker team:

Here is the list of top questions we got following this announcement:
Q. Are you planning to run Docker without runC?
A. Although runC is the default runtime, as of Docker 1.12 it can be replaced by any other OCI-compliant implementation. Docker will be compliant with the OCI Runtime Specification.
Q. What major changes are on the roadmap for swarmkit to run on containerd if any? 
A. SwarmKit is using Docker Engine to orchestrate tasks, and Docker Engine is already using containerd for container execution. So technically, you are already using containerd when using SwarmKit. There is no plan currently to have SwarmKit directly orchestrate containerd containers though.
Q. Mind sharing why you went with gRPC for the API?
A. containerd is a component designed to be embedded in a higher level system, and serve a host-local API over a socket. gRPC enables us to focus on designing RPC calls and data structures instead of having to deal with JSON serialization and HTTP error codes. This improves iteration speed when designing the API and data structures. For higher level systems that embed containerd, such as Docker or Kubernetes, a JSON/HTTP API makes more sense, allowing easier integration. The Docker API will not change, and will continue to be based on JSON/HTTP.
Q. How do you expect to see others leverage containerd outside of Docker?
A. Cloud managed container services such as Amazon ECS, Microsoft ACS, Google Container Engine, or orchestration tools such as Kubernetes or Mesos can leverage containerd as their core container runtime. containerd has been designed to be embedded for that purpose.
Q. How did you decide which features should get into containerd? How did you come up with its scope?
A. We’re trying to capture in containerd the features that any container-centric platform would need, and for which there’s reasonable consensus on the way it should be implemented. Aspects which are either not widely agreed on or that can trivially be built one layer up were left out.
Q. How will containerd integrate with CNI and CNM?
A. Phase 3 of the containerd roadmap involves porting the network drivers from libnetwork and finding a good middle ground between the CNM abstraction of libnetwork and the CNI spec.
Additional Resources:

Contribute to containerd
Join the containerd slack channel
Read the engineering team’s blog post.


Source: https://blog.docker.com/feed/

DockerCon 2017: Call For Papers FAQ

It's a new year, and we are looking for new stories of how you are using technology to do big things. Submit your cool hack, use case or deep dive session before the DockerCon 2017 CFP closes on January 14th.

To help with your submissions, we’ve answered the most frequent questions below and put together a list of tips to help get your proposal selected.
Q. How do I submit a proposal?
A. Submit your proposal here.
Q. What kind of talks are you looking for?
A. This year, we are looking for cool hacks, user stories and deep dive submissions:

Cool Hacks: Show us your cool hack and wow us with the interesting ways you can push the boundaries of the Docker stack. You do not have to have your hack ready by the submission deadline, just clearly explain your hack, what makes it cool and the technologies you will use.

Using Docker: Tell us first-hand about your Docker usage, challenges and what you learned along the way and inspire us on how to use Docker to accomplish real tasks.

Deep Dives: Propose code and demo heavy deep-dive sessions on what you have been able to transform with your use of the Docker stack. Entice your audience by going deeply technical and teach them how to do something they haven’t done.

Above all, DockerCon is a user conference, and product and vendor pitches are not appropriate.
Q. What will I need to include in my submission?
A. Speaking proposals will ask for:

Title, the more catchy and descriptive, the better. But don't be too cute.
Abstract describing the presentation. This is what gets shown in the agenda and how the audience decides if they want to attend your session.
Key Takeaways that communicate your session’s main idea and conclusion. This is your gift to the audience, what will they learn from your session and be able to apply when they get back to work the following week.
Speaker(s): expertise and summary biography
Suggested tags
Past Speaking examples
Recommendations of appropriate audience.

Q. How can I increase the odds of my proposal being selected?
A. Check out the following resources:

Read our tips to help get your proposal selected
See the list of sessions chosen for the 2016 DockerCon and DockerCon EU 2015 programs and read their descriptions
Watch videos from previous DockerCons
See speaker slides from previous DockerCons.

Q. How are submissions selected?
A. After a proposal is submitted, it will be reviewed initially for content and format. Once past the initial review, a committee will read the proposals and vote on best submissions. There are a limited number of speaking slots and we work to achieve a balance of presentations that will interest the Docker community.
Q. How will Speakers be compensated?
A. One speaker for every session will be given a full conference pass. Any additional speakers will be given a pass at the Early Bird rate.
Q. Will there be a Speaker room at the conference?
A. Yes, we will provide a Speaker Ready room for speakers to prepare for presentations, relax and mingle. Speakers should check in with the DockerCon 2017 speaker manager on the day of your talk in the Speaker Room and make sure you are all set for your talk.
Q. What are the important dates to remember?
A.

Call for Proposals Closes: January 14, 2017 at 11:59 PST
All proposers notified: Late February
Program announced: Late February
Submit your proposal: Today!


Source: https://blog.docker.com/feed/

Top Docker content of 2016

2016 has been an amazing year for Docker and the container industry. We had 3 major releases of Docker Engine this year, and a tremendous increase in usage. The community has been following along and contributing amazing Docker resources to help you learn and get hands-on experience. Here's some of the top read and viewed content for the year:
Releases
Of course releases are always really popular, particularly when they fit requests we had from the community. In particular, we had:

Docker for Mac & Docker for Windows Beta and GA release blog posts, and the video

Docker 1.12 Built-in Orchestration release, and the DockerCon keynote where we announced it

And the release of the Docker for AWS and Azure beta

Windows Containers
When Microsoft made Windows Server 2016 generally available, people rushed to

Our release blog to read the news
Tutorials to find out how to use Windows containers powered by Docker
The commercial relationship blog post to understand how it all fits together

About Docker
We also provide a lot of information about how to use Docker. In particular, these posts and articles that we shared on social media were the most read:

Containers are Not VMs by Mike Coleman
9 Critical Decisions for Running Docker in Production by James Higginbotham
A Comparative Study of Docker Engine on Windows Server vs. Linux Platform by Docker Captain Ajeet Singh Raina
Our white paper: The Definitive Guide To Docker

How to Use Docker
Docker has a wide variety of use cases, and articles and videos about how to use it are really popular. In particular, when we share content from our users and Docker Captains, they get a lot of views:

Getting started with Docker 1.12 and Raspberry Pi by Docker Captain Alex Ellis
Docker: Making our bioinformatics easier and more reproducible by Jeremy Yoder
NGINX as a Reverse Proxy for Docker by Lorenzo Fontana
5 minute guide for getting Docker 1.12.1 running on your Raspberry Pi 3 by Docker Captain Ajeet Singh Raina
The Docker Cheat Sheet
Docker for Developers

Cgroups, namespaces, and beyond

Still hungry for more info? Here’s some more Docker resources:

Check out "Follow all the Captains in one shot with Docker" by Docker Captain Alex Ellis
Docker labs and tutorials on GitHub
Follow us on Twitter or Facebook, or join our LinkedIn group
Join the Docker Community Directory and Slack
And of course, keep following this blog for more exciting info


Source: https://blog.docker.com/feed/
