The first and final words on OpenStack availability zones

Availability zones are one of the most frequently misunderstood and misused constructs in OpenStack. Each cloud operator has a different idea about what they are and how to use them. What's more, each OpenStack service implements availability zones differently, if it even implements them at all.
Often, there isn’t even agreement over the basic meaning or purpose of availability zones.
On the one hand, you have the literal interpretation of the words “availability zone”, which would lead us to think of some logical subdivision of resources into failure domains, allowing cloud applications to intelligently deploy in ways to maximize their availability. (We’ll be running with this definition for the purposes of this article.)
On the other hand, the different ways that projects implement availability zones lend themselves to certain ways of using the feature. In other words, because this feature has been implemented in a flexible manner that does not tie us down to one specific concept of an availability zone, there's a lot of confusion over how to use them.
In this article, we'll look at the traditional definition of availability zones, insights into and best practices for planning and using them, and even a little bit about non-traditional uses. Finally, we hope to address the question: Are availability zones right for you?
OpenStack availability zone implementations
One of the things that complicates the use of availability zones is that each OpenStack project implements them in its own way (if at all). If you do plan to use availability zones, you should evaluate which of the OpenStack projects you plan to use support them, and how that affects your design and deployment of those services.
For the purposes of this article, we will look at three core services with respect to availability zones: Nova, Cinder, and Neutron. We won't go into the steps to set up availability zones; instead, we'll focus on a few of the key decision points, limitations, and trouble areas.
Nova availability zones
Since host aggregates were first introduced in OpenStack Grizzly, I have seen a lot of confusion about availability zones in Nova. Nova tied their availability zone implementation to host aggregates, and because the latter is a feature unique to the Nova project, its implementation of availability zones is also unique.
I have had many people tell me they use availability zones in Nova, convinced they are not using host aggregates. Well, I have news for these people: all* availability zones in Nova are host aggregates (though not all host aggregates are availability zones):
* The exceptions are the default_availability_zone, which compute nodes are placed into when not in another user-defined availability zone, and the internal_service_availability_zone, where other Nova services live.
Some of this confusion may come from the nova CLI. People may do a quick search online, see they can create an availability zone with one command, and may not realize that they’re actually creating a host aggregate. Ex:
$ nova aggregate-create <aggregate name> <AZ name>
$ nova aggregate-create HA1 AZ1
+----+------+-------------------+-------+-------------------------+
| Id | Name | Availability Zone | Hosts | Metadata                |
+----+------+-------------------+-------+-------------------------+
| 4  | HA1  | AZ1               |       | 'availability_zone=AZ1' |
+----+------+-------------------+-------+-------------------------+
I have seen people get confused with the second argument (the AZ name). This is just a shortcut for setting the availability_zone metadata for a new host aggregate you want to create.
This command is equivalent to creating a host aggregate, and then setting the metadata:
$ nova aggregate-create HA1
+----+------+-------------------+-------+----------+
| Id | Name | Availability Zone | Hosts | Metadata |
+----+------+-------------------+-------+----------+
| 7  | HA1  | -                 |       |          |
+----+------+-------------------+-------+----------+
$ nova aggregate-set-metadata HA1 availability_zone=AZ1
Metadata has been successfully updated for aggregate 7.
+----+------+-------------------+-------+-------------------------+
| Id | Name | Availability Zone | Hosts | Metadata                |
+----+------+-------------------+-------+-------------------------+
| 7  | HA1  | AZ1               |       | 'availability_zone=AZ1' |
+----+------+-------------------+-------+-------------------------+
Doing it this way, it's more apparent that the workflow is the same as for any other host aggregate; the only difference is the "magic" metadata key availability_zone, which we set to AZ1 (notice that AZ1 also shows up under the Availability Zone column). And now when we add compute nodes to this aggregate, they will be automatically transferred out of the default_availability_zone and into the one we have defined. For example:
Before:
$ nova availability-zone-list
| nova              | available                              |
| |- node-27        |                                        |
| | |- nova-compute | enabled :-) 2016-11-06T05:13:48.000000 |
+-------------------+----------------------------------------+
After:
$ nova availability-zone-list
| AZ1               | available                              |
| |- node-27        |                                        |
| | |- nova-compute | enabled :-) 2016-11-06T05:13:48.000000 |
+-------------------+----------------------------------------+
Note that there is one behavior that sets availability zone host aggregates apart from others. Namely, Nova does not allow you to assign the same compute host to multiple aggregates with conflicting availability zone assignments. For example, we can first add a compute node to the previously created host aggregate with availability zone AZ1:
$ nova aggregate-add-host HA1 node-27
Host node-27 has been successfully added for aggregate 7
+----+------+-------------------+-----------+-------------------------+
| Id | Name | Availability Zone | Hosts     | Metadata                |
+----+------+-------------------+-----------+-------------------------+
| 7  | HA1  | AZ1               | 'node-27' | 'availability_zone=AZ1' |
+----+------+-------------------+-----------+-------------------------+
Next, we create a new host aggregate for availability zone AZ2:
$ nova aggregate-create HA2

+----+------+-------------------+-------+----------+
| Id | Name | Availability Zone | Hosts | Metadata |
+----+------+-------------------+-------+----------+
| 13 | HA2  | -                 |       |          |
+----+------+-------------------+-------+----------+

$ nova aggregate-set-metadata HA2 availability_zone=AZ2
Metadata has been successfully updated for aggregate 13.
+----+------+-------------------+-------+-------------------------+
| Id | Name | Availability Zone | Hosts | Metadata                |
+----+------+-------------------+-------+-------------------------+
| 13 | HA2  | AZ2               |       | 'availability_zone=AZ2' |
+----+------+-------------------+-------+-------------------------+
Now if we try to add the original compute node to this aggregate, we get an error because this aggregate has a conflicting availability zone:
$ nova aggregate-add-host HA2 node-27
ERROR (Conflict): Cannot add host node-27 in aggregate 13: host exists (HTTP 409)
+----+------+-------------------+-------+-------------------------+
| Id | Name | Availability Zone | Hosts | Metadata                |
+----+------+-------------------+-------+-------------------------+
| 13 | HA2  | AZ2               |       | 'availability_zone=AZ2' |
+----+------+-------------------+-------+-------------------------+
(Incidentally, it is possible to have multiple host aggregates with the same availability_zone metadata, and add the same compute host to both. However, there are few, if any, good reasons for doing this.)
In contrast, Nova allows you to assign this compute node to another host aggregate with other metadata fields, as long as the availability_zone doesn't conflict. You can see this if you first create a third host aggregate:
$ nova aggregate-create HA3
+----+------+-------------------+-------+----------+
| Id | Name | Availability Zone | Hosts | Metadata |
+----+------+-------------------+-------+----------+
| 16 | HA3  | -                 |       |          |
+----+------+-------------------+-------+----------+
Next, tag the host aggregate for some purpose not related to availability zones (for example, an aggregate to track compute nodes with SSDs):
$ nova aggregate-set-metadata HA3 ssd=True
Metadata has been successfully updated for aggregate 16.
+----+------+-------------------+-------+------------+
| Id | Name | Availability Zone | Hosts | Metadata   |
+----+------+-------------------+-------+------------+
| 16 | HA3  | -                 |       | 'ssd=True' |
+----+------+-------------------+-------+------------+
Adding the original node to another aggregate without conflicting availability zone metadata works:
$ nova aggregate-add-host HA3 node-27
Host node-27 has been successfully added for aggregate 16
+----+------+-------------------+-----------+------------+
| Id | Name | Availability Zone | Hosts     | Metadata   |
+----+------+-------------------+-----------+------------+
| 16 | HA3  | -                 | 'node-27' | 'ssd=True' |
+----+------+-------------------+-----------+------------+
(Incidentally, Nova will also happily let you assign the same compute node to another aggregate with ssd=False for metadata, even though that clearly doesn't make sense. Conflicts are only checked/enforced in the case of the availability_zone metadata.)
Nova configuration also holds parameters relevant to availability zone behavior. In the nova.conf read by your nova-api service, you can set a default availability zone for scheduling, which is used if users do not specify an availability zone in the API call:
[DEFAULT]
default_schedule_zone=AZ1
However, most operators leave this at its default setting (None), because it allows users who don’t care about availability zones to omit it from their API call, and the workload will be scheduled to any availability zone where there is available capacity.
If a user requests an invalid or undefined availability zone, the Nova API will report back with an HTTP 400 error. There is no availability zone fallback option.
Cinder
Creating availability zones in Cinder is accomplished by setting the following configuration parameter in cinder.conf, on the nodes where your cinder-volume service runs:
[DEFAULT]
storage_availability_zone=AZ1
Note that you can only set the availability zone to one value. This is consistent with availability zones in other OpenStack projects, which do not allow for overlapping failure domains or multiple failure domain levels or tiers.
The change takes effect when you restart your cinder-volume services. You can confirm your availability zone assignments as follows:
$ cinder service-list
+---------------+---------------+------+---------+-------+
| Binary        | Host          | Zone | Status  | State |
+---------------+---------------+------+---------+-------+
| cinder-volume | hostname1@LVM | AZ1  | enabled | up    |
| cinder-volume | hostname2@LVM | AZ2  | enabled | up    |
+---------------+---------------+------+---------+-------+
If you would like to establish a default availability zone, you can set this parameter in cinder.conf on the cinder-api nodes:
[DEFAULT]
default_availability_zone=AZ1
This tells Cinder which availability zone to use if the API call did not specify one. If you don't set it, Cinder uses a hardcoded default, nova. In the case of our example, where we've set the default availability zone in Nova to AZ1, this would result in a failure. This also means that unlike in Nova, users do not have the flexibility of omitting availability zone information and expecting Cinder to select any available backend with spare capacity in any availability zone.
Therefore, you have a choice with this parameter. You can set it to one of your availability zones so that API calls without availability zone information don't fail, at the risk of uneven storage allocation across your availability zones. Or, you can leave this parameter unset and accept that user API calls that omit availability zone info will fail.
Another option is to set the default to a non-existent availability zone, such as you-must-specify-an-AZ, so that when the call fails due to the non-existent availability zone, this information is included in the error message sent back to the client.
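A minimal sketch of that approach in cinder.conf on the cinder-api nodes (the zone name itself is arbitrary):

[DEFAULT]
default_availability_zone=you-must-specify-an-AZ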
Your storage backends, storage drivers, and storage architecture may also affect how you set up your availability zones. If we are using the reference Cinder LVM iSCSI driver deployed on commodity hardware, and that hardware fits the same availability zone criteria as our compute nodes, then we could set up availability zones to match what we have defined in Nova. We could also do the same if we had a third-party storage appliance in each availability zone, e.g.:
+---------------+-------------------------+------+---------+-------+
| Binary        | Host                    | Zone | Status  | State |
+---------------+-------------------------+------+---------+-------+
| cinder-volume | hostname1@StorageArray1 | AZ1  | enabled | up    |
| cinder-volume | hostname2@StorageArray2 | AZ2  | enabled | up    |
+---------------+-------------------------+------+---------+-------+
(Note that the hostnames (hostname1 and hostname2) are still different in this example. The Cinder multi-backend feature allows us to configure multiple storage backends in the same cinder.conf (for the same cinder-volume service), but Cinder availability zones can only be defined per cinder-volume service, not per backend. In other words, if you define multiple backends in one cinder.conf, they will all inherit the same availability zone, as illustrated in the sketch below.)
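As an illustrative sketch of that limitation (the backend names and driver settings here are hypothetical), a multi-backend cinder.conf like the following would leave both backends in the single availability zone of the service:

[DEFAULT]
storage_availability_zone=AZ1
enabled_backends=lvm1,lvm2

[lvm1]
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name=lvm1

[lvm2]
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name=lvm2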
However, in many cases if you're using a third-party storage appliance, these systems usually have their own built-in redundancy that exists outside of OpenStack notions of availability zones. Similarly, if you use a distributed storage solution like Ceph, availability zones have little or no meaning in this context. In these cases, you can forgo Cinder availability zones.
The one issue in doing this, however, is that any availability zones you defined for Nova won't match. This can cause problems when Nova makes API calls to Cinder, for example, when performing a boot-from-volume API call through Nova. If Nova decides to provision your VM in AZ1, it will tell Cinder to provision a boot volume in AZ1, but Cinder doesn't know anything about AZ1, so the API call will fail. To prevent this from happening, we need to set the following parameter in cinder.conf on the nodes running cinder-api:
[DEFAULT]
allow_availability_zone_fallback=True
This parameter prevents the API call from failing; if the requested availability zone does not exist, Cinder falls back to another availability zone (whichever you defined in the default_availability_zone parameter, or in storage_availability_zone if the default is not set). Since the hardcoded default for storage_availability_zone is nova, the fallback availability zone should match the default availability zone for your cinder-volume services, and everything should work.
The easiest way to solve the problem, however, is to remove the AvailabilityZoneFilter from your filter list in cinder.conf on the nodes running cinder-scheduler. This makes the scheduler ignore any availability zone information passed to it altogether, which may also be helpful in case of any availability zone configuration mismatch.
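As a sketch, the scheduler filter list on the cinder-scheduler nodes might then look like the following, with AvailabilityZoneFilter omitted (the exact set of remaining filters is up to you and may vary by release):

[DEFAULT]
scheduler_default_filters=CapacityFilter,CapabilitiesFilter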
Neutron
Availability zone support was added to Neutron in the Mitaka release. Availability zones can be set for DHCP and L3 agents in their respective configuration files:
[AGENT]
availability_zone = AZ1
Restart the agents, and confirm availability zone settings as follows:
$ neutron agent-show <agent-id>
+-------------------+-------+
| Field             | Value |
+-------------------+-------+
| availability_zone | AZ1   |
+-------------------+-------+

If you would like to establish a default availability zone, you can set this parameter in neutron.conf on the neutron-server nodes:
[DEFAULT]
default_availability_zones=AZ1,AZ2
This parameter tells Neutron which availability zones to use if the API call did not specify any. Unlike Cinder, you can specify multiple availability zones, and leaving it undefined places no constraints on scheduling, as there are no hardcoded defaults. If you have users making API calls that do not care about the availability zone, you can enumerate all your availability zones for this parameter, or simply leave it undefined; both yield the same result.
Additionally, when users do specify an availability zone, such requests are fulfilled on a "best effort" basis in Neutron. In other words, there is no need for an availability zone fallback parameter, because your API call will still execute even if your availability zone hint can't be satisfied.
Another important distinction that sets Neutron apart from Nova and Cinder is that it implements availability zones as scheduler hints, meaning that on the client side you can repeat this option to chain together multiple availability zone specifications in the event that more than one availability zone would satisfy your availability criteria. For example:
$ neutron net-create --availability-zone-hint AZ1 --availability-zone-hint AZ2 new_network
As with Cinder, the Neutron plugins and backends you’re using deserve attention, as the support or need for availability zones may be different depending on their implementation. For example, if you’re using a reference Neutron deployment with the ML2 plugin and with DHCP and L3 agents deployed to commodity hardware, you can likely place these agents consistently according to the same availability zone criteria used for your computes.
In contrast, other alternatives such as the Contrail plugin for Neutron do not support availability zones. And if you are using Neutron DVR, for example, availability zones have limited significance for Layer 3 Neutron.
OpenStack Project availability zone Comparison Summary
Before we move on, it's helpful to review how each project handles availability zones.

Default availability zone scheduling
- Nova: Can set to one availability zone or None
- Cinder: Can set one availability zone; cannot set None
- Neutron: Can set to any list of availability zones or none

Availability zone fallback
- Nova: None supported
- Cinder: Supported through configuration
- Neutron: N/A; scheduling to availability zones is done on a best-effort basis

Availability zone definition restrictions
- Nova: No more than 1 availability zone per nova-compute
- Cinder: No more than 1 availability zone per cinder-volume
- Neutron: No more than 1 availability zone per neutron agent

Availability zone client restrictions
- Nova: Can specify one availability zone or none
- Cinder: Can specify one availability zone or none
- Neutron: Can specify an arbitrary number of availability zones

Availability zones typically used when you have…
- Nova: Commodity HW for computes, libvirt driver
- Cinder: Commodity HW for storage, LVM iSCSI driver
- Neutron: Commodity HW for neutron agents, ML2 plugin

Availability zones not typically used when you have…
- Nova: Third-party hypervisor drivers that manage their own HA for VMs (DRS for vCenter)
- Cinder: Third-party drivers, backends, etc. that manage their own HA
- Neutron: Third-party plugins, backends, etc. that manage their own HA

Best Practices for availability zones
Now let's talk about how to best make use of availability zones.
What should my availability zones represent?
The first thing you should do as a cloud operator is to nail down your own accepted definition of availability zones and how you will use them, and remain consistent. You don't want to end up in a situation where availability zones take on more than one meaning in the same cloud, as in this example:
Fred's AZ           | Example of an AZ used to perform tenant workload isolation
VMWare cluster 1 AZ | Example of an AZ used to select a specific hypervisor type
Power source 1 AZ   | Example of an AZ used to select a specific failure domain
Rack 1 AZ           | Example of an AZ used to select a specific failure domain
Such a set of definitions would be a source of inconsistency and confusion in your cloud. It's usually better to keep things simple, with one availability zone definition, and use OpenStack features such as Nova flavors or Nova/Cinder boot hints to achieve other requirements, such as multi-tenancy isolation or the ability to select between different hypervisor options.
Note that OpenStack currently does not support the concept of multiple failure domain levels/tiers. Even though we may have multiple ways to define failure domains (e.g., by power circuit, rack, room, etc), we must pick a single convention.
For the purposes of this article, we will discuss availability zones in the context of failure domains. However, we will cover one other use for availability zones in the third section.
How many availability zones do I need?
One question that customers frequently get hung up on is how many availability zones they should create. This can be tricky because the setup and management of availability zones involves stakeholders at every layer of the solution stack, from tenant applications to cloud operators, down to data center design and planning.

A good place to start is your cloud application requirements: how many failure domains are they designed to work with (i.e., what is their redundancy factor)? The likely answer is two (primary plus backup), three (for example, for a database or other quorum-based system), or one (for example, a legacy app with single points of failure). Therefore, the vast majority of clouds will have 1, 2, or 3 availability zones.
Also keep in mind that as a general design principle, you want to minimize the number of availability zones in your environment, because the side effect of availability zone proliferation is that you are dividing your capacity into more resource islands. The resource utilization in each island may not be equal, and now you have an operational burden to track and maintain capacity in each island/availability zone. Also, if you have a lot of availability zones (more than the redundancy factor of tenant applications), tenants are left to guess which availability zones to use and which have available capacity.
How do I organize my availability zones?
The value proposition of availability zones is that tenants are able to achieve a higher level of availability in their applications. In order to make good on that proposition, we need to design our availability zones in ways that mitigate single points of failure.             
For example, if our resources are split between two power sources in the data center, then we may decide to define two resource pools (availability zones) according to their connected power source.
Or, if we only have one TOR switch in our racks, then we may decide to define availability zones by rack. However, here we can run into problems if we make each rack its own availability zone, as this will not scale from a capacity management perspective beyond 2-3 racks/availability zones (because of the "resource island" problem). In this case, you might consider dividing your total rack count into 2 or 3 logical groupings that correlate to your defined availability zones.
We may also find situations where we have redundant TOR switch pairs in our racks and power source diversity to each rack, leaving no single point of failure. You could still place racks into availability zones as in the previous example, but the value of availability zones is marginalized, since you need a double failure (e.g., both TORs in the rack failing) to take down a rack.
Ultimately, with any of the above scenarios, the value added by your availability zones will depend on the likelihood and nature of failure modes, both the ones you have designed for and the ones you have not. But getting accurate and complete information on such failure modes may not be easy, so the value of availability zones for this kind of unplanned failure can be difficult to pin down.
There is, however, another area where availability zones have the potential to provide value: planned maintenance. Have you ever needed to move, recable, upgrade, or rebuild a rack? Ever needed to shut off power to part of the data center to do electrical work? Ever needed to apply disruptive updates to your hypervisors, like kernel or QEMU security updates? How about upgrades of OpenStack or the hypervisor operating system?
Chances are good that these kinds of planned maintenance are a higher source of downtime than unplanned hardware failures that happen out of the blue. Therefore, this type of planned maintenance can also impact how you define your availability zones. In general the grouping of racks into availability zones (described previously) still works well, and is the most common availability zone paradigm we see for our customers.
However, it could affect the way in which you group your racks into availability zones. For example, you may choose to create availability zones based on physical parameters like which floor, room, or building the equipment is located in, or other practical considerations that would help in the event you need to vacate or rebuild certain space in your DC(s).
One of the limitations of the OpenStack implementation of availability zones made apparent in this example is that you are forced to choose one definition. Applications can request a specific availability zone, but are not offered the flexibility of requesting building level isolation, vs floor, room, or rack level isolation. This will be a fixed, inherited property of the availability zones you create. If you need more, then you start to enter the realm of other OpenStack abstractions like Regions and Cells.
Other uses for availability zones?
Another way in which people have found to use availability zones is in multi-hypervisor environments. In the ideal world of the "implementation-agnostic" cloud, we would abstract such underlying details from users of our platform. However, there are some key differences between hypervisors that make this aspiration difficult. Take the example of KVM and VMWare.
When an iSCSI target is provisioned with the LVM iSCSI Cinder driver, it cannot be attached to or consumed by ESXi nodes. The provisioning request must go through VMWare's VMDK Cinder driver, which proxies the creation and attachment requests to vCenter. This incompatibility can cause errors, and thus a negative user experience, if the user tries to attach a volume provisioned from the wrong backend to their hypervisor.
To solve this problem, some operators use availability zones as a way for users to select hypervisor types (for example, AZ_KVM1, AZ_VMWARE1), and set the following configuration in their nova.conf:
[cinder]
cross_az_attach = False
This presents users with an error if they attempt to attach a volume from one availability zone (e.g., AZ_VMWARE1) to an instance in another availability zone (e.g., AZ_KVM1). The call would certainly have failed regardless, but with a different error coming from farther downstream, from one of the nova-compute agents, instead of from nova-api. This way, it's easier for the user to see where they went wrong and correct the problem.
In this case, the availability zone also acts as a tag to remind users which hypervisor their VM resides on.
In my opinion, this is stretching the definition of availability zones as failure domains. VMWare may be considered its own failure domain separate from KVM, but that’s not the primary purpose of creating availability zones this way. The primary purpose is to differentiate hypervisor types. Different hypervisor types have a different set of features and capabilities. If we think about the problem in these terms, there are a number of other solutions that come to mind:

Nova Flavors: Define a "VMWare" set of flavors that map to your vCenter cluster(s). If the tenants that use VMWare are different from the tenants that use KVM, you can make them private flavors, ensuring that tenants only ever see or interact with the hypervisor type they expect.
Cinder: Similarly for Cinder, you can make the VMWare backend private to specific tenants, and/or set quotas differently for that backend, to ensure that tenants will only allocate from the correct storage pools for their hypervisor type.
Image metadata: You can tag your images according to the hypervisor they run on. Set the image property hypervisor_type to vmware for VMDK images, and to qemu for other images; the ImagePropertiesFilter in Nova will honor the hypervisor placement request (see the sketch below).
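As a sketch of the image metadata approach, assuming the glance CLI and a hypothetical image named winserver2012, the property could be set like this:

$ glance image-update winserver2012 --property hypervisor_type=vmware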

So… are availability zones right for me?
Wrapping up, think of availability zones in terms of:

Unplanned failures: If you have a good history of failure data, well understood failure modes, or some known single point of failure that availability zones can help mitigate, then availability zones may be a good fit for your environment.
Planned maintenance: If you have well-understood maintenance processes that have awareness of your availability zone definitions and can take advantage of them, then availability zones may be a good fit for your environment. This is where availability zones can provide some of the greatest value, but it is also the most difficult to achieve, as it requires intelligent availability-zone-aware rolling updates and upgrades, and affects how data center personnel perform maintenance activities.
Tenant application design/support: If your tenants are running legacy apps, apps with single points of failure, or do not support use of availability zones in their provisioning process, then availability zones will be of no use for these workloads.
Other alternatives for achieving app availability: Workloads built for geo-redundancy can achieve the same level of HA (or better) in a multi-region cloud. If this were the case for your cloud and your cloud workloads, availability zones would be unnecessary.
OpenStack projects: You should factor into your decision the limitations and differences in availability zone implementations between different OpenStack projects, and perform this analysis for all OpenStack projects in your scope of deployment.

Source: Mirantis

Docker Online Meetup recap: Introducing Docker 1.13

Last week, we released Docker 1.13 to introduce several new enhancements, in addition to building on and improving Docker swarm mode, introduced in Docker 1.12. Docker 1.13 has many new features and fixes that we are excited about, so we asked core team member and release captain Victor Vieux to introduce Docker 1.13 in an online meetup.
The meetup took place on Wednesday, Jan 25 and over 1000 people RSVPed to hear Victor’s presentation live. Victor gave an overview and demo of many of the new features:

Restructuring of CLI commands
Experimental build
CLI backward compatibility
Swarm default encryption at rest
Compose to Swarm
Data management commands
Brand new “init system”
Various orchestration enhancements

In case you missed it, you can watch the recording and access Victor’s slides below.

 
Below is a short list of the questions asked to Victor at the end of the Online meetup:
Q: What will happen if we call docker stack deploy multiple times with the same file?
A: All the services that were modified in the compose file will be updated according to their respective update policy. It won't recreate a new stack; it will update the current one. This is the same mechanism used in the docker-compose Python tool.
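For instance, re-running the deploy with the same file updates the stack in place rather than creating a new one (the stack and file names here are illustrative):

$ docker stack deploy --compose-file docker-compose.yml mystack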
Q: In "docker system df", what exactly constitutes an "active" image?
A: It means it's associated with a container. If you have (even stopped) container(s) using the `redis` image, then the `redis` image is "active".
Q: criu integration is available with `--experimental` then?
A: Yes! It's one of the many features I didn't cover in the slides, as there are so many new ones in Docker 1.13. There is no need to download a separate build anymore; it's just a flag away.
Q: When will we know when certain features are out of the experimental state and part of the foundation of this product?
A: Usually experimental features tend to remain in an experimental state for only one release. Larger or more complex features and capabilities may require two releases to gather feedback and make incremental improvements.
Q: Can I configure docker with multiple registries (some private and some public)?
A: It's not really necessary to configure docker, as the "configuration" happens in the image name:
docker pull my-private-registry.com:9876/my-image and docker pull my-public-registry.com:5432/my-image


Source: https://blog.docker.com/feed/

IBM cloud revenue up 35 percent in 2016

For anyone with any lingering doubts about the rapid growth of cloud computing, look no further than IBM fiscal year 2016 cloud revenue for proof.
Total cloud revenue hit $13.7 billion on the year, which represented a 35 percent increase. The cloud as-a-service annual revenue run rate trended upward to $8.6 billion in 2016, a staggering 63 percent increase.
IBM Chairman, President and Chief Executive Officer Ginni Rometty offered this explanation: "More and more clients are choosing the IBM Cloud because of its differentiated capabilities, which are helping to transform industries, such as financial services, airlines and retail."
IBM Cloud clients include American Airlines, Finnair, Halliburton, Kimberly Clark, MUFG, Sky, the US Army and WhatsApp.
For more about 2016 IBM earnings in cloud and other areas, check out the infographic below and this press release.

Source: Thoughts on Cloud

US Army taps IBM for $62 million private cloud data center

US Army private cloud data center
For the United States Army, security is a top concern. It's a big reason why it chose IBM to build out and manage a private cloud data center at its Redstone Arsenal near Huntsville, Alabama.
The contract for the data center would be worth $62 million over five years, if the Army exercises all its options. Along with the data center, the agreement also gives IBM the go-ahead to provide infrastructure-as-a-service (IaaS) solutions to migrate applications to the cloud. The goal is to move 35 apps within the first year.
The agreement requires Defense Information Systems Agency (DISA) Impact Level 5 (IL-5) authorization, which IBM announced it had received in February 2016. The authorization gives IBM the ability to "manage controlled, unclassified information." As of this week, IBM is the only company authorized by DISA at IL-5 to run infrastructure-as-a-service solutions on government property. The Army expects to move IBM to IL-6 authorization, which would permit the company to work with classified information up to "secret," within a year.
Lt. Gen. Robert Ferrell, US Army CIO, said, "Cloud is a game-changing architecture that provides improved performance with high efficiency, all in a secure environment."
Last year, the Army partnered with IBM on a hybrid cloud solution for its Logistics Support Activity.
For more about this new private cloud data center, read the full article at TechRepublic.
Source: Thoughts on Cloud

A dash of Salt(Stack): Using Salt for better OpenStack, Kubernetes, and Cloud — Q&A

On January 16, Ales Komarek presented an introduction to Salt. We covered the following topics:

The model-driven architectures behind how Salt stores topologies and workflows

How Salt provides solution adaptability for any custom workloads

Infrastructure as Code: How Salt provides not only configuration management, but entire life-cycle management

How Continuous Delivery/ Integration/ Management fits into the puzzle

How Salt manages and scales parallel cloud deployments that include OpenStack, Kubernetes and others

What we didn't do, however, is get to all of the questions from the audience, so here's a written version of the Q&A, including those we didn't have time for.
Q: Why Salt?
A: It's Python, it has a huge and growing base of imperative modules and declarative states, and it has a good message bus.
Q: What tools are used to initially provision Salt across an infrastructure? Cobbler, Puppet, MAAS?
A: To create a new deployment, we rely on a single node, where we bootstrap the Salt master and Metal-as-a-Service (formerly based on Foreman, now Ironic). Then we control the MaaS service to deploy the physical bare-metal nodes.
Q: How broad a range of services do you already have recipes for, and how easy is it to write and drop in new ones if you need one that isn't already available?
A: The ecosystem is pretty vast. You can look at either https://github.com/tcpcloud or the formula ecosystem overview at http://openstack-salt.tcpcloud.eu/develop/extending-ecosystem.html. There are also guidelines for creating new formulas, which is a very straightforward process. A new service can be created in a matter of hours, or even minutes.
Q: Can you convert your existing Puppet/Ansible scripts to Salt, and what would I search to find information about that?
A: Yes, we have reverse-engineered automation for some of these services in the past. For example, we were deeply inspired by the Ansible module for Gerrit resource management. You can find some information on creating Salt formulas at https://docs.saltstack.com/en/latest/topics/development/conventions/formulas.html, and we will be adding tutorial material here on this blog in the near future.
Q: Is there a NodeJS binding available?
A: If you meant the NodeJS formula to set up a NodeJS environment, yes, there is such a formula. If you mean bindings to the system, you can use the Salt API to integrate NodeJS with Salt.
Q: Have you ever faced performance issues when storing a lot of data in pillars?
A: We have not faced performance issues with pillars that are delivered by the reclass ENC. It has been tested up to a few thousand nodes.
Q: What front-end GUI is typically used with Salt monitoring (e.g., Kibana, Grafana)?
A: Salt monitoring uses Sensu or StackLight for the actual functional monitoring checks. It uses Kibana to display events stored in Elasticsearch and Grafana to visualize metrics coming from time-series databases such as Graphite or Influx.
Q: What is the name of the salt PKI manager? (Or what would I search for to learn more about using salt for infrastructure-wide PKI management?)
A: The PKI feature is well documented in the Salt docs at https://docs.saltstack.com/en/latest/ref/states/all/salt.states.x509.html.
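As a minimal sketch of what that looks like in practice (the paths and parameter values here are illustrative, not from the webinar), a state using the x509 module to manage a private key and a self-signed certificate might be:

/etc/pki/www.key:
  x509.private_key_managed:
    - bits: 4096

/etc/pki/www.crt:
  x509.certificate_managed:
    - signing_private_key: /etc/pki/www.key
    - CN: www.example.com
    - days_valid: 90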
Q: Can I practice installing and deploying SaltStack on my laptop? Can you recommend a link?
A: I'd recommend you have a look at http://openstack-salt.tcpcloud.eu/develop/quickstart-vagrant.html where you can find a nice tutorial on how to set up a simple infrastructure.
Q: Thanks for the presentation! Within Heat, I've only ever seen Salt used in terms of software deployments. What we've seen today, however, goes clear through to service, resource, and even infrastructure deployment! In this way, does Salt become a viable alternative to Heat? (I'm trying to understand where the demarcation is between the two now.)
A: Think of Heat as the part of the solution responsible for spinning up hardware resources such as networks, routers and servers, in a way that is similar to MaaS, Ironic or Foreman. Salt's part begins where Heat's ends: after the resources are started, Salt takes over and finishes the installation/configuration process.
Q: When you mention Orchestration, how does salt differentiate from Heat, or is Salt making Heat calls?
A: Heat is more for hardware resource orchestration. It has some capability to do software configuration, but it is rather limited. We have created Heat resources that help to classify resources on the fly. We also have Salt Heat modules capable of running a Heat stack.
Q: Will you be showing any parts of SaltStack Enterprise, or only FREE Salt Open Source? Do you use Salt in Multi-Master deployment?
A: We are using the open source version of SaltStack; the enterprise version provides little gain given the pricing model. In some deployments, we use Salt master HA setups.
Q: What HA engine is typically used for the Salt master?
A: We use 2 separate masters with shared storage provided by GlusterFS, on which the masters' and minions' keys are stored.
Q: Is there a GUI?
A: The creation of a GUI is currently under discussion.
Q: How do you enforce Role Based Administration in the Salt Master? Can you segregate users to specific job roles and limit which jobs they can execute in Salt?
A: We use the ACLs of the Salt master to limit the user's options. This also applies to the Jenkins-powered pipelines, which we also manage with Salt, on both the job and the user side.
Q: Can you show the salt files (.sls, pillar, etc.)?
A: You can look at the GitHub repos for existing formulas at https://github.com/tcpcloud, and a good example of pillars can be found at https://github.com/Mirantis/mk-lab-salt-model/.
Q: Is there a link for deploying Salt for Kubernetes? Any best practices guide?
A: The best place to look is the https://github.com/openstack/salt-formula-kubernetes README.
Q: Is SaltStack the same as what's on saltstack.com, or is it a different project?
A: These are the same project. Saltstack.com is the company that is behind the Salt technology and provides support and enterprise versions.
Q: So far this looks like what Chef can do. Can you make a comparison or focus on the "value add" from Salt that Chef or Puppet don't give you?
A: The replaceability/reusability of the individual components is very easy, as all formulas are 'aware' of the rest and share a common form and a single dependency tree. This is a problem with community-based formulas in either of the other tools, as they are not very compatible with each other.
Q: In terms of purpose, is there any difference between SaltStack vs Openstack?
A: Apart from the fact that SaltStack can install OpenStack, it can also provide virtualization capabilities. However, Salt has very limited options, while OpenStack supports complex production level scenarios.
Q: Great webinar guys. Ansible seems to have a lot of traction as means of deploying OpenStack. Could you compare/contrast with SaltStack in this context?
A: With Salt, the OpenStack services are just part of a wider ecosystem; the main advantage comes from the consistency across all services/formulas and from the support metadata that provides documentation or monitoring features.
Q: How is Salt better than Ansible/Puppet/Chef ?
A: The biggest difference is the message bus, which lets you control, and get data from, the infrastructure with great speed and concurrency.
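For example, a single command addressed to the bus can run on all minions, or a grain-filtered subset of them, in parallel (the targeting expressions here are illustrative):

salt '*' test.ping
salt -G 'os:CentOS' cmd.run 'uptime'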
Q: Can you elaborate on Mirantis Fuel vs. SaltStack?
A: Fuel is an open source project that was (and is) designed to deploy OpenStack from a single ISO-based artifact, and to provide various lifecycle management functions once the cluster has been deployed. SaltStack is designed to be more granular, working with individual components or services.
Q: Are there plans to integrate SaltStack into MOS?
A: The Mirantis Cloud Platform (MCP) will be powered by Salt/Reclass.
Q: Is Fuel obsolete or it will use Salt in the background instead of Puppet?
A: Fuel in its current form will continue to be used for deploying Mirantis OpenStack in the traditional manner (as a single ISO file). We are extending our portfolio of lifecycle management tools to include appropriate technologies for deploying and managing open source software in MCP. For example, Fuel CCP will be used to deploy containerized OpenStack on Kubernetes. Similarly, Decapod will be used to deploy Ceph. All of these lifecycle management technologies are, in a sense, Fuel. Whether a particular tool uses Salt or Puppet will depend on what it's doing.
Q: MOS 10 release date?
A: We're still making plans on this.
Thanks for joining us, or if you missed it, please go ahead and view the webinar.
Source: Mirantis

9 tips to properly configure your OpenStack Instance

In OpenStack jargon, an Instance is a Virtual Machine, the guest workload. It boots from an operating system image, and it is configured with a certain amount of CPU, RAM and disk space, amongst other parameters such as networking or security settings.
In this blog post, kindly contributed by Marko Myllynen, we'll explore nine configuration and optimization options that will help you achieve the performance, reliability, and security that you need for your workloads.
Some of the optimizations can be done inside a guest regardless of what the OpenStack Cloud Administrator has enabled in your cloud. However, more advanced options require prior enablement and, possibly, special host capabilities. This means many of the options described here will depend on how the Administrator configured the cloud, or may not be available for some tenants as they are reserved for certain groups. More information about this subject can be found on the Red Hat Documentation Portal and its comprehensive guide on the OpenStack Image Service. Similarly, the upstream OpenStack documentation has some extra guidelines available.
The following configurations should be evaluated for any VM running on any OpenStack environment. These changes have no side effects and are typically safe to enable even if unused.

1) Image Format: QCOW or RAW?
OpenStack storage configuration is an implementation choice by the Cloud Administrator, often not fully visible to the tenant. Storage configuration may also change over time without explicit notification by the Administrator, as he or she adds capacity with different specs.
When creating a new instance on OpenStack, it is based on a Glance image. The two most prevalent and recommended image formats are QCOW2 and RAW. QCOW2 images (from QEMU Copy On Write) are typically smaller in size. For instance, for a server with a 100 GB disk, an image that would be 100 GB in RAW format might be only 10 GB in QCOW2. Regardless of the format, it is a good idea to process images before uploading them to Glance with virt-sysprep(1) and virt-sparsify(1).
The performance of QCOW2 depends on both the hypervisor kernel and the format version, the latest being QCOW2v3 (sometimes referred to as QCOW3) which has better performance than the earlier QCOW2, almost as good as RAW format. In general we assume RAW has better overall performance despite the operational drawbacks (like the lack of snapshots) or the increase in time it takes to upload or boot (due to its bigger size). Our latest versions of Red Hat OpenStack Platform automatically use the newer QCOW2v3 format (thanks to the recent RHEL versions) and it is possible to check and also convert between RAW and older/newer QCOW2 images with qemu-img(1).
OpenStack instances can boot either from a local image or from a remote volume. That means:

Image-backed instances benefit significantly by the performance difference between older QCOW2 vs QCOW2v3 vs RAW.
Volume-backed instances can be created from either QCOW2 or RAW Glance images. However, as Cinder backends are vendor-specific (Ceph, 3PAR, EMC, etc.), they may not use QCOW2 or RAW; they may have their own mechanisms, like dedup, thin provisioning, or copy-on-write.

As a general rule of thumb, rarely used images should be stored in Glance as QCOW2, but for an image which is used constantly to create new instances (locally stored), or for any volume-backed instances, RAW should provide better performance despite the sometimes longer initial boot time (except on Ceph-backed systems, thanks to Ceph's copy-on-write approach). In the end, any actual recommendation will depend on the OpenStack storage configuration chosen by the Cloud Administrator.
2) Performance Tweaks via Image Extra Properties
Since the Mitaka version, OpenStack allows Nova to automatically optimize certain libvirt and KVM properties on the Compute host to better execute a particular OS in the guest. To provide the guest OS information to Nova, just define the following Glance image properties:

os_type=linux # Generic name, like linux or windows
os_distro=rhel7.1 # Use osinfo-query os to list supported variants

Additionally, at least for the time being (see BZ#), in order to make sure the newer and more scalable virtio-scsi para-virtualized SCSI controller is used instead of the older virtio-blk, the following properties need to be set explicitly:

hw_scsi_model=virtio-scsi
hw_disk_bus=scsi

All the supported image properties are listed at the Red Hat Documentation portal, as well as other CLI options.
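As a sketch, assuming the glance CLI and a hypothetical image named rhel7, the properties above could be set as follows:

$ glance image-update rhel7 --property os_type=linux --property os_distro=rhel7.1 --property hw_scsi_model=virtio-scsi --property hw_disk_bus=scsi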
3) Prepare for Cloud-init
“Cloud-init” is a package used for early initialization of cloud instances, to configure basics like partition / filesystem size and SSH keys.
Ensure that you have installed the cloud-init and cloud-utils-growpart packages in your Glance image, and that the related services will be executed on boot, to allow the execution of “cloud-init” configurations to the OpenStack VM.
In many cases the default configuration is acceptable, but there are lots of customization options available; for details, please refer to the cloud-init documentation.
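As an illustrative sketch of such customization (the values here are hypothetical), user data passed to the instance at boot, e.g. via nova boot --user-data, could create a user and install a package on first boot:

#cloud-config
users:
  - name: admin
    ssh-authorized-keys:
      - ssh-rsa AAAA... admin@example.com
packages:
  - vim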
4) Enable the QEMU Guest Agent
On Linux hosts, it is recommended to install and enable the QEMU guest agent which allows graceful guest shutdown and (in the future) automatic freezing of guest filesystems when snapshots are requested, which is a necessary operation for consistent backups (see BZ#):

yum install qemu-guest-agent
systemctl enable qemu-guest-agent

In order to provide the needed virtual devices and use the filesystem freezing functionality when needed, the following properties need to be defined for Glance images (see also BZ#):

hw_qemu_guest_agent=yes # Create the needed device to allow the guest agent to run
os_require_quiesce=yes # Accept requests to freeze/thaw filesystems

5) Just in case: how to recover from guest failure

Comprehensive instance fault recovery, high availability, and service monitoring require a layered approach that, as a whole, is out of scope for this document. In the paragraphs below, we show the options that are applicable purely inside a guest (which can be thought of as the innermost layer). The most frequently used fault recovery mechanisms for an instance are:

recovery from kernel crashes
recovery from guest hangs (which do not necessarily involve kernel crash/panic)

In the rare case the guest kernel crashes, kexec/kdump will capture a kernel vmcore for further analysis and reboot the guest. In case the vmcore is not wanted, the kernel can be instructed to reboot after a kernel crash by setting the panic kernel parameter, for example "panic=1".
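On RHEL-based guests, one way to set this parameter persistently is with grubby (a sketch; the same effect can also be achieved at runtime via the kernel.panic sysctl):

grubby --update-kernel=ALL --args="panic=1"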
In order to reboot an instance after other unexpected behavior, for example high load over a certain threshold or a complete system lockup without a kernel panic, the watchdog service can be utilized. Actions other than "reboot" can be found here. The following property needs to be defined for Glance images or Nova flavors:

hw_watchdog_action=reset

Then install the watchdog package inside the guest, configure the watchdog device, and enable the service:

yum install watchdog
vi /etc/watchdog.conf
systemctl enable watchdog

By default watchdog detects kernel crashes and complete system lockups. See the watchdog.conf(5) man page for more information, e.g., how to add guest health-monitoring scripts as part of watchdog functionality checks.
6) Tune the Kernel
The simplest way to tune a Linux node is to use the "tuned" facility. It's a service which configures dozens of system parameters according to the selected profile, which in the OpenStack case is "virtual-guest". For NFV workloads, Red Hat provides a set of NFV tuned profiles to simplify the tuning of network-intensive VMs.
In your Glance image, it is recommended to install the required package, enable the service on boot, and activate the preferred profile. You can do it by editing the image before uploading to Glance, or as part of your cloud-init recipe:

yum install tuned
systemctl enable tuned
tuned-adm profile virtual-guest

7) Improve networking via VirtIO Multiqueuing
Guest kernel virtio drivers are part of the standard RHEL/Linux kernel package and enabled automatically without any further configuration as needed. Windows guests should also use the official virtio drivers for their particular Windows version, greatly improving network and disk IO performance.
However, recent advances in network packet processing, in both the Linux kernel and user-space components, have created a myriad of extra options to tune or bypass the virtio drivers. Below you'll find an illustration of the virtio device model (from the RHEL Virtualization guide).
Network multiqueuing, or virtio-net multi-queue, is an approach that enables parallel packet processing to scale linearly with the number of available vCPUs of a guest, often providing notable improvement to transfer speeds especially with vhost-user.
Provided that the OpenStack Admin has provisioned the virtualization hosts with supporting components installed (at least OVS 2.5 / DPDK 2.2), this functionality can be enabled by OpenStack Tenant with the following property in those Glance images where we want network multiqueuing:

hw_vif_multiqueue_enabled=true

Inside a guest instantiated from such an image, the NIC channel setup can be checked and changed as needed with the commands below:

ethtool -l eth0 # to see the current number of queues
ethtool -L eth0 combined <nr-of-queues> # to set the number of queues. Should match the number of vCPUs

There is an open RFE to implement multi-queue activation by default in the kernel, see BZ#.

8) Other Miscellaneous Tuning for Guests

It should go without saying that right-sized instances should contain only the minimum amount of installed packages and run only the services needed. Of particular note, it is probably a good idea to install and enable the irqbalance service: although not absolutely necessary in all scenarios, its overhead is minimal and it should be used, for example, in SR-IOV setups (this way the same image can be used regardless of such lower-level details).
Even though it is implicitly set on KVM, it is a good idea to explicitly add the kernel parameter no_timer_check to prevent issues with timing devices. Enabling a persistent DHCP client and disabling the zeroconf route in the network configuration, with PERSISTENT_DHCLIENT=yes and NOZEROCONF=yes respectively, helps to avoid networking corner-case issues.
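As a sketch of where these settings live inside the guest image (the interface name is illustrative):

# /etc/sysconfig/network-scripts/ifcfg-eth0
PERSISTENT_DHCLIENT=yes

# /etc/sysconfig/network (NOZEROCONF is typically set globally)
NOZEROCONF=yes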
Guest MTU settings are usually adjusted correctly by default, but having a proper MTU in use on all levels of the stack is crucial to achieve maximum network performance. In environments with 10G (and faster) NICs this typically means the use of Jumbo Frames with MTU up to 9000, taking possible VXLAN encapsulation into account. For further MTU discussion, see the upstream guidelines for MTU or the Red Hat OpenStack Networking Guide.
9) Improving the way you access your instances
Although some purists may consider running SSH inside truly cloud-native instances an anti-pattern, especially in auto-scaling production workloads, most of us will still rely on good old SSH to perform configuration tasks (via Ansible, for instance) as well as maintenance and troubleshooting (e.g., to fetch logs after a software failure).
The SSH daemon should avoid DNS lookups to speed up establishing SSH connections. For this, consider using UseDNS no in /etc/ssh/sshd_config and adding OPTIONS=-u0 to /etc/sysconfig/sshd (see sshd_config(5) for details on these). Setting GSSAPIAuthentication no could be considered if Kerberos is not in use. In case instances frequently connect to each other, the ControlPersist / ControlMaster options might be considered as well.
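A sketch of the changes described above:

# /etc/ssh/sshd_config
UseDNS no
GSSAPIAuthentication no

# /etc/sysconfig/sshd
OPTIONS=-u0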
Typically, remote SSH access and console access via Horizon are enough for most use cases. During the development phase, direct console access from the Nova compute host may also be helpful. For this to work, enable the serial-getty@ttyS1.service, allow root access via ttyS1 if needed by adding ttyS1 to /etc/securetty, and then access the guest console from the Nova compute host with virsh console <instance-id> --devname serial1.

We hope this blog post has helped you discover new ways to improve the performance of your OpenStack instances. If you need more information, remember we have tons of documents in our OpenStack Documentation Portal, and we offer the best OpenStack courses in the industry, starting with the free-of-charge CL010 Introduction to OpenStack course.
Source: RedHat Stack

OpenStack Developer Mailing List Digest December 31 – January 6

SuccessBot Says

Dims: Keystone now has a Devstack-based functional test with everything running under Python 3.5.
Tell us yours via OpenStack IRC channels with the message "#success <message>".
All

Time To Retire Nova-docker

nova-docker has lagged behind the last 6 months of nova development.
No longer passes simple CI unit tests.

There are patches to at least get the unit tests working [1].

If the core team no longer has time for it, perhaps we should just archive it.
People ask about it on openstack-nova about once or twice a year, but it’s not recommended as it’s not maintained.
It’s believed some people are running and hacking on it outside of the community.
The Zun project provides a lifecycle management interface for containers that are started in container orchestration engines provided with Magnum.
The nova-lxc driver provides the ability to treat containers like virtual machines [2].

It is not recommended for production use, though it is still better maintained than nova-docker [3].

Nova-lxd also provides the ability to treat containers like virtual machines.
Virtuozzo, which is supported in Nova via libvirt, provides both virtual machines and OS containers similar to LXC.

These containers have been in production for more than 10 years already.
Well maintained and actually has CI testing.

A proposal to remove it [4].
Full thread

Community Goals For Pike

A few months ago the community started identifying work for OpenStack-wide goals to “achieve visible common changes, push for basic levels of consistency and user experience, and efficiently improve certain areas where technical debt payments have become too high, across all OpenStack projects.”
First goal defined [5]: remove copies of incubated Oslo code.
Moving forward in Pike:

Collect feedback of our first iteration. What went well and what was challenging?
Etherpad for feedback [6]

Goals backlog [7]

New goals welcome
Each goal should be achievable in one cycle. If not, it should be broken up.
Some goals might require documentation for how they could be achieved.

Choose goals for Pike

What is really urgent? What can wait for six months?
Who is available and interested in contributing to the goal?

Feedback was also collected at the Barcelona summit [8]
Digest of feedback:

Most projects achieved the goal for Ocata, and there was interest in doing it on time.
Some confusion on acknowledging a goal and doing the work.
Some projects slow on the uptake and reviewing the patches.
Each goal should document where the “guides” are, and how to find them for help.
Achieving multiple goals in a single cycle wouldn’t be possible for all teams.

The OpenStack Product Working Group is also collecting feedback for goals [9]
Goals set for Pike:

Split out Tempest plugins [10]
Python 3 [11]

TC agreements from last meeting:

2 goals might be enough for the Pike cycle.
The deadline to define Pike goals would be Ocata-3 (Jan 23-27 week).

Full thread

POST /api-wg/news

Guidelines current review:

Add guidelines on usage of state vs. status [12]
Add guidelines for boolean names [13]
Clarify the status values in versions [14]
Define pagination guidelines [15]
Add API capabilities discovery guideline [16]

Full thread

 
Quelle: openstack.org

Introduction to YAML: Creating a Kubernetes deployment

The post Introduction to YAML: Creating a Kubernetes deployment appeared first on Mirantis | The Pure Play OpenStack Company.
In previous articles, we've been talking about how to use Kubernetes to spin up resources. So far, we've been working exclusively on the command line, but there's an easier and more useful way to do it: creating configuration files using YAML. In this article, we'll look at how YAML works and use it to define first a Kubernetes Pod, and then a Kubernetes Deployment.
YAML Basics
It's difficult to escape YAML if you're doing anything related to many software fields, particularly Kubernetes, SDN, and OpenStack. YAML, which stands for Yet Another Markup Language, or YAML Ain't Markup Language (depending on who you ask), is a human-readable, text-based format for specifying configuration-type information. For example, in this article we'll pick apart the YAML definitions for creating first a Pod, and then a Deployment.
Using YAML for K8s definitions gives you a number of advantages, including:

Convenience: You'll no longer have to add all of your parameters to the command line
Maintenance: YAML files can be added to source control, so you can track changes
Flexibility: You'll be able to create much more complex structures using YAML than you can on the command line

YAML is a superset of JSON, which means that any valid JSON file is also a valid YAML file. So on the one hand, if you know JSON and you're only ever going to write your own YAML (as opposed to reading other people's), you're all set. On the other hand, that's not very likely, unfortunately. Even if you're only trying to find examples on the web, they're most likely in (non-JSON) YAML, so we might as well get used to it. Still, there may be situations where the JSON format is more convenient, so it's good to know that it's available to you.
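As a quick illustration of that superset relationship, this JSON snippet is also perfectly valid YAML, and a YAML parser will read it without complaint:

{"apiVersion": "v1", "kind": "Pod"}

We'll build the same document in more conventional YAML below.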
Fortunately, there are only two types of structures you need to know about in YAML:

Lists
Maps

That's it. You might have maps of lists and lists of maps, and so on, but if you've got those two structures down, you're all set. That's not to say there aren't more complex things you can do, but in general, this is all you need to get started.
YAML Maps
Let's start by looking at YAML maps. Maps let you associate name-value pairs, which of course is convenient when you're trying to set up configuration information. For example, you might have a config file that starts like this:
---
apiVersion: v1
kind: Pod
The first line is a separator, and is optional unless you're trying to define multiple structures in a single file. From there, as you can see, we have two values, v1 and Pod, mapped to two keys, apiVersion and kind.
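For example, a single file can define several objects back to back, each introduced by its own separator (a minimal sketch; the Service document here is hypothetical and only illustrates the mechanics):

---
apiVersion: v1
kind: Pod
---
apiVersion: v1
kind: Service

Tools such as kubectl will generally process each document in such a file in turn.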
This kind of thing is pretty simple, of course, and you can think of it in terms of its JSON equivalent:
{
  "apiVersion": "v1",
  "kind": "Pod"
}
Notice that in our YAML version, the quotation marks are optional; the processor can tell that you're looking at a string based on the formatting.
You can also specify more complicated structures by creating a key that maps to another map, rather than a string, as in:
---
apiVersion: v1
kind: Pod
metadata:
  name: rss-site
  labels:
    app: web
In this case, we have a key, metadata, that has as its value a map with 2 more keys, name and labels. The labels key itself has a map as its value. You can nest these as far as you want to.
The YAML processor knows how all of these pieces relate to each other because we've indented the lines. In this example I've used 2 spaces for readability, but the number of spaces doesn't matter, as long as it's at least 1, and as long as you're CONSISTENT. For example, name and labels are at the same indentation level, so the processor knows they're both part of the same map; it knows that app is a value for labels because it's indented further.
Quick note: NEVER use tabs in a YAML file.
So if we were to translate this to JSON, it would look like this:
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "rss-site",
    "labels": {
      "app": "web"
    }
  }
}
Now let's look at lists.
YAML lists
YAML lists are literally a sequence of objects.  For example:
args:
  - sleep
  - "1000"
  - message
  - "Bring back Firefly!"
As you can see here, you can have virtually any number of items in a list, which is defined as items that start with a dash (-) indented from the parent.  So in JSON, this would be:
{
  "args": ["sleep", "1000", "message", "Bring back Firefly!"]
}
And of course, members of the list can also be maps:
---
apiVersion: v1
kind: Pod
metadata:
  name: rss-site
  labels:
    app: web
spec:
  containers:
    - name: front-end
      image: nginx
      ports:
        - containerPort: 80
    - name: rss-reader
      image: nickchase/rss-php-nginx:v1
      ports:
        - containerPort: 88
So as you can see here, we have a list of container “objects”, each of which consists of a name, an image, and a list of ports. Each list item under ports is itself a map that lists the containerPort and its value.
For completeness, let's quickly look at the JSON equivalent:
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "rss-site",
    "labels": {
      "app": "web"
    }
  },
  "spec": {
    "containers": [{
      "name": "front-end",
      "image": "nginx",
      "ports": [{
        "containerPort": 80
      }]
    },
    {
      "name": "rss-reader",
      "image": "nickchase/rss-php-nginx:v1",
      "ports": [{
        "containerPort": 88
      }]
    }]
  }
}
As you can see, we're starting to get pretty complex, and we haven't even gotten into anything particularly complicated! No wonder YAML is replacing JSON so fast.
So let's review.  We have:

maps, which are groups of name-value pairs
lists, which are sequences of individual items
maps of maps
maps of lists
lists of lists
lists of maps

Basically, whatever structure you want to put together, you can do it with those two structures.  
Creating a Pod using YAML
OK, so now that we've got the basics out of the way, let's look at putting this to use. We're going to first create a Pod, then a Deployment, using YAML.
If you haven't set up your cluster and kubectl, go ahead and check out this article series on setting up Kubernetes before you go on.  It's OK, we'll wait….

Back already?  Great!  Let's start with a Pod.
Creating the pod file
In our previous example, we described a simple Pod using YAML:
---
apiVersion: v1
kind: Pod
metadata:
  name: rss-site
  labels:
    app: web
spec:
  containers:
    - name: front-end
      image: nginx
      ports:
        - containerPort: 80
    - name: rss-reader
      image: nickchase/rss-php-nginx:v1
      ports:
        - containerPort: 88
Taking it apart one piece at a time, we start with the API version; here it's just v1. (When we get to Deployments, we'll have to specify a different version, because Deployments don't exist in v1.)
Next, we're specifying that we want to create a Pod; we might instead specify a Deployment, Job, Service, and so on, depending on what we're trying to achieve.
Next we specify the metadata. Here we're specifying the name of the Pod, as well as the label we'll use to identify the pod to Kubernetes.
Finally, we'll specify the actual objects that make up the pod. The spec property includes any containers, storage volumes, or other pieces that Kubernetes needs to know about, as well as properties such as whether to restart the container if it fails. You can find a complete list of Kubernetes Pod properties in the Kubernetes API specification, but let's take a closer look at a typical container definition:
...
spec:
  containers:
    - name: front-end
      image: nginx
      ports:
        - containerPort: 80
    - name: rss-reader
...
In this case, we have a simple, fairly minimal definition: a name (front-end), the image on which it's based (nginx), and one port on which the container will listen internally (80).  Of these, only the name is really required, but in general, if you want it to do anything useful, you'll need more information.
You can also specify more complex properties, such as a command to run when the container starts, arguments it should use, a working directory, or whether to pull a new copy of the image every time it's instantiated.  You can also specify even deeper information, such as the location of the container's exit log.  Here are the properties you can set for a Container (see the sketch after this list for a few of them in action):

name
image
command
args
workingDir
ports
env
resources
volumeMounts
livenessProbe
readinessProbe
lifecycle
terminationMessagePath
imagePullPolicy
securityContext
stdin
stdinOnce
tty

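To make a few of those concrete, here is a hedged sketch of a fuller container definition; the command, args, workingDir, and imagePullPolicy values are illustrative additions, not part of the original example:

    - name: front-end
      image: nginx
      command: ["nginx"]              # binary to run instead of the image default
      args: ["-g", "daemon off;"]     # keep nginx in the foreground, as containers expect
      workingDir: /usr/share/nginx/html
      imagePullPolicy: Always         # pull a fresh copy of the image on every start
      ports:
        - containerPort: 80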
Now let's go ahead and actually create the pod.
Creating the pod using the YAML file
The first step, of course, is to go ahead and create a text file.   Call it pod.yaml and add the following text, just as we specified it earlier:
---
apiVersion: v1
kind: Pod
metadata:
  name: rss-site
  labels:
    app: web
spec:
  containers:
    - name: front-end
      image: nginx
      ports:
        - containerPort: 80
    - name: rss-reader
      image: nickchase/rss-php-nginx:v1
      ports:
        - containerPort: 88
Save the file, and tell Kubernetes to create its contents:
> kubectl create -f pod.yaml
pod "rss-site" created
As you can see, K8s references the name we gave the Pod.  You can see that if you ask for a list of the pods:
> kubectl get pods
NAME       READY     STATUS              RESTARTS   AGE
rss-site   0/2       ContainerCreating   0          6s
If you check early enough, you can see that the pod is still being created.  After a few seconds, you should see the containers running:
> kubectl get pods
NAME       READY     STATUS    RESTARTS   AGE
rss-site   2/2       Running   0          14s
From here, you can test out the Pod (just as we did in the previous article), but ultimately we want to create a Deployment, so let's go ahead and delete it so there aren't any name conflicts:
> kubectl delete pod rss-site
pod "rss-site" deleted
Troubleshooting pod creation
Sometimes, of course, things don't go as you expect. Maybe you've got a networking issue, or you've mistyped something in your YAML file.  You might see an error like this:
> kubectl get pods
NAME       READY     STATUS         RESTARTS   AGE
rss-site   1/2       ErrImagePull   0          9s
In this case, we can see that one of our containers started up just fine, but there was a problem with the other.  To track down the problem, we can ask Kubernetes for more information on the Pod:
> kubectl describe pod rss-site
Name:           rss-site
Namespace:      default
Node:           10.0.10.7/10.0.10.7
Start Time:     Sun, 08 Jan 2017 08:36:47 +0000
Labels:         app=web
Status:         Pending
IP:             10.200.18.2
Controllers:    <none>
Containers:
 front-end:
   Container ID:               docker://a42edaa6dfbfdf161f3df5bc6af05e740b97fd9ac3d35317a6dcda77b0310759
   Image:                      nginx
   Image ID:                   docker://sha256:01f818af747d88b4ebca7cdabd0c581e406e0e790be72678d257735fad84a15f
   Port:                       80/TCP
   State:                      Running
     Started:                  Sun, 08 Jan 2017 08:36:49 +0000
   Ready:                      True
   Restart Count:              0
   Environment Variables:      <none>
 rss-reader:
   Container ID:
   Image:                      nickchase/rss-php-nginx
   Image ID:
   Port:                       88/TCP
   State:                      Waiting
    Reason:                   ErrImagePull
   Ready:                      False
   Restart Count:              0
   Environment Variables:      <none>
Conditions:
 Type          Status
 Initialized   True
 Ready         False
 PodScheduled  True
No volumes.
QoS Tier:       BestEffort
Events:
 FirstSeen     LastSeen        Count   From                    SubobjectPath                   Type            Reason                  Message
 ---------     --------        -----   ----                    -------------                   ----            ------                  -------
 45s           45s             1       {default-scheduler }                                    Normal          Scheduled               Successfully assigned rss-site to 10.0.10.7
 44s           44s             1       {kubelet 10.0.10.7}     spec.containers{front-end}      Normal          Pulling                 pulling image "nginx"
 45s           43s             2       {kubelet 10.0.10.7}                                     Warning         MissingClusterDNS       kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
 43s           43s             1       {kubelet 10.0.10.7}     spec.containers{front-end}      Normal          Pulled                  Successfully pulled image "nginx"
 43s           43s             1       {kubelet 10.0.10.7}     spec.containers{front-end}      Normal          Created                 Created container with docker id a42edaa6dfbf
 43s           43s             1       {kubelet 10.0.10.7}     spec.containers{front-end}      Normal          Started                 Started container with docker id a42edaa6dfbf
 43s           29s             2       {kubelet 10.0.10.7}     spec.containers{rss-reader}     Normal          Pulling                 pulling image "nickchase/rss-php-nginx"
 42s           26s             2       {kubelet 10.0.10.7}     spec.containers{rss-reader}     Warning         Failed                  Failed to pull image "nickchase/rss-php-nginx": Tag latest not found in repository docker.io/nickchase/rss-php-nginx
 42s           26s             2       {kubelet 10.0.10.7}                                     Warning         FailedSync              Error syncing pod, skipping: failed to "StartContainer" for "rss-reader" with ErrImagePull: "Tag latest not found in repository docker.io/nickchase/rss-php-nginx"
 41s           12s             2       {kubelet 10.0.10.7}     spec.containers{rss-reader}     Normal          BackOff                 Back-off pulling image "nickchase/rss-php-nginx"
 41s           12s             2       {kubelet 10.0.10.7}                                     Warning         FailedSync              Error syncing pod, skipping: failed to "StartContainer" for "rss-reader" with ImagePullBackOff: "Back-off pulling image "nickchase/rss-php-nginx""
As you can see, there's a lot of information here, but we're most interested in the Events, specifically once the warnings and errors start showing up.  From here I was able to quickly see that I'd forgotten to add the :v1 tag to my image, so it was looking for the :latest tag, which didn't exist.
To fix the problem, I first deleted the Pod, then fixed the YAML file and started again. Alternatively, I could have fixed the repo so that Kubernetes could find what it was looking for, and it would have continued on as though nothing had happened.
Now that we've successfully gotten a Pod running, let's look at doing the same for a Deployment.
Creating a Deployment using YAML
Finally, we're down to creating the actual Deployment.  Before we do that, though, it's worth understanding what it is we're actually doing.
K8s, remember, manages container-based resources. In the case of a Deployment, you're creating a set of resources to be managed. For example, where we created a single instance of the Pod in the previous example, we might create a Deployment to tell Kubernetes to manage a set of replicas of that Pod (literally, a ReplicaSet) to make sure that a certain number of them are always available.  So we might start our Deployment definition like this:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rss-site
spec:
  replicas: 2
Here we're specifying the apiVersion as extensions/v1beta1 (remember, Deployments aren't in v1, as Pods were) and that we want a Deployment. Next we specify the name. We can also specify any other metadata we want, but let's keep things simple for now.
Finally, we get into the spec. In the Pod spec, we gave information about what actually went into the Pod; we'll do the same thing here with the Deployment. We'll start, in this case, by saying that whatever Pods we deploy, we always want to have 2 replicas. You can set this number however you like, of course, and you can also set properties such as the selector that defines the Pods affected by this Deployment, or the minimum number of seconds a pod must be up without any errors before it's considered “ready”.  You can find a full list of the Deployment specification properties in the Kubernetes v1beta1 API reference.
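For instance, the spec could be extended like this (a sketch; the selector and minReadySeconds values are illustrative additions, not part of the example we build below):

spec:
  replicas: 2
  minReadySeconds: 10     # a pod must run error-free for 10s before it counts as ready
  selector:
    matchLabels:
      app: web            # this Deployment manages Pods carrying this label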
OK, so now that we know we want 2 replicas, we need to answer the question: “Replicas of what?”  They're defined by templates:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rss-site
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: front-end
          image: nginx
          ports:
            - containerPort: 80
        - name: rss-reader
          image: nickchase/rss-php-nginx:v1
          ports:
            - containerPort: 88
Look familiar?  It should; it's virtually identical to the Pod definition in the previous section, and that's by design. Templates are simply definitions of objects to be replicated, objects that might, in other circumstances, be created on their own.
Now let's go ahead and create the deployment.  Add the YAML to a file called deployment.yaml and point Kubernetes at it:
> kubectl create -f deployment.yaml
deployment "rss-site" created
To see how it's doing, we can check on the deployments list:
> kubectl get deployments
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
rss-site   2         2         2            1           7s
As you can see, Kubernetes has started both replicas, but only one is available. You can check the event log by describing the Deployment, as before:
> kubectl describe deployment rss-site
Name:                   rss-site
Namespace:              default
CreationTimestamp:      Mon, 09 Jan 2017 17:42:14 +0000
Labels:                 app=web
Selector:               app=web
Replicas:               2 updated | 2 total | 1 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:         <none>
NewReplicaSet:          rss-site-4056856218 (2/2 replicas created)
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type            Reason                  Message
 ---------     --------        -----   ----                            -------------   ----            ------                  -------
 46s           46s             1       {deployment-controller }                        Normal          ScalingReplicaSet       Scaled up replica set rss-site-4056856218 to 2
As you can see here, there's no problem; it just hasn't finished scaling up yet. Another few seconds, and we can see that both Pods are running:
> kubectl get deployments
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
rss-site   2         2         2            2           1m
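Incidentally, once the Deployment exists, you don't have to edit the YAML file just to change the replica count; kubectl can adjust it in place (a usage sketch):

> kubectl scale deployment rss-site --replicas=3
deployment "rss-site" scaled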
What we've seen so far
OK, so let's review. We've basically covered three topics:

YAML is a human-readable, text-based format that lets you easily specify configuration-type information by using a combination of maps of name-value pairs and lists of items (and nested versions of each).
YAML is the most convenient way to work with Kubernetes objects, and in this article we looked at creating Pods and Deployments.
You can get more information on running (or should-be-running) objects by asking Kubernetes to describe them.

So that's our basic YAML tutorial. We're going to be tackling a great deal of Kubernetes-related content in the coming months, so if there's something specific you want to hear about, let us know in the comments, or tweet us at @MirantisIT.
Quelle: Mirantis

Streaming your next all-hands meeting? Prep execs for their video debut

A CEO may be a great strategist and leader in the office, perhaps even a great public speaker. Yet many executives find it challenging to communicate their passion and charisma when speaking in front of a video camera. Too often, the result is a bland performance that doesn't inspire employees' confidence. That’s a problem, since video is becoming an essential way for executives to communicate with their global workforces.
Brian Burkhart, President at presentation development firm SquarePlanet and an instructor at Northwestern University's Farley Center for Entrepreneurship and Innovation, says bad on-screen performances happen because a live audience provides a palpable energy that the mechanical lens of a camera can't provide.
“There is something about live human interaction that you simply can't recreate with a camera,” Burkhart says.
He says that business leaders can avoid a dud performance by following a few tips:
Don’t just prepare between meetings
Burkhart says that the best way to ensure a great performance is by spending the necessary time preparing. “I see many CEOs walk into the room where the camera is set up with their email buzzing, phones vibrating and a very full calendar. They ask, ‘Now, what is it that I'm supposed to talk about?’ right before the camera is turned on.” That tells Burkhart that the CEO is not fully present.
It's all about having the right mindset, he says. The trick is to view the speaking engagement as an opportunity to connect with an audience by sharing the most important messages.
“It's a little bit like making a soup. Yes, you can prepare the dish in 30 minutes, but if you let it simmer for six hours, then it's going to be way better,” says Burkhart. “When CEOs spend time marinating on the message, it becomes fully authentic instead of just reading a script.”
Bring energy and excitement on camera
The presenter must compensate for the lower excitement and energy levels of video compared to those of a live event. To pull it off, Burkhart offers three tips:

Bring the energy: If a CEO gives 50 percent of her energy in a live speech, then she should give 100 percent on camera. Small changes, such as standing up instead of sitting behind a desk, can make a big difference in the energy the audience perceives.
Talk loudly: Presenters often assume that since they are wearing a microphone, they can use their normal voice. This isn't true. Speaking louder translates into a higher energy level.
Be animated: Use voice inflection, facial expressions and body language to convey energy and excitement. Smile bigger than you think you should. If a CEO has a look of dread, terror or pain as she presents, audiences will notice.

Videos, even live video streams, rarely disappear. Those streams are often recorded for reuse later, or shared on YouTube or the company's website. But with the right mindset, busy executives can ensure their energy translates into an effective, engaging presentation.

“The bottom line is really truly understanding who you are trying to connect to on the other side of the cameras,” says Burkhart. “It's not just a lens and metal, but an audience of human beings.”
Learn more about IBM Cloud video solutions.
The post Streaming your next all-hands meeting? Prep execs for their video debut appeared first on news.
Quelle: Thoughts on Cloud
