The First DockerCon with Windows Containers

DockerCon 2017 is only a few weeks away, and the schedule is available now on the DockerCon Agenda Builder. This will be the first DockerCon since Windows Server 2016 was released, bringing native support for containers to Windows. There will be plenty of content for Windows developers and admins – here are some of the standouts.

Windows and .NET Sessions
On the main stages, there will be hours of content dedicated to Windows and .NET.
Docker for .NET Developers
Michele Bustamante, CIO of Solliance, looks at what Docker can do for .NET applications. Michele will start with a full .NET Framework application and show how to run it in a Windows container. Then Michele will move on to .NET Core and show how the new cross-platform framework can build apps which run in Windows or Linux containers, making for true portability throughout the data center and the cloud.
Escape From Your VMs with Image2Docker
I’ll be presenting with Docker Captain Jeff Nickoloff, covering the Image2Docker tool, which automates app migration from virtual machines to Docker images. There’s Image2Docker for Linux, and Image2Docker for Windows. We’ll demonstrate both, porting an app with a Linux front end and a Windows back end from VMs to Docker images. Then we’ll run the whole application in containers on one Docker swarm, a cluster with Linux and Windows nodes.
Beyond “” – the Path to Windows and Linux Parity in Docker
Taylor Brown and Dinesh Govindasamy from Microsoft will talk about how Docker support was built for Windows Server 2016. Their session will cover the technical implementation in Windows, the current gaps between Docker on Linux and Docker on Windows, and the plans to bring parity to the Windows experience. This session is from the team at Microsoft who actually delivered the kernel changes to support Windows containers running in Docker.
Creating Effective Images
Abby Fuller from AWS will talk about making efficient Docker images. Optimized Docker images build quickly, are as small as possible, and include only the components needed to run the app. Abby will talk about image layers, caching, Dockerfile best practices, and Docker Security Scanning, in a cross-platform session which looks at Linux and Windows Docker images.
Other Sessions
Check out the topics in the Agenda Builder for sessions from speakers who have been using Docker in production and have seen a huge change in their ability to deliver quality software, quickly. These are Linux case studies, but the principles apply equally to Windows projects.

In Architecture, Cornell University uses Docker Datacenter to run monolithic legacy apps alongside greenfield microservice apps – with consistent monitoring and management.
In Production, PayPal is on a journey to migrate all of its legacy apps to Docker, using Docker as its production application platform.
In Enterprise, MetLife delivered a new microservice application running on Docker in 5 months, embracing new approaches to design, test and engineering.

Workshops
Workshops are instructor-led sessions, which run on the Monday of DockerCon. There are a lot of great sessions to choose from, but for Windows folks these two are particularly well-suited:
Learn Docker. Get to grips with the basics of Docker, learning about images and containers before moving on to networking, orchestration, security and volumes. This session will focus on Linux containers, which you can run with Docker for Windows, but the principles you’ll learn apply equally to Windows containers.
Modernizing Monolithic ASP.NET Applications with Docker. A workshop focused on Windows and ASP.NET. You’ll learn how to run a monolithic ASP.NET app in Docker without changing code, and then see how to break features out from the main app and run them in separate Docker containers, giving you a path to modernize your app without rebuilding it.
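To give a rough sense of the “no code changes” approach, lifting an existing ASP.NET app into a Windows container can take as little as a couple of Dockerfile lines; this is only a sketch, and the image tag and paths are illustrative rather than taken from the workshop materials:
# Dockerfile: run an existing ASP.NET app on IIS without touching its code
FROM microsoft/aspnet
COPY ./PublishedWebApp /inetpub/wwwroot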
Hands-On Labs
As well as the main sessions and guided workshops, there will be hands-on labs for you to experience Docker on Windows. We’ll provision a Docker environment for you in Azure, and provide self-paced learning guides. The hands-on labs will cover:
Docker on Windows 101. Get started with Docker on Windows, and learn why the world is moving to containers. You’ll start by exploring the Windows Docker images from Microsoft, then you’ll run some simple applications and learn how to scale apps across multiple servers running Docker in swarm mode. (A short command sketch follows these lab descriptions.)
Modernize .NET Apps – for Ops. An admin guide to migrating .NET apps to Docker images, showing how the build, ship, run workflow makes application maintenance fast and risk-free. You’ll start by migrating a sample app to Docker, and then learn how to upgrade the application, patch the Windows version the app uses, and patch the Windows version on the host – all with zero downtime.
Modernize .NET Apps – for Devs. A developer guide to app migration, showing how the Docker platform lets you update a monolithic application without doing a full rebuild. You’ll start with a sample app and see how to break components out into separate units, plumbing the units together with the Docker platform and the tried-and-trusted applications available on Docker Hub.
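If you haven’t tried Windows containers yet, here is a minimal sketch of the kind of commands the labs walk through; the image names are illustrative, and a Windows Server 2016 host with Docker is assumed:
# Pull a Windows base image from Microsoft and start an interactive container
docker pull microsoft/windowsservercore
docker run -it microsoft/windowsservercore powershell
# Turn the host into a one-node swarm and scale a service across it
docker swarm init
docker service create --name web --replicas 3 microsoft/iis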
Book Your Ticket Now!
DockerCon is always a sell-out conference, so book your DockerCon tickets while there are still spaces left. If you follow the Docker Captains on Twitter, you may find they have discount codes to share.

The post The First DockerCon with Windows Containers appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

A Hard Road from Dev to Ops

Last June, in his provocative blog, Mirantis’ co-founder Boris Renski proclaimed to the world that infrastructure software was dead. That blog was a battle cry for us as a company, and the beginning of an organizational evolution away from our exclusive focus on delivering software and towards providing customers with a turnkey infrastructure experience.
It was clear that the future consumption model for infrastructure is defined by public clouds where everything is self-service, API-driven, fully managed and continuously delivered. It was also clear that most vendors, Mirantis included, had misinterpreted where the core of cloud disruption was, overemphasizing disruption in software capabilities around “self-service and API-driven,” while largely ignoring the disruption in delivery approach codified as “fully managed and continuously delivered.” Private cloud had become a label for the new type of software, whereas public cloud was a label for a combination of software and, most importantly, a new delivery model. Private cloud had failed and we needed to change.
As we started piercing the market with our new Build-Operate-Transfer delivery model for open cloud last year, we pulled the trigger on changing the company internally. Mirantis had to reinvent itself, re-examine every part of the company and ask if it was built correctly and/or was needed in order to deliver an awesome customer operations experience. Organically and through acquisition, we added new engineering and operations folks who brought with them a relentless focus on keeping things simple, and who emphasized continuously integrating and managing change. We moved away from using advanced computer science as the only means of avoiding failures, in favor of selecting simple configurations that are less likely to fail and investing heavily in the monitoring and intelligence that predicts failure before it occurs and proactively alerts the operator, avoiding failures altogether.
In the meantime, and despite the challenges, things were picking up in the field. We weren’t alone in realizing that cloud operations are hard, so many OpenStack DIYers that had failed at operations got intrigued by our model. We started winning big managed cloud deals, and made meaningful strides in transitioning our existing marquee accounts like AT&T and VW toward managed open cloud. Most importantly, we weren’t just winning new deals; we were expanding existing ones – a much more important sign of delivering customer value. Today, some of the world’s most iconic companies are running their customer-facing businesses on our managed clouds without needing to pay much attention to how the cloud is run. They simply expect that it works.
Now we are staring at an explosion of new clouds in our sales pipeline. In order to scale and provide an awesome user experience, this week we’ve announced the final set of organizational changes that will complete our transformation, putting our 12 months of difficult transition behind us:

We are simplifying the services we offer in our portfolio, focusing less on one-off cloud infrastructure integration and more on strategy, site readiness and cloud tenant on-boarding and care.

We are combining our 24×7 software support team and our managed operations team into a single, focused customer success team.

Since many of our customers don’t accept managed services delivered from Russia and Ukraine (due to regulatory, compliance and corporate security policies), we are shifting roughly 70 jobs from those locations to the U.S., Poland and the Czech Republic.

As founders, we felt it was important to share this update publicly, not just because we want the world to know that Mirantis is changing, but also because this transformation is personal to us. We founded Mirantis back in 2000, originally as a small IT services firm, and following this change, some of our best friends and colleagues who have travelled with us for well over a decade will no longer be with the company. We want those who are leaving to know that we are humbled by your brilliance and eternally grateful to have worked alongside such committed and true friends.
As we look at the last twelve months, we’re proud of the change we persevered through as a company. Evolving a company is never easy – for management, employees, partners or customers. Many in our space will need to go through a similar evolution to stay relevant in the public cloud world, and not everybody will make it through. We are fully determined that Mirantis will be part of the pack that does.
Onwards and upwards!
The post A Hard Road from Dev to Ops appeared first on Mirantis | Pure Play Open Cloud.
Quelle: Mirantis

Docker and Cisco Launch Cisco Validated Designs for Cisco UCS and Flexpod Infrastructures on Docker Enterprise Edition

Last week, Docker and Cisco jointly announced a strategic alliance between our organizations. Based on customer feedback, one of the initial joint initiatives is the validation of Docker Enterprise Edition (which includes Docker Datacenter) against Cisco UCS and Nexus infrastructures. We are excited to announce that Cisco Validated Designs (CVDs) for Cisco UCS and FlexPod on Docker Enterprise Edition (EE) are immediately available.
CVDs represent the gold-standard reference architecture methodology for enterprise customers looking to deploy an end-to-end solution. The CVDs follow defined processes and cover not only provisioning and configuration of the solution, but also testing and documenting the solution against performance, scale and availability/failure – something that requires a lab setup with a significant amount of hardware reflecting actual production deployments. This enables our customers to achieve faster, more reliable and predictable implementations.
The two new CVDs published for container management offer enterprises a well-designed, end-to-end, lab-tested configuration for Docker EE on Cisco UCS and FlexPod Datacenter. The collaborative engineering effort between Cisco, NetApp and Docker provides enterprises with best-of-breed solutions for Docker Datacenter on Cisco infrastructure and NetApp enterprise storage to run stateless or stateful containers.
The first CVD includes two configurations:

A 4-node bare-metal deployment on rack servers, co-locating the Docker UCP controller and DTR on 3 manager nodes in a highly available configuration, plus 1 UCP worker node.

A 10-node bare-metal deployment on blade servers, with 3 nodes for UCP controllers, 3 nodes for DTR and the remaining 4 nodes as UCP worker nodes.

The second CVD is based on FlexPod Datacenter, developed in collaboration with NetApp, using Cisco UCS blade servers and NetApp FAS and E-Series storage.
These CVDs leverage the native user experience of Docker EE, along with Cisco UCS converged infrastructure capabilities, to provide simple management control planes that orchestrate compute, network and storage provisioning for application containers to run in a secure and scalable environment. They also use built-in security features of UCS such as I/O isolation through VLANs, secure boot of bare-metal hosts, and physical storage access path isolation through the Cisco VIC’s virtual network interfaces. The combination of UCS and Docker EE’s built-in security, such as Secrets Management, Docker Content Trust, and Docker Security Scanning, provides a secure end-to-end Container-as-a-Service (CaaS) solution.
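For readers who haven’t used these features, here is a minimal sketch of what they look like from the Docker CLI; the secret value and service are purely illustrative:
# Require signed images for pushes and pulls (Docker Content Trust)
export DOCKER_CONTENT_TRUST=1
# Store a credential as a swarm-managed secret and hand it to a service
echo "s3cr3t" | docker secret create db_password -
docker service create --name web --secret db_password nginx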

Both solutions use Cisco UCS Service Profiles to provision and configure the UCS servers and their I/O properties, automating the complete installation process. Docker commands and Ansible were used for the Docker EE installation. After configuring proper certificates across the DTR and UCP nodes, we were able to push and pull images successfully. Container images such as busybox and nginx, and applications such as WordPress and the example voting application, were pulled from Docker Hub (a central repository for Docker developers to store container images) to test and validate the configuration.
The scaling tests included the deployment of containers and applications. We were able to deploy 700+ containers on a single node and more than 7000 containers across 10 nodes without performance degradation. The scaling tests also covered dynamically adding and deleting nodes to ensure the cluster remains responsive during such changes. These scaling and resiliency results come from swarm mode (container orchestration tightly integrated into Docker EE with Docker Datacenter) and Cisco’s Nexus switches, which provide high-performance, low-latency networking.
The failover tests covered node shutdown and reboot, as well as inducing faults from the Cisco Fabric Interconnects down to the adapters on the Cisco UCS blade servers. When a UCP manager node was shut down or rebooted, we validated that users were still able to access containers through the Docker UCP UI or CLI. The system started up quickly after a reboot, and the UCP cluster and services were restored. Hardware failures left the cluster operating at reduced capacity, but there was no single point of failure.
As part of the FlexPod CVD, NFS was configured for the Docker Trusted Registry (DTR) nodes for shared access. FlexPod is configured with NetApp enterprise-class storage, and the NetApp Docker Volume Plugin (nDVP) provides direct integration with the Docker ecosystem for NetApp’s ONTAP, E-Series and SolidFire storage. FlexPod uses a NetApp ONTAP storage backend for DTR as well as container storage management, and container volumes deployed this way can be verified using NetApp OnCommand System Manager.
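As a rough illustration of what that integration looks like from the Docker side (assuming nDVP is installed and registered under its default driver name, netapp), a NetApp-backed volume is created and consumed like any other named volume:
docker volume create -d netapp demo-vol
docker run -d -v demo-vol:/data nginx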
Please refer to the CVDs for detailed configuration information.

FlexPod Datacenter with Docker Datacenter for Container Management
Cisco UCS Infrastructure with Docker Datacenter for Container Management


The post Docker and Cisco Launch Cisco Validated Designs for Cisco UCS and Flexpod Infrastructures on Docker Enterprise Edition appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Application performance blues? Dear Cloudie can help

Interested in adopting application performance monitoring? Let’s take a look at how you can embrace automation in the style of one of my favorite things to read: a classic, therapeutic newspaper advice column.
Dear Cloudie,
It was a beautiful spring Sunday with great weather outside. But it was ruined once again by an alert. My customer was having issues with their web application. The ticket said that the app was loading slowly but didn’t elaborate further. As a network admin, I then had to pull Wireshark traces and TCP dumps and comb through those logs. But I was still unsure of the root cause. So I had to guess that it was the network’s fault and take the blame. I have gone on one too many Sunday hunts for a needle in a haystack. Please help.
— Another Network Admin Buried Deep in Haystacks
Dear Not-Just-Another Network Admin,
Those of us with network responsibilities often worry about application deployment and delivery. But many of us desperately lack architectural innovation and access to real-time telemetry.
For future incidents, I recommend you research application performance monitoring technologies. This will equip you well for when incidents occur in the future. And they will.
Here’s a simple, three-step methodology that will help you get started.
1. Architecture
Some may try to trick you into believing that you can achieve the results you seek with traditional approaches. But you need infrastructure that not only captures real-time telemetry but also can process millions of data points in real time without any performance impact.
Solutions built on software-defined principles separate the data plane from the control plane. This gives you flexibility. The data plane can just capture real-time application traffic telemetry and feed it to the off-path control plane. Your control plane can analyze these metrics and present the insights in a visual dashboard without impacting performance.
2. Analytics
Of the various elements of application traffic that you can measure, you need to identify the relevant metrics. Then you can configure your tools to collect real-time telemetry from your application instances.
You will need insights into:

End-user performance
Page load times
Media and file accesses
URLs and URIs accessed
Response codes
Client analytics such as location, device types, operating system versions and browsers

Together, all of this can average millions of data points per second. Traditional computing models can neither scale nor process potential petabytes of data without performance degradation.
3. Automate
I’m reminded of an IT joke: “Automate painful processes and now you do stupid things faster.”
We adopt cloud-native architectures to achieve flexibility, agility and continuous delivery. Automation plays a critical role in achieving these benefits. Based on the insights you get from real-time application analytics, your network team can automatically scale resources to mirror traffic patterns. Application teams win too – they can automate application services, thereby shortening the development life cycle.
With these three steps to get you and your team started, you will notice that your teams and your infrastructure solutions walk in sync.
And before you know it, sunny days ruined by alerts will be a thing of past.
Yours,
Cloudie
PS: To learn more about APM, join us at IBM InterConnect, March 19-23, 2017. Or download the DevOps APM for Dummies ebook to learn how teams work together to continuously deliver secure, available, application insights.
The post Application performance blues? Dear Cloudie can help appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Kubernetes Replication Controller, Replica Set and Deployments: Understanding replication options

As a container management tool, Kubernetes was designed to orchestrate multiple containers and replication, and in fact there are currently several ways to do it. In this article, we’ll look at three options: Replication Controllers, Replica Sets, and Deployments.
What is Kubernetes replication for?
Before we go into how you would do replication, let’s talk about why.  Typically you would want to replicate your containers (and thereby your applications) for several reasons, including:

Reliability: By having multiple versions of an application, you prevent problems if one or more fails.  This is particularly true if the system replaces any containers that fail.
Load balancing: Having multiple versions of a container enables you to easily send traffic to different instances to prevent overloading of a single instance or node. This is something that Kubernetes does out of the box, making it extremely convenient.
Scaling: When load does become too much for the number of existing instances, Kubernetes enables you to easily scale up your application, adding additional instances as needed.

Replication is appropriate for numerous use cases, including:

Microservices-based applications: In these cases, multiple small applications provide very specific functionality.
Cloud native applications: Because cloud-native applications are based on the theory that any component can fail at any time, replication is a perfect environment for implementing them, as multiple instances are baked into the architecture.
Mobile applications: Mobile applications can often be architected so that the mobile client interacts with an isolated version of the server application.

Kubernetes has multiple ways in which you can implement replication.
Types of Kubernetes replication
In this article, we’ll discuss three different forms of replication: the Replication Controller, Replica Sets, and Deployments.
Replication Controller
The Replication Controller is the original form of replication in Kubernetes.  It’s being replaced by Replica Sets, but it’s still in wide use, so it’s worth understanding what it is and how it works.

A Replication Controller is a structure that enables you to easily create multiple pods, then make sure that that number of pods always exists. If a pod does crash, the Replication Controller replaces it.

Replication Controllers also provide other benefits, such as the ability to scale the number of pods, and to update or delete multiple pods with a single command.

You can create a Replication Controller with an imperative command, or declaratively, from a file.  For example, create a new file called rc.yaml and add the following text:
apiVersion: v1
kind: ReplicationController
metadata:
 name: soaktestrc
spec:
 replicas: 3
 selector:
   app: soaktestrc
 template:
   metadata:
     name: soaktestrc
     labels:
       app: soaktestrc
   spec:
     containers:
      - name: soaktestrc
        image: nickchase/soaktest
        ports:
        - containerPort: 80
Most of this structure should look familiar from our discussion of Deployments; we’ve got the name of the actual Replication Controller (soaktestrc) and we’re designating that we should have 3 replicas, each of which is defined by the template.  The selector defines how we know which pods belong to this Replication Controller.

Now tell Kubernetes to create the Replication Controller based on that file:
# kubectl create -f rc.yaml
replicationcontroller "soaktestrc" created
Let’s take a look at what we have using the describe command:
# kubectl describe rc soaktestrc
Name:           soaktestrc
Namespace:      default
Image(s):       nickchase/soaktest
Selector:       app=soaktestrc
Labels:         app=soaktestrc
Replicas:       3 current / 3 desired
Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type   Reason                   Message
 ———     ——–        —–   —-                            ————-   ————–                  ——-
 1m            1m              1       {replication-controller }                       Normal SuccessfulCreate Created pod: soaktestrc-g5snq
 1m            1m              1       {replication-controller }                       Normal SuccessfulCreate Created pod: soaktestrc-cws05
 1m            1m              1       {replication-controller }                       Normal SuccessfulCreate Created pod: soaktestrc-ro2bl
As you can see, we’ve got the Replication Controller, and there are 3 replicas, of the 3 that we wanted.  All 3 of them are currently running.  You can also see the individual pods listed underneath, along with their names.  If you ask Kubernetes to show you the pods, you can see those same names show up:
# kubectl get pods
NAME               READY     STATUS    RESTARTS   AGE
soaktestrc-cws05   1/1       Running   0          3m
soaktestrc-g5snq   1/1       Running   0          3m
soaktestrc-ro2bl   1/1       Running   0          3m
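As mentioned earlier, a Replication Controller can also be scaled with a single command.  For example, the following (a sketch, with the replica count chosen arbitrarily) would bring the controller up to five pods:
# kubectl scale --replicas=5 rc/soaktestrc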
Next we’ll look at Replica Sets, but first let’s clean up:
# kubectl delete rc soaktestrc
replicationcontroller "soaktestrc" deleted

# kubectl get pods
As you can see, when you delete the Replication Controller, you also delete all of the pods that it created.
Replica Sets
Replica Sets are a sort of hybrid, in that they are in some ways more powerful than Replication Controllers, and in others they are less powerful.

Replica Sets are declared in essentially the same way as Replication Controllers, except that they have more options for the selector.  For example, we could create a Replica Set like this:
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
 name: soaktestrs
spec:
 replicas: 3
 selector:
   matchLabels:
     app: soaktestrs
 template:
   metadata:
     labels:
       app: soaktestrs
       environment: dev
   spec:
      containers:
      - name: soaktestrs
        image: nickchase/soaktest
        ports:
        - containerPort: 80
In this case, it’s more or less the same as when we were creating the Replication Controller, except we’re using matchLabels instead of label.  But we could just as easily have said:

spec:
 replicas: 3
 selector:
    matchExpressions:
      - {key: app, operator: In, values: [soaktestrs, soaktestrs, soaktest]}
      - {key: teir, operator: NotIn, values: [production]}
 template:
   metadata:

In this case, we’re looking at two different conditions:

The app label must be soaktestrs or soaktest
The tier label (if it exists) must not be production

Let’s go ahead and create the Replica Set and get a look at it:
# kubectl create -f replicaset.yaml
replicaset "soaktestrs" created

# kubectl describe rs soaktestrs
Name:           soaktestrs
Namespace:      default
Image(s):       nickchase/soaktest
Selector:       app in (soaktest,soaktestrs),teir notin (production)
Labels:         app=soaktestrs
Replicas:       3 current / 3 desired
Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type    Reason                   Message
 ———     ——–        —–   —-                            ————-   ————–                   ——-
 1m            1m              1       {replicaset-controller }                        Normal  SuccessfulCreate Created pod: soaktestrs-it2hf
 1m            1m              1       {replicaset-controller }                       Normal  SuccessfulCreate Created pod: soaktestrs-kimmm
 1m            1m              1       {replicaset-controller }                        Normal  SuccessfulCreate Created pod: soaktestrs-8i4ra

# kubectl get pods
NAME               READY     STATUS    RESTARTS   AGE
soaktestrs-8i4ra   1/1       Running   0          1m
soaktestrs-it2hf   1/1       Running   0          1m
soaktestrs-kimmm   1/1       Running   0          1m
As you can see, the output is pretty much the same as for a Replication Controller (except for the selector), and for most intents and purposes, they are similar.  The major difference is that the rolling-update command works with Replication Controllers, but won’t work with a Replica Set.  This is because Replica Sets are meant to be used as the backend for Deployments.
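For example, had we kept the Replication Controller around, a rolling update to a new image would look something like this (the :v2 tag is hypothetical):
# kubectl rolling-update soaktestrc --image=nickchase/soaktest:v2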

Let’s clean up before we move on.
# kubectl delete rs soaktestrs
replicaset "soaktestrs" deleted

# kubectl get pods
Again, the pods that were created are deleted when we delete the Replica Set.
Deployments
Deployments are intended to replace Replication Controllers.  They provide the same replication functions (through Replica Sets) and also the ability to roll out changes and roll them back if necessary.
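Rollbacks are handled by the rollout subcommand; for instance, the following (assuming the soaktest Deployment we create below) would revert the Deployment to its previous revision:
# kubectl rollout undo deployment/soaktest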

Let’s create a simple Deployment using the same image we’ve been using.  First create a new file, deployment.yaml, and add the following:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
 name: soaktest
spec:
 replicas: 5
 template:
   metadata:
     labels:
       app: soaktest
   spec:
     containers:
      - name: soaktest
        image: nickchase/soaktest
        ports:
        - containerPort: 80
Now go ahead and create the Deployment:
# kubectl create -f deployment.yaml
deployment "soaktest" created
Now let’s go ahead and describe the Deployment:
# kubectl describe deployment soaktest
Name:                   soaktest
Namespace:              default
CreationTimestamp:      Sun, 05 Mar 2017 16:21:19 +0000
Labels:                 app=soaktest
Selector:               app=soaktest
Replicas:               5 updated | 5 total | 5 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:         <none>
NewReplicaSet:          soaktest-3914185155 (5/5 replicas created)
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type    Reason                   Message
 ———     ——–        —–   —-                            ————-   ————–                   ——-
 38s           38s             1       {deployment-controller }                        Normal  ScalingReplicaSet        Scaled up replica set soaktest-3914185155 to 3
 36s           36s             1       {deployment-controller }                        Normal  ScalingReplicaSet        Scaled up replica set soaktest-3914185155 to 5
As you can see, rather than listing the individual pods, Kubernetes shows us the Replica Set.  Notice that the name of the Replica Set is the Deployment name and a hash value.

A complete discussion of updates is out of scope for this article (we’ll cover it in the future), but there are a couple of interesting things to note here:

The StrategyType is RollingUpdate. This value can also be set to Recreate.
By default we have a minReadySeconds value of 0; we can change that value if we want pods to be up and running for a certain amount of time (say, to load resources) before they’re truly considered “ready”.
The RollingUpdateStrategy shows that we have a limit of 1 maxUnavailable, meaning that when we’re updating the Deployment, we can have up to 1 missing pod before it’s replaced, and 1 maxSurge, meaning we can have one extra pod as we scale the new pods back up. (A sketch of how to set these fields appears after this list.)

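These values can be tuned directly in the Deployment spec if the defaults don’t suit you; here is a minimal sketch, with the numbers chosen purely for illustration:
spec:
 minReadySeconds: 10
 strategy:
   type: RollingUpdate
   rollingUpdate:
     maxUnavailable: 1
     maxSurge: 2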
As you can see, the Deployment is backed, in this case, by Replica Set soaktest-3914185155. If we go ahead and look at the list of actual pods…
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3914185155-7gyja   1/1       Running   0          2m
soaktest-3914185155-lrm20   1/1       Running   0          2m
soaktest-3914185155-o28px   1/1       Running   0          2m
soaktest-3914185155-ojzn8   1/1       Running   0          2m
soaktest-3914185155-r2pt7   1/1       Running   0          2m
… you can see that their names consist of the Replica Set name and an additional identifier.
Passing environment information: identifying a specific pod
Before we look at the different ways that we can affect replicas, let’s set up our deployment so that we can see what pod we’re actually hitting with a particular request.  To do that, the image we’ve been using displays the pod name when it outputs:
<?php
$limit = $_GET['limit'];
if (!isset($limit)) $limit = 250;
for ($i = 0; $i < $limit; $i++){
    $d = tan(atan(tan(atan(tan(atan(tan(atan(tan(atan(123456789.123456789))))))))));
}
echo "Pod ".$_SERVER['POD_NAME']." has finished!\n";
?>
As you can see, we’re displaying an environment variable, POD_NAME.  Since each container is essentially its own server, this will display the name of the pod when we execute the PHP.

Now we just have to pass that information to the pod.

We do that through the use of the Kubernetes Downward API, which lets us pass environment variables into the containers:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
 name: soaktest
spec:
 replicas: 3
 template:
   metadata:
     labels:
       app: soaktest
   spec:
     containers:
      - name: soaktest
        image: nickchase/soaktest
        ports:
        - containerPort: 80
        env:
        - name: POD_NAME
         valueFrom:
           fieldRef:
             fieldPath: metadata.name
As you can see, we’re passing an environment variable and assigning it a value from the Deployment’s metadata.  (You can find more information on metadata here.)

So let’s go ahead and clean up the Deployment we created earlier…
# kubectl delete deployment soaktest
deployment "soaktest" deleted

# kubectl get pods
… and recreate it with the new definition:
# kubectl create -f deployment.yaml
deployment "soaktest" created
Next let’s go ahead and expose the pods to outside network requests so we can call the nginx server that is inside the containers:
# kubectl expose deployment soaktest --port=80 --target-port=80 --type=NodePort
service "soaktest" exposed
Now let’s describe the services we just created so we can find out what port the Deployment is listening on:
# kubectl describe services soaktest
Name:                   soaktest
Namespace:              default
Labels:                 app=soaktest
Selector:               app=soaktest
Type:                   NodePort
IP:                     11.1.32.105
Port:                   <unset> 80/TCP
NodePort:               <unset> 30800/TCP
Endpoints:              10.200.18.2:80,10.200.18.3:80,10.200.18.4:80 + 2 more…
Session Affinity:       None
No events.
As you can see, the NodePort is 30800 in this case; in your case it will be different, so make sure to check.  That means that each of the servers involved is listening on port 30800, and requests are being forwarded to port 80 of the containers.  That means we can call the PHP script with:
http://[HOST_NAME OR HOST_IP]:[PROVIDED PORT]
In my case, I’ve mapped the IPs of my Kubernetes hosts to hostnames to make my life easier, and the PHP file is the default for nginx, so I can simply call:
# curl http://kube-2:30800
Pod soaktest-3869910569-xnfme has finished!
So as you can see, this time the request was served by pod soaktest-3869910569-xnfme.
Recovering from crashes: Creating a fixed number of replicas
Now that we know everything is running, let’s take a look at some replication use cases.

The first thing we think of when it comes to replication is recovering from crashes. If there are 5 (or 50, or 500) copies of an application running, and one or more crashes, it’s not a catastrophe.  Kubernetes improves the situation further by ensuring that if a pod goes down, it’s replaced.

Let’s see this in action.  Start by refreshing our memory about the pods we’ve got running:
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-qqwqc   1/1       Running   0          11m
soaktest-3869910569-qu8k7   1/1       Running   0          11m
soaktest-3869910569-uzjxu   1/1       Running   0          11m
soaktest-3869910569-x6vmp   1/1       Running   0          11m
soaktest-3869910569-xnfme   1/1       Running   0          11m
If we repeatedly call the Deployment, we can see that we get different pods on a random basis:
# curl http://kube-2:30800
Pod soaktest-3869910569-xnfme has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-x6vmp has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-uzjxu has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-x6vmp has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-uzjxu has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-qu8k7 has finished!
To simulate a pod crashing, let’s go ahead and delete one:
# kubectl delete pod soaktest-3869910569-x6vmp
pod "soaktest-3869910569-x6vmp" deleted

# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-516kx   1/1       Running   0          18s
soaktest-3869910569-qqwqc   1/1       Running   0          27m
soaktest-3869910569-qu8k7   1/1       Running   0          27m
soaktest-3869910569-uzjxu   1/1       Running   0          27m
soaktest-3869910569-xnfme   1/1       Running   0          27m
As you can see, pod *x6vmp is gone, and it’s been replaced by *516kx.  (You can easily find the new pod by looking at the AGE column.)

If we once again call the Deployment, we can (eventually) see the new pod:
# curl http://kube-2:30800
Pod soaktest-3869910569-516kx has finished!
Now let’s look at changing the number of pods.
Scaling up or down: Manually changing the number of replicas
One common task is to scale up a Deployment in response to additional load. Kubernetes has autoscaling, but we’ll talk about that in another article.  For now, let’s look at how to do this task manually.

The most straightforward way is to simply use the scale command:
# kubectl scale --replicas=7 deployment/soaktest
deployment "soaktest" scaled

# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-2w8i6   1/1       Running   0          6s
soaktest-3869910569-516kx   1/1       Running   0          11m
soaktest-3869910569-qqwqc   1/1       Running   0          39m
soaktest-3869910569-qu8k7   1/1       Running   0          39m
soaktest-3869910569-uzjxu   1/1       Running   0          39m
soaktest-3869910569-xnfme   1/1       Running   0          39m
soaktest-3869910569-z4rx9   1/1       Running   0          6s
In this case, we specify a new number of replicas, and Kubernetes adds enough to bring it to the desired level, as you can see.

One thing to keep in mind is that Kubernetes isn’t going to scale the Deployment down to be below the level at which you first started it up.  For example, if we try to scale back down to 4…
# kubectl scale --replicas=4 -f deployment.yaml
deployment "soaktest" scaled

# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-l5wx8   1/1       Running   0          11s
soaktest-3869910569-qqwqc   1/1       Running   0          40m
soaktest-3869910569-qu8k7   1/1       Running   0          40m
soaktest-3869910569-uzjxu   1/1       Running   0          40m
soaktest-3869910569-xnfme   1/1       Running   0          40m
… Kubernetes only brings us back down to 5, because that’s what was specified by the original deployment.
Deploying a new version: Replacing replicas by changing their label
Another way you can use Deployments is to make use of the selector.  In other words, if a Deployment controls all the pods with a tier value of dev, changing a pod’s tier label to prod will remove it from the Deployment’s sphere of influence.

This mechanism enables you to selectively replace individual pods. For example, you might move pods from a dev environment to a production environment, or you might do a manual rolling update, updating the image, then removing some fraction of pods from the Deployment; when they’re replaced, it will be with the new image. If you’re happy with the changes, you can then replace the rest of the pods.

Let’s see this in action.  As you recall, this is our Deployment:
# kubectl describe deployment soaktest
Name:                   soaktest
Namespace:              default
CreationTimestamp:      Sun, 05 Mar 2017 19:31:04 +0000
Labels:                 app=soaktest
Selector:               app=soaktest
Replicas:               3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:         <none>
NewReplicaSet:          soaktest-3869910569 (3/3 replicas created)
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type              Reason                  Message
 ———     ——–        —–   —-                            ————-   ——–  ——                  ——-
 50s           50s             1       {deployment-controller }                        Normal            ScalingReplicaSet       Scaled up replica set soaktest-3869910569 to 3
And these are our pods:
# kubectl describe replicaset soaktest-3869910569
Name:           soaktest-3869910569
Namespace:      default
Image(s):       nickchase/soaktest
Selector:       app=soaktest,pod-template-hash=3869910569
Labels:         app=soaktest
               pod-template-hash=3869910569
Replicas:       5 current / 5 desired
Pods Status:    5 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type              Reason                  Message
 ———     ——–        —–   —-                            ————-   ——–  ——                  ——-
 2m            2m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-0577c
 2m            2m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-wje85
 2m            2m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-xuhwl
 1m            1m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-8cbo2
 1m            1m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-pwlm4
We can also get a list of pods by label:
# kubectl get pods -l app=soaktest
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-0577c   1/1       Running   0          7m
soaktest-3869910569-8cbo2   1/1       Running   0          6m
soaktest-3869910569-pwlm4   1/1       Running   0          6m
soaktest-3869910569-wje85   1/1       Running   0          7m
soaktest-3869910569-xuhwl   1/1       Running   0          7m
So those are our original soaktest pods; what if we wanted to add a new label?  We can do that on the command line:
# kubectl label pods soaktest-3869910569-xuhwl experimental=true
pod "soaktest-3869910569-xuhwl" labeled

# kubectl get pods -l experimental=true
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-xuhwl   1/1       Running   0          14m
So now we have one experimental pod.  But since the experimental label has nothing to do with the selector for the Deployment, it doesn’t affect anything.

So what if we change the value of the app label, which the Deployment is looking at?
# kubectl label pods soaktest-3869910569-wje85 app=notsoaktest --overwrite
pod "soaktest-3869910569-wje85" labeled
In this case, we need to use the --overwrite flag because the app label already exists. Now let’s look at the existing pods.
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-0577c   1/1       Running   0          17m
soaktest-3869910569-4cedq   1/1       Running   0          4s
soaktest-3869910569-8cbo2   1/1       Running   0          16m
soaktest-3869910569-pwlm4   1/1       Running   0          16m
soaktest-3869910569-wje85   1/1       Running   0          17m
soaktest-3869910569-xuhwl   1/1       Running   0          17m
As you can see, we now have six pods instead of five, with a new pod having been created to replace *wje85, which was removed from the deployment. We can see the changes by requesting pods by label:
# kubectl get pods -l app=soaktest
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-0577c   1/1       Running   0          17m
soaktest-3869910569-4cedq   1/1       Running   0          20s
soaktest-3869910569-8cbo2   1/1       Running   0          16m
soaktest-3869910569-pwlm4   1/1       Running   0          16m
soaktest-3869910569-xuhwl   1/1       Running   0          17m
Now, there is one wrinkle that you have to take into account: because we’ve removed this pod from the Deployment, the Deployment no longer manages it.  So if we were to delete the Deployment…
# kubectl delete deployment soaktest
deployment "soaktest" deleted
The pod remains:
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-wje85   1/1       Running   0          19m
You can also easily replace all of the pods in a Deployment using the --all flag, as in:
# kubectl label pods --all app=notsoaktesteither --overwrite
But remember that you’ll have to delete them all manually!
Conclusion
Replication is a large part of Kubernetes’ purpose in life, so it’s no surprise that we’ve just scratched the surface of what it can do, and how to use it. It is useful for reliability purposes, for scalability, and even as a basis for your architecture.

What do you anticipate using replication for, and what would you like to know more about? Let us know in the comments!
The post Kubernetes Replication Controller, Replica Set and Deployments: Understanding replication options appeared first on Mirantis | Pure Play Open Cloud.
Quelle: Mirantis

Kubernetes Replication Controller, Replica Set and Deployments: Understanding replication options

The post Kubernetes Replication Controller, Replica Set and Deployments: Understanding replication options appeared first on Mirantis | Pure Play Open Cloud.
As a container management tool, Kubernetes was designed to orchestrate multiple containers and replication, and in fact there are currently several ways to do it. In this article, we&;ll look at three options: Replication Controllers, Replica Sets, and Deployments.
What is Kubernetes replication for?
Before we go into how you would do replication, let&8217;s talk about why.  Typically you would want to replicate your containers (and thereby your applications) for several reasons, including:

Reliability: By having multiple versions of an application, you prevent problems if one or more fails.  This is particularly true if the system replaces any containers that fail.
Load balancing: Having multiple versions of a container enables you to easily send traffic to different instances to prevent overloading of a single instance or node. This is something that Kubernetes does out of the box, making it extremely convenient.
Scaling: When load does become too much for the number of existing instances, Kubernetes enables you to easily scale up your application, adding additional instances as needed.

Replication is appropriate for numerous use cases, including:

Microservices-based applications: In these cases, multiple small applications provide very specific functionality.
Cloud native applications: Because cloud-native applications are based on the theory that any component can fail at any time, replication is a perfect environment for implementing them, as multiple instances are baked into the architecture.
Mobile applications: Mobile applications can often be architected so that the mobile client interacts with an isolated version of the server application.

Kubernetes has multiple ways in which you can implement replication.
Types of Kubernetes replication
In this article, we&8217;ll discuss three different forms of replication: the Replication Controller, Replica Sets, and Deployments.
Replication Controller
The Replication Controller is the original form of replication in Kubernetes.  It&8217;s being replaced by Replica Sets, but it&8217;s still in wide use, so it&8217;s worth understanding what it is and how it works.

A Replication Controller is a structure that enables you to easily create multiple pods, then make sure that that number of pods always exists. If a pod does crash, the Replication Controller replaces it.

Replication Controllers also provide other benefits, such as the ability to scale the number of pods, and to update or delete multiple pods with a single command.

You can create a Replication Controller with an imperative command, or declaratively, from a file.  For example, create a new file called rc.yaml and add the following text:
apiVersion: v1
kind: ReplicationController
metadata:
 name: soaktestrc
spec:
 replicas: 3
 selector:
   app: soaktestrc
 template:
   metadata:
     name: soaktestrc
     labels:
       app: soaktestrc
   spec:
     containers:
     – name: soaktestrc
       image: nickchase/soaktest
       ports:
       – containerPort: 80
Most of this structure should look familiar from our discussion of Deployments; we&8217;ve got the name of the actual Replication Controller (soaktestrc) and we&8217;re designating that we should have 3 replicas, each of which are defined by the template.  The selector defines how we know which pods belong to this Replication Controller.

Now tell Kubernetes to create the Replication Controller based on that file:
# kubectl create -f rc.yaml
replicationcontroller “soaktestrc” created
Let&8217;s take a look at what we have using the describe command:
# kubectl describe rc soaktestrc
Name:           soaktestrc
Namespace:      default
Image(s):       nickchase/soaktest
Selector:       app=soaktestrc
Labels:         app=soaktestrc
Replicas:       3 current / 3 desired
Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type   Reason                   Message
 ———     ——–        —–   —-                            ————-   ————–                  ——-
 1m            1m              1       {replication-controller }                       Normal SuccessfulCreate Created pod: soaktestrc-g5snq
 1m            1m              1       {replication-controller }                       Normal SuccessfulCreate Created pod: soaktestrc-cws05
 1m            1m              1       {replication-controller }                       Normal SuccessfulCreate Created pod: soaktestrc-ro2bl
As you can see, we&8217;ve got the Replication Controller, and there are 3 replicas, of the 3 that we wanted.  All 3 of them are currently running.  You can also see the individual pods listed underneath, along with their names.  If you ask Kubernetes to show you the pods, you can see those same names show up:
# kubectl get pods
NAME               READY     STATUS    RESTARTS   AGE
soaktestrc-cws05   1/1       Running   0          3m
soaktestrc-g5snq   1/1       Running   0          3m
soaktestrc-ro2bl   1/1       Running   0          3m
Next we&8217;ll look at Replica Sets, but first let&8217;s clean up:
# kubectl delete rc soaktestrc
replicationcontroller “soaktestrc” deleted

# kubectl get pods
As you can see, when you delete the Replication Controller, you also delete all of the pods that it created.
Replica Sets
Replica Sets are a sort of hybrid, in that they are in some ways more powerful than Replication Controllers, and in others they are less powerful.

Replica Sets are declared in essentially the same way as Replication Controllers, except that they have more options for the selector.  For example, we could create a Replica Set like this:
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
 name: soaktestrs
spec:
 replicas: 3
 selector:
   matchLabels:
     app: soaktestrs
 template:
   metadata:
     labels:
       app: soaktestrs
  environment: dev
   spec:
     containers:
     – name: soaktestrs
       image: nickchase/soaktest
       ports:
       – containerPort: 80
In this case, it&8217;s more or less the same as when we were creating the Replication Controller, except we&8217;re using matchLabels instead of label.  But we could just as easily have said:

spec:
 replicas: 3
 selector:
    matchExpressions:
     – {key: app, operator: In, values: [soaktestrs, soaktestrs, soaktest]}
     – {key: teir, operator: NotIn, values: [production]}
 template:
   metadata:

In this case, we&8217;re looking at two different conditions:

The app label must be soaktestrc, soaktestrs, or soaktest
The tier label (if it exists) must not be production

Let&8217;s go ahead and create the Replica Set and get a look at it:
# kubectl create -f replicaset.yaml
replicaset “soaktestrs” created

# kubectl describe rs soaktestrs
Name:           soaktestrs
Namespace:      default
Image(s):       nickchase/soaktest
Selector:       app in (soaktest,soaktestrs),teir notin (production)
Labels:         app=soaktestrs
Replicas:       3 current / 3 desired
Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type    Reason                   Message
 ———     ——–        —–   —-                            ————-   ————–                   ——-
 1m            1m              1       {replicaset-controller }                        Normal  SuccessfulCreate Created pod: soaktestrs-it2hf
 1m            1m              1       {replicaset-controller }                       Normal  SuccessfulCreate Created pod: soaktestrs-kimmm
 1m            1m              1       {replicaset-controller }                        Normal  SuccessfulCreate Created pod: soaktestrs-8i4ra

# kubectl get pods
NAME               READY     STATUS    RESTARTS   AGE
soaktestrs-8i4ra   1/1       Running   0          1m
soaktestrs-it2hf   1/1       Running   0          1m
soaktestrs-kimmm   1/1       Running   0          1m
As you can see, the output is pretty much the same as for a Replication Controller (except for the selector), and for most intents and purposes, they are similar.  The major difference is that the rolling-update command works with Replication Controllers, but won&8217;t work with a Replica Set.  This is because Replica Sets are meant to be used as the backend for Deployments.

Let&8217;s clean up before we move on.
# kubectl delete rs soaktestrs
replicaset “soaktestrs” deleted

# kubectl get pods
Again, the pods that were created are deleted when we delete the Replica Set.
Deployments
Deployments are intended to replace Replication Controllers.  They provide the same replication functions (through Replica Sets) and also the ability to rollout changes and roll them back if necessary.

Let&8217;s create a simple Deployment using the same image we&8217;ve been using.  First create a new file, deployment.yaml, and add the following:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
 name: soaktest
spec:
 replicas: 5
 template:
   metadata:
     labels:
       app: soaktest
   spec:
     containers:
     – name: soaktest
       image: nickchase/soaktest
       ports:
       – containerPort: 80
Now go ahead and create the Deployment:
# kubectl create -f deployment.yaml
deployment “soaktest” created
Now let&8217;s go ahead and describe the Deployment:
# kubectl describe deployment soaktest
Name:                   soaktest
Namespace:              default
CreationTimestamp:      Sun, 05 Mar 2017 16:21:19 +0000
Labels:                 app=soaktest
Selector:               app=soaktest
Replicas:               5 updated | 5 total | 5 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:         <none>
NewReplicaSet:          soaktest-3914185155 (5/5 replicas created)
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type    Reason                   Message
 ———     ——–        —–   —-                            ————-   ————–                   ——-
 38s           38s             1       {deployment-controller }                        Normal  ScalingReplicaSet        Scaled up replica set soaktest-3914185155 to 3
 36s           36s             1       {deployment-controller }                        Normal  ScalingReplicaSet        Scaled up replica set soaktest-3914185155 to 5
As you can see, rather than listing the individual pods, Kubernetes shows us the Replica Set.  Notice that the name of the Replica Set is the Deployment name and a hash value.

A complete discussion of updates is out of scope for this article (we'll cover it in the future), but here are a couple of interesting things to note:

The StrategyType is RollingUpdate. This value can also be set to Recreate.
By default we have a minReadySeconds value of 0; we can change that value if we want pods to be up and running for a certain amount of time (say, to load resources) before they're truly considered "ready".
The RollingUpdateStrategy shows that we have a limit of 1 maxUnavailable, meaning that when we're updating the Deployment, we can have up to 1 missing pod before it's replaced, and 1 maxSurge, meaning we can have one extra pod as we scale the new pods up. (These fields can also be set explicitly in the Deployment spec; see the sketch after this list.)

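For reference, here's a minimal sketch of how those knobs could be set explicitly in the Deployment spec; the particular values are just examples, not recommendations, and the rest of the file (the template: section) would stay exactly as above:
spec:
 replicas: 5
 minReadySeconds: 10
 strategy:
   type: RollingUpdate
   rollingUpdate:
     maxUnavailable: 1
     maxSurge: 1
 # ... the template: section continues unchanged from the file above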
As you can see, the Deployment is backed, in this case, by Replica Set soaktest-3914185155. If we go ahead and look at the list of actual pods…
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3914185155-7gyja   1/1       Running   0          2m
soaktest-3914185155-lrm20   1/1       Running   0          2m
soaktest-3914185155-o28px   1/1       Running   0          2m
soaktest-3914185155-ojzn8   1/1       Running   0          2m
soaktest-3914185155-r2pt7   1/1       Running   0          2m
… you can see that their names consist of the Replica Set name and an additional identifier.
Passing environment information: identifying a specific pod
Before we look at the different ways that we can affect replicas, let's set up our Deployment so that we can see which pod is actually serving a particular request.  To do that, the image we've been using displays the pod name in its output:
<?php
// Read the optional ?limit= parameter, defaulting to 250 iterations
$limit = isset($_GET['limit']) ? $_GET['limit'] : 250;
// Burn a little CPU so each request does measurable work
for ($i = 0; $i < $limit; $i++){
    $d = tan(atan(tan(atan(tan(atan(tan(atan(tan(atan(123456789.123456789))))))))));
}
// POD_NAME is an environment variable we pass in via the Downward API (see below)
echo "Pod ".$_SERVER['POD_NAME']." has finished!\n";
?>
As you can see, we're displaying an environment variable, POD_NAME.  Since each container is essentially its own server, this will display the name of the pod when we execute the PHP.

Now we just have to pass that information to the pod.

We do that through the use of the Kubernetes Downward API, which lets us pass environment variables into the containers:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
 name: soaktest
spec:
 replicas: 3
 template:
   metadata:
     labels:
       app: soaktest
   spec:
     containers:
      - name: soaktest
        image: nickchase/soaktest
        ports:
        - containerPort: 80
        env:
        - name: POD_NAME
         valueFrom:
           fieldRef:
             fieldPath: metadata.name
As you can see, we're passing an environment variable and assigning it a value from the Deployment's metadata.  (You can find more information on metadata here.)

So let's go ahead and clean up the Deployment we created earlier…
# kubectl delete deployment soaktest
deployment "soaktest" deleted

# kubectl get pods
… and recreate it with the new definition:
# kubectl create -f deployment.yaml
deployment "soaktest" created
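As a quick, optional sanity check (and assuming the image includes the printenv utility), you can confirm that the variable actually made it into a container by exec'ing into one of the new pods; substitute whatever pod name kubectl get pods shows you:
# kubectl exec <pod-name> -- printenv POD_NAME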
Next let's go ahead and expose the pods to outside network requests so we can call the nginx server that is inside the containers:
# kubectl expose deployment soaktest --port=80 --target-port=80 --type=NodePort
service "soaktest" exposed
Now let's describe the service we just created so we can find out what port the Deployment is listening on:
# kubectl describe services soaktest
Name:                   soaktest
Namespace:              default
Labels:                 app=soaktest
Selector:               app=soaktest
Type:                   NodePort
IP:                     11.1.32.105
Port:                   <unset> 80/TCP
NodePort:               <unset> 30800/TCP
Endpoints:              10.200.18.2:80,10.200.18.3:80,10.200.18.4:80 + 2 more…
Session Affinity:       None
No events.
As you can see, the NodePort is 30800 in this case; in your case it will be different, so make sure to check.  That means that each of the servers involved is listening on port 30800, and requests are being forwarded to port 80 of the containers.  So we can call the PHP script at:
http://[HOST_NAME OR HOST_IP]:[PROVIDED PORT]
In my case, I've mapped the IPs of my Kubernetes hosts to hostnames to make my life easier, and the PHP file is the default for nginx, so I can simply call:
# curl http://kube-2:30800
Pod soaktest-3869910569-xnfme has finished!
So as you can see, this time the request was served by pod soaktest-3869910569-xnfme.
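Incidentally, if you'd rather not pick the NodePort out of the describe output by eye, one convenient option is to ask kubectl for it directly with a jsonpath query; for the service above this would print 30800:
# kubectl get service soaktest -o jsonpath='{.spec.ports[0].nodePort}'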
Recovering from crashes: Creating a fixed number of replicas
Now that we know everything is running, let's take a look at some replication use cases.

The first thing we think of when it comes to replication is recovering from crashes. If there are 5 (or 50, or 500) copies of an application running, and one or more crashes, it's not a catastrophe.  Kubernetes improves the situation further by ensuring that if a pod goes down, it's replaced.

Let's see this in action.  Start by refreshing our memory about the pods we've got running:
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-qqwqc   1/1       Running   0          11m
soaktest-3869910569-qu8k7   1/1       Running   0          11m
soaktest-3869910569-uzjxu   1/1       Running   0          11m
soaktest-3869910569-x6vmp   1/1       Running   0          11m
soaktest-3869910569-xnfme   1/1       Running   0          11m
If we repeatedly call the Deployment, we can see that we get different pods on a random basis:
# curl http://kube-2:30800
Pod soaktest-3869910569-xnfme has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-x6vmp has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-uzjxu has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-x6vmp has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-uzjxu has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-qu8k7 has finished!
To simulate a pod crashing, let's go ahead and delete one:
# kubectl delete pod soaktest-3869910569-x6vmp
pod "soaktest-3869910569-x6vmp" deleted

# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-516kx   1/1       Running   0          18s
soaktest-3869910569-qqwqc   1/1       Running   0          27m
soaktest-3869910569-qu8k7   1/1       Running   0          27m
soaktest-3869910569-uzjxu   1/1       Running   0          27m
soaktest-3869910569-xnfme   1/1       Running   0          27m
As you can see, pod *x6vmp is gone, and it's been replaced by *516kx.  (You can easily find the new pod by looking at the AGE column.)

If we once again call the Deployment, we can (eventually) see the new pod:
# curl http://kube-2:30800
Pod soaktest-3869910569-516kx has finished!
Now let's look at changing the number of pods.
Scaling up or down: Manually changing the number of replicas
One common task is to scale up a Deployment in response to additional load. Kubernetes has autoscaling, but we'll talk about that in another article.  For now, let's look at how to do this task manually.

The most straightforward way is to simply use the scale command:
# kubectl scale --replicas=7 deployment/soaktest
deployment "soaktest" scaled

# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-2w8i6   1/1       Running   0          6s
soaktest-3869910569-516kx   1/1       Running   0          11m
soaktest-3869910569-qqwqc   1/1       Running   0          39m
soaktest-3869910569-qu8k7   1/1       Running   0          39m
soaktest-3869910569-uzjxu   1/1       Running   0          39m
soaktest-3869910569-xnfme   1/1       Running   0          39m
soaktest-3869910569-z4rx9   1/1       Running   0          6s
In this case, we specify a new number of replicas, and Kubernetes adds enough pods to bring the Deployment up to the desired level, as you can see.

One thing to keep in mind is that Kubernetes isn't going to scale the Deployment down to be below the level at which you first started it up.  For example, if we try to scale back down to 4…
# kubectl scale --replicas=4 -f deployment.yaml
deployment "soaktest" scaled

# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-l5wx8   1/1       Running   0          11s
soaktest-3869910569-qqwqc   1/1       Running   0          40m
soaktest-3869910569-qu8k7   1/1       Running   0          40m
soaktest-3869910569-uzjxu   1/1       Running   0          40m
soaktest-3869910569-xnfme   1/1       Running   0          40m
… Kubernetes only brings us back down to 5, because that's what was specified by the original Deployment.
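If you want to see what the Deployment itself currently considers the desired count, rather than counting pods by hand, you can always ask for the Deployment object directly:
# kubectl get deployment soaktest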
Deploying a new version: Replacing replicas by changing their label
Another way you can use Deployments is to make use of the selector.  In other words, if a Deployment controls all the pods with a tier value of dev, changing a pod's tier label to prod will remove it from the Deployment's sphere of influence.

This mechanism enables you to selectively replace individual pods. For example, you might move pods from a dev environment to a production environment, or you might do a manual rolling update: update the image, then remove some fraction of pods from the Deployment; when they're replaced, it will be with the new image. If you're happy with the changes, you can then replace the rest of the pods.

Let's see this in action.  As you recall, this is our Deployment:
# kubectl describe deployment soaktest
Name:                   soaktest
Namespace:              default
CreationTimestamp:      Sun, 05 Mar 2017 19:31:04 +0000
Labels:                 app=soaktest
Selector:               app=soaktest
Replicas:               3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:         <none>
NewReplicaSet:          soaktest-3869910569 (3/3 replicas created)
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type              Reason                  Message
 ---------     --------        -----   ----                            -------------   --------  ------                  -------
 50s           50s             1       {deployment-controller }                        Normal            ScalingReplicaSet       Scaled up replica set soaktest-3869910569 to 3
And these are our pods:
# kubectl describe replicaset soaktest-3869910569
Name:           soaktest-3869910569
Namespace:      default
Image(s):       nickchase/soaktest
Selector:       app=soaktest,pod-template-hash=3869910569
Labels:         app=soaktest
               pod-template-hash=3869910569
Replicas:       5 current / 5 desired
Pods Status:    5 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type              Reason                  Message
 ---------     --------        -----   ----                            -------------   --------  ------                  -------
 2m            2m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-0577c
 2m            2m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-wje85
 2m            2m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-xuhwl
 1m            1m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-8cbo2
 1m            1m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-pwlm4
We can also get a list of pods by label:
# kubectl get pods -l app=soaktest
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-0577c   1/1       Running   0          7m
soaktest-3869910569-8cbo2   1/1       Running   0          6m
soaktest-3869910569-pwlm4   1/1       Running   0          6m
soaktest-3869910569-wje85   1/1       Running   0          7m
soaktest-3869910569-xuhwl   1/1       Running   0          7m
So those are our original soaktest pods; what if we wanted to add a new label?  We can do that on the command line:
# kubectl label pods soaktest-3869910569-xuhwl experimental=true
pod "soaktest-3869910569-xuhwl" labeled

# kubectl get pods -l experimental=true
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-xuhwl   1/1       Running   0          14m
So now we have one experimental pod.  But since the experimental label has nothing to do with the selector for the Deployment, it doesn't affect anything.
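As an aside, kubectl uses a trailing dash to remove a label again, so if you wanted to undo this you could run something like:
# kubectl label pods soaktest-3869910569-xuhwl experimental-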

So what if we change the value of the app label, which the Deployment is looking at?
# kubectl label pods soaktest-3869910569-wje85 app=notsoaktest --overwrite
pod "soaktest-3869910569-wje85" labeled
In this case, we need to use the --overwrite flag because the app label already exists. Now let's look at the existing pods.
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-0577c   1/1       Running   0          17m
soaktest-3869910569-4cedq   1/1       Running   0          4s
soaktest-3869910569-8cbo2   1/1       Running   0          16m
soaktest-3869910569-pwlm4   1/1       Running   0          16m
soaktest-3869910569-wje85   1/1       Running   0          17m
soaktest-3869910569-xuhwl   1/1       Running   0          17m
As you can see, we now have six pods instead of five, with a new pod having been created to replace *wje85, which was removed from the deployment. We can see the changes by requesting pods by label:
# kubectl get pods -l app=soaktest
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-0577c   1/1       Running   0          17m
soaktest-3869910569-4cedq   1/1       Running   0          20s
soaktest-3869910569-8cbo2   1/1       Running   0          16m
soaktest-3869910569-pwlm4   1/1       Running   0          16m
soaktest-3869910569-xuhwl   1/1       Running   0          17m
Now, there is one wrinkle that you have to take into account; because we've removed this pod from the Deployment, the Deployment no longer manages it.  So if we were to delete the Deployment…
# kubectl delete deployment soaktest
deployment "soaktest" deleted
The pod remains:
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-wje85   1/1       Running   0          19m
You can also easily replace all of the pods in a Deployment using the --all flag, as in:
# kubectl label pods --all app=notsoaktesteither --overwrite
But remember that you'll have to delete them all manually!
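For the record, cleaning up such an orphaned pod is just an ordinary pod deletion; for the leftover pod above, that would look like:
# kubectl delete pod soaktest-3869910569-wje85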
Conclusion
Replication is a large part of Kubernetes' purpose in life, so it's no surprise that we've just scratched the surface of what it can do, and how to use it. It is useful for reliability purposes, for scalability, and even as a basis for your architecture.

What do you anticipate using replication for, and what would you like to know more about? Let us know in the comments!
Quelle: Mirantis

5 IBM professional certifications to know

It’s one thing to know; it’s another to be certified.
The IBM Professional Certification Program provides industry-recognized credentials focused on IBM technology and solutions relevant to IBM customers, business partners and employees. These credentials help demonstrate the knowledge and skills required to excel in a given area of information technology. Certification from IBM is available across a variety of technical areas including Cloud, Watson, IoT, Security and more. Visit the IBM Professional Certification Program to learn more.
As with previous years, attendees at InterConnect 2017 will have the opportunity to take IBM Professional Certification exams. Unlike previous years, however, InterConnect 2017 attendees have the opportunity to sit for an unlimited number of exams.
With a focus on cloud platforms and cognitive solutions, let’s take a look at five of the certification opportunities available at InterConnect 2017:
IBM certified solution advisor – Cloud reference architecture
This broad IBM Cloud certification suits a person who can clearly explain the benefits and underlying concepts of cloud computing. They can also demonstrate how IBM Cloud solution offerings can help customers realize these benefits.
IBM certified solution architect – Cloud platform solution
Professionals with the skills to design, plan and architect a cloud infrastructure should consider this certification. The cloud platform solution architect demonstrates the ability to evaluate customers’ current state and architect an IBM Cloud Infrastructure solution.
Certified application developer – Cloud platform
The certified application developer is a technical professional who understands concepts essential to the development of cloud applications. They have experience using the IBM Bluemix platform and are able to consume Bluemix services in an application.
Certified advanced application developer – Cloud platform
This newly released certificate focuses on technical professionals who understand advanced concepts essential to the development of cloud applications. They have demonstrated understanding of hybrid cloud best practices and can build applications that span multiple cloud and on-premises environments.
Certified application developer – Watson
The certified Watson application developer understands concepts essential to the development of applications using IBM Watson services on Bluemix. Experience using the Bluemix platform and Watson Developer Cloud are essential to passing this exam.
The IBM Professional Certification Program is linked to the IBM open badge program. Badges provide digital recognition for skills attained and offer a method of sharing credentials on social media. Badges are issued almost immediately after completion of an IBM Professional Certification.
All full-experience badge holders at InterConnect are eligible to take as many free exams from the entire IBM Certification exam portfolio as desired. No registration is necessary, and seating is on a first-come, first-served basis.
A complete listing of IBM tests is available online on the IBM Professional Certification Program's website. See you in the exam hall.
Quelle: Thoughts on Cloud

Using Software Factory to manage Red Hat OpenStack Platform lifecycle

by Nicolas Hicher, Senior Software Engineer – Continuous Integration and Delivery
Software-Factory
Software-Factory is a collection of services that provides a powerful platform to build software. It enables the same workflow used to develop OpenStack: Gerrit for code reviews, Zuul/Nodepool/Jenkins as a CI system, and Storyboard as the story and issue tracker. It also ensures a reproducible test environment with ephemeral Jenkins slaves.
In this video, Nicolas Hicher will demonstrate how to use Software-Factory to manage a Red Hat OpenStack Platform 9 lifecycle. We will do a deployment and an update on a virtual environment (within an OpenStack tenant).

Python-tripleo-helper
For this demo, we will do a deployment within an OpenStack tenant using python-tripleo-helper, a tool developed by the engineering team that builds DCI. With this tool, we can do a deployment within an OpenStack tenant using the same steps as a full deployment (boot servers via IPMI, discover nodes, run introspection, and deploy). We also patched python-tripleo-helper to add an update command to update OpenStack (changing parameters, not doing a major upgrade).
Workflow
The workflow is simple and robust:

Submit a review with the templates, the installation script and the test scripts. A CI job validates the templates.
When the review is approved, the gate jobs are executed (installation or update).
After the deployment/update is completed, the review is merged.

Deployment
For this demo, we will do a simple deployment (1 controller and 1 compute node) with Red Hat OpenStack Platform 9.0.
Limitations
Since we do the deployment in a virtual environment, we can't test some advanced features, especially for networking and storage. But other features of the deployed cloud can be validated using the appropriate environments.
Improvements
We plan to continue to improve this workflow to be able to:

Do a major upgrade from Red Hat OpenStack Platform (X to X+1).
Manage a bare metal deployment.
Improve the Ceph deployment to be able to use more than one object storage device (OSD).
Use smoke jobs like tempest to validate the deployment before merging the review.

Also, it should be possible to manage pre-production and production environments within a single git repository: the check job will do the tasks on pre-production, and after receiving a peer's validation, the same actions will be applied to production.
Quelle: RedHat Stack

6 Java developer highlights at InterConnect 2017

Java developers, listen up. By now, you may have heard about the sessions, labs, roundtables, activities and events being offered at IBM InterConnect 2017. With more than 2,000 sessions and 200 labs to choose from, it can be a daunting task to create your agenda for the conference. Luckily, we’ve done some of the work for you.
Here are six things that Java developers shouldn’t miss at InterConnect.
1. Hit the road with Code Rally
Code Rally is an open-source racing game where you play by programming a vehicle in Java—or Node.js if you prefer—to race around virtual tracks. Code Rally is an example of a microservice architecture, and each vehicle is its own self-contained microservice.  When deployed, each microservice works within our race simulation service to compete against other coders. Head over to the DevZone in the Concourse during the conference to give Code Rally a test drive.
2. DevZone
As a developer, the DevZone is the place to be. Located in the Concourse, you can hang out throughout the week with other developers to learn and share technical knowledge that will help you create the next generation of apps and services. While you’re at the DevZone, you can also talk to an IBM expert at an Ask Me Anything Expert Station, or learn a new skill in a short 20-minute Hello World Lab.
3. Session : The rise of microservices
Microservices are a hot topic in the world of software development. They help teams divide and conquer to solve problems faster and deliver more rapidly. In this session, RedMonk analyst and co-founder James Governor will discuss the rise of microservices with IBM Fellow and Cloud Platform CTO Jason McGee. James and Jason will explore the concept of microservices and how cloud has enabled their rise. They will cover the capabilities needed to be successful combined with real-world examples, lessons learned, and insights on how to get started from where you are today.
4. Open Tech Summit
Mobile, cloud, and big data are all trends that are changing the way we interact with people. Capturing the value from these interactions requires rapid innovation, interoperability and scalability enabled by an open approach. At the Open Tech Summit on Sunday, March 19th from 4:00 PM to 7:00 PM, leaders of the most game-changing open technology communities will share their perspective on the benefits of open technology. Come network and engage directly with experts across the industry.
5. Lab: Agile development using MicroProfile and IBM WebSphere Liberty
MicroProfile and Java EE 7 make developing and deploying microservice style applications quick and efficient. In this lab, you will learn how to use MicroProfile and Java EE 7’s  application development capabilities to create a microservice that uses CDI, JAX-RS, WebSockets, Concurrency Utilities for Java and a NoSQL database running on WebSphere Liberty.
6. Session : Building cloud-native microservices with Liberty and Node.js, a product development journey
In addition to talking about the benefits of developing applications as microservices, and showing you how to build them, IBM teams have also been building new microservice-based offerings. Head over to this session where I will discuss the latest IBM offerings with senior technical staff member Brian Pulito. We’ll cover how this was developed as a collection of cloud native microservices built on WebSphere Application Server and Node.js technologies. Learn about the tools, team structure, and development practices used when building the IBM Voice Gateway.
There will be no shortage of Java activity at the conference. You don’t want to miss this opportunity to train, network, and learn about developing with Java. Register for IBM InterConnect today.   
Quelle: Thoughts on Cloud

What is the best NFV Orchestration platform? A review of OSM, Open-O, CORD, and Cloudify

As Network Functions Virtualization (NFV) technology matures, multiple NFV orchestration solutions have emerged, and 2016 was a busy year. While some commercial products were already available on the market, multiple open source initiatives were also launched, with most delivering initial code releases, and others planning to roll out software artifacts later this year.
With so much going on, we thought we'd provide you with a technical overview of some of the various NFV orchestration options, so you can get a feel for what's right for you. In particular, we'll cover:

Open Source MANO (OSM)
OPEN-O
CORD
Gigaspaces Cloudify

In addition, multiple NFV projects have been funded under European Union R&D programs. Projects such as OpenBaton, T-NOVA/TeNor and SONATA have their codebases available in public repos, but industry support, involvement of external contributors and further sustainability might be challenging for these projects, so for now we'll consider them out of scope for this post, where we'll review and compare orchestration projects across the following areas:

General overview and current project state
Compliance with NFV MANO reference architecture
Software architecture
NSD definition approach
VIM and VNFM support
Capabilities to provision End to End service
Interaction with relevant standardization bodies and communities

General overview and current project state
We’ll start with a general overview of each project, along with, its ambitions, development approach, the involved community, and related information.
OSM
The Open Source MANO project was officially launched at Mobile World Congress (MWC) in 2016. Starting with several founding members, including Mirantis, Telefónica, BT, Canonical, Intel, RIFT.io, Telekom Austria Group and Telenor, the OSM community now includes 55 different organisations. The OSM project is hosted at ETSI facilities and targets delivering an open source management and orchestration (MANO) stack closely aligned with the ETSI NFV reference architecture.
OSM issued two releases, Rel 0 and Rel 1, during 2016. The most recent at the time of this writing, OSM Rel. 1, has been publicly available since October 2016, and can be downloaded from the official website. Project governance is managed via several groups, including the Technical Steering group responsible for OSM's technical aspects, the Leadership group, and the End User Advisory group. You can find more details about the OSM project at the official Wiki.
OPEN-O
The OPEN-O project is hosted by the Linux Foundation and was also formally announced at 2016 MWC. Initial project advocates were mostly Asian companies, such as Huawei, ZTE and China Mobile. Eventually, the project gained further support from Brocade, Ericsson, GigaSpaces, Intel and others.
The main project objective is to enable end-to-end service agility across multiple domains using a unified platform for NFV and SDN orchestration. The OPEN-O project delivered its first release in November 2016 and plans to roll out future releases on a 6-month cycle. Overall project governance is managed by the project Board, with technology-specific issues managed by the Technical Steering Committee. You can find more general details about the OPEN-O project at the project website.
CORD/XOS
Originally CORD (Central Office Re-architected as a Datacenter) was introduced as one of the use cases for the ONOS SDN Controller, but it grew up into a separate project under ON.Lab governance. (ON.Lab recently merged with the Open Networking Foundation.)
The ultimate project goal is to combine NFV, SDN and the elasticity of commodity clouds to bring datacenter economics and cloud agility to the Telco Central Office. The reference implementation of CORD combines commodity servers, white-box switches, and disaggregated access technologies with open source software to provide an extensible service delivery platform. CORD Rel.1 and Rel.2 integrate a number of open source projects, such as ONOS to manage SDN infrastructure, OpenStack to deploy NFV workloads, and XOS as a service orchestrator. To reflect the uniqueness of different use cases, CORD introduces a number of service profiles, such as Mobile (M-CORD), Residential (R-CORD), and Enterprise (E-CORD).  You can find more details about the CORD project at the official project website.
Gigaspaces Cloudify
Gigaspaces’ Cloudify is the open source TOSCA-based cloud orchestration software platform.  Originally introduced as a pure cloud orchestration solution (similar to OpenStack HEAT), the platform was further expanded to include NFV-related use cases, and the Cloudify Telecom Edition emerged.  
Considering its original platform purpose, Cloudify has an extensible architecture and can interact with multiple IaaS/PaaS providers such as AWS, OpenStack, Microsoft Azure and so on. Overall, Cloudify software is open source under the Apache 2 license and the source code is hosted in a public repository. While the Cloudify platform is open source and welcomes community contributions, the overall project roadmap is defined by Gigaspaces. You can find more details about the Cloudify platform at the official web site.
Compliance with ETSI NFV MANO reference architecture
At the time of this writing, a number of alternative and specific approaches, such as Lifecycle Service Orchestration (LSO) from the Metro Ethernet Forum, have emerged, but huge industry support and involvement has helped to promote ETSI NFV Management and Orchestration (MANO) as the de-facto reference NFV architecture. From this standpoint, NFV MANO provides comprehensive guidance for entities, reference points and workflows to be implemented by appropriate NFV platforms (fig. 1):

Figure 1 – ETSI NFV MANO reference architecture
OSM
As this project is hosted by ETSI, the OSM community tries to be compliant with the ETSI NFV MANO reference architecture, respecting appropriate IFA working group specifications. Key reference points, such as Or-Vnfm and Or-Vi might be identified within OSM components. The VNF and Network Service (NS) catalog are explicitly present in an OSM service orchestrator (SO) component. Meanwhile, a lot of further development efforts are planned to reach feature parity with currently specified features and interfaces.  
OPEN-O
While the OPEN-O project in general has no objective to be compliant with NFV MANO, the NFVO component of OPEN-O is aligned with an ETSI reference model, and all key MANO elements, such as VNFM and VIM might be found in an NFVO architecture. Moreover, the scope of the OPEN-O project goes beyond just NFV orchestration, and as a result goes beyond the scope identified by the ETSI NFV MANO reference architecture. One important piece of this project relates to SDN-based networking services provisioning and orchestration, which might be further used either in conjunction with NFV services or as a standalone feature.
CORD
Since its invention, CORD has defined its own reference architecture and cross-component communication logic. The reference CORD implementation is very OpenFlow-centric around ONOS, the orchestration component (XOS), and whitebox hardware. Technically, most of the CORD building blocks might be mapped to MANO-defined NFVI, VIM and VNFM, but this is incidental; the overall architectural approach defined by ETSI MANO, as well as the appropriate reference points and interfaces were not considered in scope by the CORD community. Similar to OPEN-O, the scope of this project goes beyond just NFV services provisioning. Instead, NFV services provisioning is considered as one of the several possible use cases for the CORD platform.
Gigaspaces Cloudify
The original focus of the Cloudify platform was orchestration of application deployment in a cloud. Later, when the NFV use case emerged, the Telecom Edition of the Cloudify platform was delivered. This platform combines both NFVO and generic VNFM components of the MANO defined entities (fig. 2).

Figure 2 – Cloudify in relation to the NFV MANO reference architecture
By its very nature, Cloudify Blueprints might be considered as the NS and VNF catalog entities defined by MANO. Meanwhile, some interfaces and actions specified by the NFV IFA subgroup are not present or considered as out of scope for the Cloudify platform.  From this standpoint, you could say that Cloudify is aligned with the MANO reference architecture but not fully compliant.
Software architecture and components  
As you might expect, all NFV Orchestration solutions are complex integrated software platforms combined from multiple components.
OSM
The Open Source MANO (OSM) project consists of 3 basic components (fig. 3):

Figure 3 – OSM project architecture

The Service Orchestrator (SO), responsible for end-to-end service orchestration and provisioning. The SO stores the VNF definitions and NS catalogs, manages workflow of the service deployment and can query the status of already deployed services. OSM integrates the rift.io orchestration engine as an SO.
The Resource Orchestrator (RO) is used to provision services over a particular IaaS provider in a given location. At the time of this writing, the RO component is capable of deploying networking services over OpenStack, VMware, and OpenVIM.  The SO and RO components can be jointly mapped to the NFVO entity in the ETSI MANO architecture.
The VNF Configuration and Abstraction (VCA) module performs the initial VNF configuration using Juju Charms. Considering this purpose, the VCA module can be considered as a generic VNFM with a limited feature set.

Additionally, OSM hosts the OpenVIM project, which is a lightweight VIM layer implementation suitable for small NFV deployments as an alternative to heavyweight OpenStack or VMware VIMs.
Most of the software components are developed in python, while SO, as a user facing entity, heavily relies on a JavaScript and NodeJS framework.
OPEN-O
From a general standpoint, the complete OPEN-O software architecture can be split into 5 component groups (Fig.4):

Figure 4 – OPEN-O project software architecture

Common service: Consists of shared services used by all other components.
Common TOSCA:  Provides TOSCA-related features such as NSD catalog management, NSD definition parsing, workflow execution, and so on; this component is based on the ARIA TOSCA project.
Global Service Orchestrator (GSO): As the name suggests, this group provides overall lifecycle management of the end-to-end service.
SDN Orchestrator (SDN-O): Provides abstraction and lifecycle management of SDN services; an essential piece of this block are the SDN drivers, which provide device-specific modules for communication with a particular device or SDN controller.
NFV Orchestrator (NFV-O): This group provides NFV services instantiation and lifecycle management.

The OPEN-O project uses a microservices-based architecture, and consists of more than 20 microservices. The central platform element is the Microservice Bus, which is the core microservice of the Common Service components group. Each platform component should register with this bus. During registration, each microservice specifies exposed APIs and endpoint addresses. As a result, the overall software architecture is flexible and can be easily extended with additional modules. OPEN-O Rel. 1 consists of both Java and python-based microservices.   
CORD/XOS
As mentioned above, CORD was introduced originally as an ONOS application, but grew into a standalone platform that covers both ONOS-managed SDN regions and service orchestration entities implemented by XOS.
Both ONOS and XOS provide a service framework to enable the Everything-as-a-Service (XaaS) concept. Thus, the reference CORD implementation consists of both a hardware Pod (consisting of whitebox switches and servers) and a software platform (such as ONOS or XOS with appropriate applications). From the software standpoint, the CORD platform implements an agent or driver-based approach in which XOS ensures that each registered driver used for a particular service is in an operational state (Fig. 5):

Figure 5 – CORD platform architecture
The CORD reference implementation consists of Java (ONOS and its applications) and python (XOS) software stacks. Additionally, Ansible is heavily used by the CORD for automation and configuration management
Gigaspaces Cloudify
From a high-level perspective, the platform consists of several different pieces, as you can see in figure 6:

Figure 6 – Cloudify platform architecture

Cloudify Manager is the orchestrator that performs deployment and lifecycle management of the applications or NSDs described in the templates, called blueprints.
The Cloudify Agents are used to manage workflow execution via an appropriate plugin.

To provide overall lifecycle management, Cloudify integrates third-party components such as:

Elasticsearch, used as a data store of the deployment state, including runtime data and logs data coming from various platform components.
Logstash, used to process log information coming from platform components and agents.
Riemann, used as a policy engine to process runtime decisions about availability, SLA and overall monitoring.
RabbitMQ, used as an async transport for communication among all platform components, including remote agents.

The orchestration functionality itself is provided by the ARIA TOSCA project, which defines the TOSCA-based blueprint format and deployment workflow engine. Cloudify “native” components and plugins are python applications.
Approach for NSD definition
The Network Service Descriptor (NSD) specifies components and the relations between them to be deployed on the top of the IaaS during the NFV service instantiation. Orchestration platforms typically use some templating language to define NSDs. While the industry in general considers TOSCA as a de-facto standard to define NSDs, alternative approaches are also available across various platforms.
OSM
OSM follows the official MANO specification, which has definitions both for NSDs and VNF Descriptors (VNFDs). To define NSD templates, YAML-based documents are used.  An NSD is processed by the OSM Service Orchestrator to instantiate a Network Service, which itself might include VNFs, Forwarding Graphs, and Links between them.  A VNFD is a deployment template that specifies a VNF in terms of deployment and operational behaviour requirements.  Additionally, a VNFD specifies connections between Virtual Deployment Units (VDUs) using internal Virtual Links (VLs). Each VDU in an OSM presentation relates to a VM or a container.  OSM uses an archive format for both NSDs and VNFDs; this archive consists of the service/VNF description, initial configuration scripts and other auxiliary details. You can find more information about the OSM NSD/VNFD structure at the official website.
OPEN-O
In OPEN-O, TOSCA-based templates are used to describe the NS/VNF Package. Both the TOSCA general service profile and the more recent NFV profile can be used for NSDs/VNFDs, which are further packaged according to the Cloud Service Archive (CSAR) format.
A CSAR is a zip archive that contains at least two directories: TOSCA-Metadata and Definitions. The TOSCA-Metadata directory contains information that describes the content of the CSAR and is referred to as the TOSCA metafile. The Definitions directory contains one or more TOSCA Definitions documents. These Definitions documents contain definitions of the cloud application to be deployed during CSAR processing. More details about OPEN-O NSD/VNFD definitions may be found at the official web site.
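To give a feel for what a Definitions document looks like, here is a minimal, illustrative TOSCA Simple Profile sketch; the node name and property values are invented for this example, and a real OPEN-O NS/VNF package would carry considerably more detail:
tosca_definitions_version: tosca_simple_yaml_1_0

description: Minimal illustrative template with a single compute node

topology_template:
  node_templates:
    example_vdu:
      type: tosca.nodes.Compute
      capabilities:
        host:
          properties:
            num_cpus: 2
            mem_size: 4 GB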
CORD/XOS
To define a new CORD service, you need to define both TOSCA-based templates and Python-based software components. In particular, when adding a new service, depending on its nature, you might alter one or more of the following platform elements:

TOSCA service definition files, appropriate models, specified as YAML text files
REST API models, specified in Python
XOS models, implemented as a Django application
Synchronizers, used to ensure the service is instantiated correctly and transitioned to the required state.

The overall service definition format is based on the TOSCA Simple Profile language specification and presented in the YAML format.
Gigaspaces Cloudify
To instantiate a service or application, Cloudify uses templates called “Blueprints” which are effectively orchestration and deployment plans. Blueprints are specified in the form of TOSCA YAML files  and describe the service topology as a set of nodes, relationships, dependencies, instantiation and configuration settings, monitoring, and maintenance. Other than the YAML itself, a Blueprint can include multiple external resources such as configuration and installation scripts (or Puppet Manifests, or Chef Recipes, and so on) and basically any other resource required to run the application. You can find more details about the structure of Blueprints here.
VNFM and VIM support
NFV service deployment is performed on an appropriate IaaS, which itself is a set of virtualized compute, network and storage resources.  The ETSI MANO reference architecture identifies a component to manage these virtualized resources. This component is referred to as the Virtual Infrastructure Manager (VIM). Traditionally, the open source community treats OpenStack/KVM as the "de-facto" standard VIM. However, an NFV service might span various VIM types and various hypervisors. Thus multi-VIM support is a common requirement for an orchestration engine.
Additionally, a separate element in a NFV MANO architecture is the VNF Manager, which is responsible for lifecycle management of the particular VNF. The VNFM component might be either generic, treating the VNF as a black box and performing similar operations for various VNFs, or there might be a vendor-specific VNFM that has unique capabilities for management of a given VNF. Both VIM and VNFM communication are performed via appropriate reference points, as defined by the NFV MANO architecture.
OSM
The OSM project was initially considered a multi-VIM platform, and at the time of this writing, it supports OpenStack, Vmware and OpenVIM. OpenVIM is a lightweight VIM implementation that is effectively a python wrapper around libvirt and a basic host networking configuration.
At the time of this writing, the OSM VCA has limited capabilities, but still can be considered a generic VNFM based on JuJu Charms. Further, it is possible to introduce support for vendor-specific VNFMs,  but additional development and integration efforts might be required on the Service Orchestrator (Rift.io) side.
OPEN-O
Release 1 of the  OPEN-O project supports only OpenStack as a VIM. This support is available as a Java-based driver for the NFVO component. For further releases, support for VMware as a VIM is planned.
The Open-O Rel.1 platform has a generic VNFM that is based on JuJu Charms. Furthermore, the pluggable architecture of the OPEN-O platform can support any vendor-specific VNFM, but additional development and integration efforts will be required.
CORD/XOS
At the time of this writing, the reference implementation of the CORD platform is architected around OpenStack as a platform to spawn NFV workloads. While there is no direct relationship to the NFV MANO architecture, the XOS orchestrator is responsible for VNF lifecycle management, and thus might be thought of as the entity that provides VNFM-like functions.
Gigaspaces Cloudify
When Cloudify was adapted for the NFV use case, it inherited plugins for OpenStack, VMware, Azure and others that were already available for general-purpose cloud deployments. So we can say that Cloudify has MultiVIM support and any arbitrary VIM support may be added via the appropriate plugin. Following Gigaspaces’ reference model for NFV, there is a  generic VNFM that can be used with a Cloudify NFV orchestrator out of the box. Additional vendor-specific VNFM can be onboarded, but appropriate plugin development is required.
Capabilities to provision end-to-end service
NFV service provisioning consists of multiple steps, such as VNF instantiation, configuration, underlay network provisioning, and so on.  Moreover, an NFV service might span multiple clouds and geographical locations. This kind of architecture requires complex workflow management by an NFV Orchestrator, and coordination and synchronisation between infrastructure entities. This section provides an overview of the various orchestrators' abilities to provision end-to-end service.
OSM
The OSM orchestration platform supports NFV service deployment spanning multiple VIMs. In particular, the OSM RO component (openmano) stores information about all VIMs available for deployment, while the Service Orchestrator can use this information during the NSD instantiation process. Meanwhile, underlay networking between VIMs should be preconfigured. There are plans to enable End-to-End network provisioning in future, but OSM Rel. 1 has no such capability.
OPEN-O
By design, the OPEN-O platform considers both NFV and SDN infrastructure regions that might be used to provision end-to-end service. So technically, you can say that Multisite NFV service can be provisioned by the OPEN-O platform. However, the OPEN-O Rel.1 platform implements just a couple of specific use cases, and at the time of this writing, you can't use it to provision an arbitrary Multisite NFV service.
CORD/XOS
The reference implementation of the CORD platform defines the provisioning of a service over a defined CORD Pod. To enable Multisite NFV Service instantiation, an additional orchestration level on the top of CORD/XOS is required. So from this perspective, at the time of this writing, CORD is not capable of instantiating a Multisite NFV service.
Gigaspaces Cloudify
As Cloudify originally supported application deployment over multiple IaaS providers, technically it is possible to create a blueprint to deploy an NFV service that spans across multiple VIMs. However underlay network provisioning might require specific plugin development.
Interaction with standardization bodies and relevant communities
Each of the reviewed projects has strong industry community support. Depending on the nature of each community and the priorities of the project, there is a different focus on collaboration with an industry, other open source projects and standardization bodies.
OSM
Being hosted by ETSI, the OSM project closely collaborates with the ETSI NFV working group and follows the appropriate specifications, reference points and interfaces. At the time of this writing there is no collaboration between OSM and the OPNFV project, but it is under consideration by the OSM community. The same applies to other relevant open source projects, such as OpenStack and OpenDaylight; these projects are used as-is by the OSM platform without cross collaboration.
OPEN-O
The OPEN-O project aims to integrate both SDN and NFV solutions to provide end-to-end service, so there is formal communication with the ETSI NFV group, while the project itself doesn't strictly follow the interfaces defined by the ETSI NFV IFA working group. On the other hand, there is a strong integration effort with the OPNFV community via the initiation of the OPERA project, which aims to integrate the OPEN-O platform as a MANO orchestrator for the OPNFV platform.  Additionally, there is strong interaction between OPEN-O and MEF as a part of the OpenLSO platform, and with the ONOS project towards seamless integration and enabling end-to-end SDN orchestration.
CORD/XOS
Having originated at ON.Lab (which recently merged with the ONF), this project follows the approach and technology stack defined by ONF. As of the time of this writing, the CORD project has no formal presence in OPNFV. Meanwhile, there is communication with MEF and ONF towards requirements gathering and use cases for the CORD project. In particular, MEF explicitly refers to E-CORD and its applicability in defining their OpenCS MEF project.
Gigaspaces Cloudify
While the Cloudify platform is an open source product, it is mostly developed by a single company, thus the overall roadmap and community strategy is defined by Gigaspaces. This also relates to any collaboration with standardisation bodies: GigaSpaces participates in ETSI-approved NFV PoCs where Cloudify is used as a service orchestrator, and in an MEF-initiated LSO Proof of Concept, where Cloudify is used to provision E-Line EVPL service, and so on.  Additionally, the Cloudify platform is used separately by the OPNFV community in the FuncTest project for vIMS test cases, but this mostly relates to Cloudify use cases, rather than vendor-initiated community collaboration.
Conclusions
Summarising the current state of the NFV orchestration platforms, we may conclude the following:
The OSM platform is already suitable for evaluation purposes, and has a relatively simple and straightforward architecture. Several sample NSDs and VNFDs are available for evaluation in the public Gerrit repo. As a result, the platform can be easily installed and integrated with an appropriate VIM to evaluate basic NFV capabilities, trial use cases and PoCs. The project is relatively young, however, and a number of features still require development and will be available in upcoming releases. Furthermore, the lack of support for end-to-end NFV service provisioning across multiple regions, including underlay network provisioning, should be considered in relation to your desired use case. Considering the mature OSM community and its close interaction with the ETSI NFV group, this project might emerge as a viable option for production-grade NFV orchestration.
At the time of this writing, the main visible benefit of the OPEN-O platform is its flexible and extendable microservices-based architecture. The OPEN-O approach considers end-to-end service provisioning spanning multiple SDN and NFV regions from the very beginning. Additionally, the OPEN-O project actively collaborates with the OPNFV community toward tight integration of the orchestrator with the OPNFV platform. Unfortunately, at the time of this writing, the OPEN-O platform requires further development to be capable of providing arbitrary NFV service provisioning. Additionally, a lack of documentation makes it hard to understand the microservice logic and the interaction workflow. Meanwhile, the recent OPEN-O and ECOMP merge under the ONAP project creates a powerful open source community with strong industry support, which may reshape the overall NFV orchestration market.
The CORD project is the right option when OpenFlow and whiteboxes are the primary option for computing and networking infrastructure. The platform considers multiple use cases, and a large community is involved in platform development.  Meanwhile, at the time of this writing, the CORD platform is a relatively "niche" solution built around OpenFlow and related technologies pushed to the market by ONF.
Gigaspaces Cloudify is a platform that already has a relatively long history, and at the time of this writing it emerges as the most mature orchestration solution among the reviewed platforms. While the NFV use case for the Cloudify platform wasn't originally considered, Cloudify's pluggable and extendable architecture and embedded workflow engine enable arbitrary NFV service provisioning. However, if you do consider Cloudify as an orchestration engine, be sure to consider the risk of having the decision-making process regarding the overall platform strategy controlled solely by Gigaspaces.
References

OSM official website
OSM project wiki
OPEN-O project official website
CORD project official website
Cloudify platform official website
Network Functions Virtualisation (NFV); Management and Orchestration
Cloudify approach for NFV Management & Orchestration
ARIA TOSCA project
TOSCA Simple Profile Specification
TOSCA Simple Profile for Network Functions Virtualization
OPNFV OPERA project
OpenCS project   
MEF OpenLSO and OpenCS projects
OPNFV vIMS functional testing
OSM Data Models; NSD and VNFD format
Cloudify Blueprint overview

Quelle: Mirantis