Kubernetes Replication Controller, Replica Set and Deployments: Understanding replication options

As a container management tool, Kubernetes was designed to orchestrate multiple containers and replication, and in fact there are currently several ways to do it. In this article, we'll look at three options: Replication Controllers, Replica Sets, and Deployments.
What is Kubernetes replication for?
Before we go into how you would do replication, let's talk about why. Typically you would want to replicate your containers (and thereby your applications) for several reasons, including:

Reliability: By having multiple instances of an application, you prevent problems if one or more fails. This is particularly true if the system replaces any containers that fail.
Load balancing: Having multiple instances of a container enables you to easily send traffic to different instances to prevent overloading of a single instance or node. This is something that Kubernetes does out of the box, making it extremely convenient.
Scaling: When load does become too much for the number of existing instances, Kubernetes enables you to easily scale up your application, adding additional instances as needed.

Replication is appropriate for numerous use cases, including:

Microservices-based applications: In these cases, multiple small applications provide very specific functionality.
Cloud native applications: Because cloud-native applications are based on the assumption that any component can fail at any time, a replicated environment is a perfect fit for implementing them, as multiple instances are baked into the architecture.
Mobile applications: Mobile applications can often be architected so that the mobile client interacts with an isolated version of the server application.

Kubernetes has multiple ways in which you can implement replication.
Types of Kubernetes replication
In this article, we'll discuss three different forms of replication: the Replication Controller, Replica Sets, and Deployments.
Replication Controller
The Replication Controller is the original form of replication in Kubernetes. It's being replaced by Replica Sets, but it's still in wide use, so it's worth understanding what it is and how it works.

A Replication Controller is a structure that enables you to easily create multiple pods, then make sure that that number of pods always exists. If a pod does crash, the Replication Controller replaces it.

Replication Controllers also provide other benefits, such as the ability to scale the number of pods, and to update or delete multiple pods with a single command.

You can create a Replication Controller with an imperative command, or declaratively, from a file.  For example, create a new file called rc.yaml and add the following text:
apiVersion: v1
kind: ReplicationController
metadata:
 name: soaktestrc
spec:
 replicas: 3
 selector:
   app: soaktestrc
 template:
   metadata:
     name: soaktestrc
     labels:
       app: soaktestrc
   spec:
     containers:
      - name: soaktestrc
        image: nickchase/soaktest
        ports:
        - containerPort: 80
Most of this structure should look familiar (and we'll see it again when we discuss Deployments below): we've got the name of the actual Replication Controller (soaktestrc), and we're designating that we should have 3 replicas, each of which is defined by the template. The selector defines how we know which pods belong to this Replication Controller.

Now tell Kubernetes to create the Replication Controller based on that file:
# kubectl create -f rc.yaml
replicationcontroller "soaktestrc" created
Let's take a look at what we have using the describe command:
# kubectl describe rc soaktestrc
Name:           soaktestrc
Namespace:      default
Image(s):       nickchase/soaktest
Selector:       app=soaktestrc
Labels:         app=soaktestrc
Replicas:       3 current / 3 desired
Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type   Reason                   Message
 ---------     --------        -----   ----                            -------------   ------  ------                  -------
 1m            1m              1       {replication-controller }                       Normal SuccessfulCreate Created pod: soaktestrc-g5snq
 1m            1m              1       {replication-controller }                       Normal SuccessfulCreate Created pod: soaktestrc-cws05
 1m            1m              1       {replication-controller }                       Normal SuccessfulCreate Created pod: soaktestrc-ro2bl
As you can see, we've got the Replication Controller, and there are 3 replicas of the 3 that we wanted. All 3 of them are currently running. You can also see the individual pods listed underneath, along with their names. If you ask Kubernetes to show you the pods, you can see those same names show up:
# kubectl get pods
NAME               READY     STATUS    RESTARTS   AGE
soaktestrc-cws05   1/1       Running   0          3m
soaktestrc-g5snq   1/1       Running   0          3m
soaktestrc-ro2bl   1/1       Running   0          3m
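Incidentally, a Replication Controller can be scaled directly as well; a sketch of bumping the count to 5 (not actually run here) would look like this:
# kubectl scale rc soaktestrc --replicas=5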
Next we'll look at Replica Sets, but first let's clean up:
# kubectl delete rc soaktestrc
replicationcontroller "soaktestrc" deleted

# kubectl get pods
As you can see, when you delete the Replication Controller, you also delete all of the pods that it created.
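If you ever want to delete a Replication Controller but keep its pods running, kubectl's delete command takes a --cascade flag; a sketch (not run here):
# kubectl delete rc soaktestrc --cascade=false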
Replica Sets
Replica Sets are a sort of hybrid, in that they are in some ways more powerful than Replication Controllers, and in others they are less powerful.

Replica Sets are declared in essentially the same way as Replication Controllers, except that they have more options for the selector.  For example, we could create a Replica Set like this:
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
 name: soaktestrs
spec:
 replicas: 3
 selector:
   matchLabels:
     app: soaktestrs
 template:
   metadata:
     labels:
       app: soaktestrs
        environment: dev
   spec:
     containers:
      - name: soaktestrs
        image: nickchase/soaktest
        ports:
        - containerPort: 80
In this case, it's more or less the same as when we were creating the Replication Controller, except that we're using matchLabels instead of a single label equality selector. But we could just as easily have said:

spec:
 replicas: 3
 selector:
    matchExpressions:
      - {key: app, operator: In, values: [soaktestrs, soaktest]}
      - {key: tier, operator: NotIn, values: [production]}
 template:
   metadata:

In this case, we're looking at two different conditions:

The app label must be soaktestrs or soaktest
The tier label (if it exists) must not be production

Let's go ahead and create the Replica Set and get a look at it:
# kubectl create -f replicaset.yaml
replicaset "soaktestrs" created

# kubectl describe rs soaktestrs
Name:           soaktestrs
Namespace:      default
Image(s):       nickchase/soaktest
Selector:       app in (soaktest,soaktestrs),tier notin (production)
Labels:         app=soaktestrs
Replicas:       3 current / 3 desired
Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type    Reason                   Message
 ---------     --------        -----   ----                            -------------   ------  ------                  -------
 1m            1m              1       {replicaset-controller }                        Normal  SuccessfulCreate Created pod: soaktestrs-it2hf
 1m            1m              1       {replicaset-controller }                       Normal  SuccessfulCreate Created pod: soaktestrs-kimmm
 1m            1m              1       {replicaset-controller }                        Normal  SuccessfulCreate Created pod: soaktestrs-8i4ra

# kubectl get pods
NAME               READY     STATUS    RESTARTS   AGE
soaktestrs-8i4ra   1/1       Running   0          1m
soaktestrs-it2hf   1/1       Running   0          1m
soaktestrs-kimmm   1/1       Running   0          1m
As you can see, the output is pretty much the same as for a Replication Controller (except for the selector), and for most intents and purposes, they are similar. The major difference is that the rolling-update command works with Replication Controllers, but won't work with a Replica Set (see the sketch below). This is because Replica Sets are meant to be used as the backend for Deployments.
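For example, with the Replication Controller from the previous section, a manual rolling update could have been kicked off like this (a sketch only; we haven't run it here, and the v2 image tag is hypothetical):
# kubectl rolling-update soaktestrc --image=nickchase/soaktest:v2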

Let's clean up before we move on.
# kubectl delete rs soaktestrs
replicaset "soaktestrs" deleted

# kubectl get pods
Again, the pods that were created are deleted when we delete the Replica Set.
Deployments
Deployments are intended to replace Replication Controllers. They provide the same replication functions (through Replica Sets) and also the ability to roll out changes and roll them back if necessary.

Let's create a simple Deployment using the same image we've been using. First create a new file, deployment.yaml, and add the following:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
 name: soaktest
spec:
 replicas: 5
 template:
   metadata:
     labels:
       app: soaktest
   spec:
     containers:
      - name: soaktest
        image: nickchase/soaktest
        ports:
        - containerPort: 80
Now go ahead and create the Deployment:
# kubectl create -f deployment.yaml
deployment "soaktest" created
Now let's go ahead and describe the Deployment:
# kubectl describe deployment soaktest
Name:                   soaktest
Namespace:              default
CreationTimestamp:      Sun, 05 Mar 2017 16:21:19 +0000
Labels:                 app=soaktest
Selector:               app=soaktest
Replicas:               5 updated | 5 total | 5 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:         <none>
NewReplicaSet:          soaktest-3914185155 (5/5 replicas created)
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type    Reason                   Message
 ---------     --------        -----   ----                            -------------   ------  ------                   -------
 38s           38s             1       {deployment-controller }                        Normal  ScalingReplicaSet        Scaled up replica set soaktest-3914185155 to 3
 36s           36s             1       {deployment-controller }                        Normal  ScalingReplicaSet        Scaled up replica set soaktest-3914185155 to 5
As you can see, rather than listing the individual pods, Kubernetes shows us the Replica Set.  Notice that the name of the Replica Set is the Deployment name and a hash value.

A complete discussion of updates is out of scope for this article (we'll cover it in the future), but here are a couple of interesting things to note:

The StrategyType is RollingUpdate. This value can also be set to Recreate.
By default we have a minReadySeconds value of 0; we can change that value if we want pods to be up and running for a certain amount of time (say, to load resources) before they're truly considered "ready".
The RollingUpdateStrategy shows that we have a limit of 1 maxUnavailable, meaning that when we're updating the Deployment, we can have up to 1 missing pod before it's replaced, and 1 maxSurge, meaning we can have one extra pod as we scale the new pods back up. (These settings live in the Deployment spec itself; see the sketch after this list.)
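For reference, a minimal sketch of where these settings go in the Deployment spec, using the same extensions/v1beta1 format as above (the values here are illustrative, not what we deployed):
spec:
 minReadySeconds: 10
 strategy:
   type: RollingUpdate
   rollingUpdate:
     maxUnavailable: 1
     maxSurge: 1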

As you can see, the Deployment is backed, in this case, by Replica Set soaktest-3914185155. If we go ahead and look at the list of actual pods...
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3914185155-7gyja   1/1       Running   0          2m
soaktest-3914185155-lrm20   1/1       Running   0          2m
soaktest-3914185155-o28px   1/1       Running   0          2m
soaktest-3914185155-ojzn8   1/1       Running   0          2m
soaktest-3914185155-r2pt7   1/1       Running   0          2m
... you can see that their names consist of the Replica Set name and an additional identifier.
Passing environment information: identifying a specific pod
Before we look at the different ways that we can affect replicas, let's set up our Deployment so that we can see which pod we're actually hitting with a particular request. Conveniently, the image we've been using displays the pod name in its output:
<?php
$limit = isset($_GET['limit']) ? $_GET['limit'] : 250;
// Burn some CPU so each request takes a measurable amount of time
for ($i = 0; $i < $limit; $i++){
    $d = tan(atan(tan(atan(tan(atan(tan(atan(tan(atan(123456789.123456789))))))))));
}
echo "Pod ".$_SERVER['POD_NAME']." has finished!\n";
?>
As you can see, we're displaying an environment variable, POD_NAME. Since each container is essentially its own server, this will display the name of the pod that actually executed the PHP.

Now we just have to pass that information to the pod.

We do that through the use of the Kubernetes Downward API, which lets us pass environment variables into the containers:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
 name: soaktest
spec:
 replicas: 3
 template:
   metadata:
     labels:
       app: soaktest
   spec:
     containers:
      - name: soaktest
        image: nickchase/soaktest
        ports:
        - containerPort: 80
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
As you can see, we're passing an environment variable and assigning it a value from the pod's own metadata (in this case, the name of the pod itself). (You can find more information on metadata here.)
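The Downward API isn't limited to the pod name; other standard fields can be exposed the same way. A sketch (the variable names POD_NAMESPACE and POD_IP are our own choices, but the fieldPath values are standard Downward API fields):
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP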

So let's go ahead and clean up the Deployment we created earlier...
# kubectl delete deployment soaktest
deployment "soaktest" deleted

# kubectl get pods
... and recreate it with the new definition:
# kubectl create -f deployment.yaml
deployment "soaktest" created
Next let's go ahead and expose the pods to outside network requests so we can call the nginx server that is inside the containers:
# kubectl expose deployment soaktest --port=80 --target-port=80 --type=NodePort
service "soaktest" exposed
Now let's describe the service we just created so we can find out what port the Deployment is listening on:
# kubectl describe services soaktest
Name:                   soaktest
Namespace:              default
Labels:                 app=soaktest
Selector:               app=soaktest
Type:                   NodePort
IP:                     11.1.32.105
Port:                   <unset> 80/TCP
NodePort:               <unset> 30800/TCP
Endpoints:              10.200.18.2:80,10.200.18.3:80,10.200.18.4:80 + 2 more…
Session Affinity:       None
No events.
As you can see, the NodePort is 30800 in this case; in your case it will be different, so make sure to check. That means that each of the nodes involved is listening on port 30800, and requests to that port are being forwarded to port 80 of the containers. So we can call the PHP script with:
http://[HOST_NAME OR HOST_IP]:[PROVIDED PORT]
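If you'd rather look up the assigned NodePort programmatically instead of reading it off the describe output, something like this should work (a sketch using kubectl's jsonpath output):
# kubectl get service soaktest -o jsonpath='{.spec.ports[0].nodePort}'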
In my case, I've mapped the IPs of my Kubernetes hosts to hostnames to make my life easier, and the PHP file is the default for nginx, so I can simply call:
# curl http://kube-2:30800
Pod soaktest-3869910569-xnfme has finished!
So as you can see, this time the request was served by pod soaktest-3869910569-xnfme.
Recovering from crashes: Creating a fixed number of replicas
Now that we know everything is running, let's take a look at some replication use cases.

The first thing we think of when it comes to replication is recovering from crashes. If there are 5 (or 50, or 500) copies of an application running, and one or more crashes, it's not a catastrophe. Kubernetes improves the situation further by ensuring that if a pod goes down, it's replaced.

Let's see this in action. Start by refreshing our memory about the pods we've got running:
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-qqwqc   1/1       Running   0          11m
soaktest-3869910569-qu8k7   1/1       Running   0          11m
soaktest-3869910569-uzjxu   1/1       Running   0          11m
soaktest-3869910569-x6vmp   1/1       Running   0          11m
soaktest-3869910569-xnfme   1/1       Running   0          11m
If we repeatedly call the Deployment, we can see that we get different pods on a random basis:
# curl http://kube-2:30800
Pod soaktest-3869910569-xnfme has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-x6vmp has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-uzjxu has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-x6vmp has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-uzjxu has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-qu8k7 has finished!
To simulate a pod crashing, let's go ahead and delete one:
# kubectl delete pod soaktest-3869910569-x6vmp
pod "soaktest-3869910569-x6vmp" deleted

# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-516kx   1/1       Running   0          18s
soaktest-3869910569-qqwqc   1/1       Running   0          27m
soaktest-3869910569-qu8k7   1/1       Running   0          27m
soaktest-3869910569-uzjxu   1/1       Running   0          27m
soaktest-3869910569-xnfme   1/1       Running   0          27m
As you can see, pod *x6vmp is gone, and it's been replaced by *516kx. (You can easily find the new pod by looking at the AGE column.)

If we once again call the Deployment, we can (eventually) see the new pod:
# curl http://kube-2:30800
Pod soaktest-3869910569-516kx has finished!
Now let's look at changing the number of pods.
Scaling up or down: Manually changing the number of replicas
One common task is to scale up a Deployment in response to additional load. Kubernetes has autoscaling, but we'll talk about that in another article. For now, let's look at how to do this task manually.

The most straightforward way is to simply use the scale command:
# kubectl scale --replicas=7 deployment/soaktest
deployment "soaktest" scaled

# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-2w8i6   1/1       Running   0          6s
soaktest-3869910569-516kx   1/1       Running   0          11m
soaktest-3869910569-qqwqc   1/1       Running   0          39m
soaktest-3869910569-qu8k7   1/1       Running   0          39m
soaktest-3869910569-uzjxu   1/1       Running   0          39m
soaktest-3869910569-xnfme   1/1       Running   0          39m
soaktest-3869910569-z4rx9   1/1       Running   0          6s
In this case, we specify a new number of replicas, and Kubernetes adds enough to bring it to the desired level, as you can see.
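The scale command is the quickest route, but you can also change the count declaratively; a sketch of two alternatives (not run here):
# kubectl edit deployment soaktest       # change spec.replicas in the editor that opens, then save
# kubectl apply -f deployment.yaml       # after editing the replicas value in the file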

One thing to keep in mind is that Kubernetes isn't going to scale the Deployment down to be below the level at which you first started it up. For example, if we try to scale back down to 4...
# kubectl scale --replicas=4 -f deployment.yaml
deployment "soaktest" scaled

# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-l5wx8   1/1       Running   0          11s
soaktest-3869910569-qqwqc   1/1       Running   0          40m
soaktest-3869910569-qu8k7   1/1       Running   0          40m
soaktest-3869910569-uzjxu   1/1       Running   0          40m
soaktest-3869910569-xnfme   1/1       Running   0          40m
... Kubernetes only brings us back down to 5, because that's what was specified by the original deployment.
Deploying a new version: Replacing replicas by changing their label
Another way you can use Deployments is to make use of the selector. In other words, if a Deployment controls all the pods with a tier value of dev, changing a pod's tier label to prod will remove it from the Deployment's sphere of influence.

This mechanism enables you to selectively replace individual pods. For example, you might move pods from a dev environment to a production environment, or you might do a manual rolling update, updating the image, then removing some fraction of pods from the Deployment; when they're replaced, it will be with the new image. If you're happy with the changes, you can then replace the rest of the pods.

Let's see this in action. As you recall, this is our Deployment:
# kubectl describe deployment soaktest
Name:                   soaktest
Namespace:              default
CreationTimestamp:      Sun, 05 Mar 2017 19:31:04 +0000
Labels:                 app=soaktest
Selector:               app=soaktest
Replicas:               3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:         <none>
NewReplicaSet:          soaktest-3869910569 (3/3 replicas created)
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type              Reason                  Message
 ---------     --------        -----   ----                            -------------   ------            ------                  -------
 50s           50s             1       {deployment-controller }                        Normal            ScalingReplicaSet       Scaled up replica set soaktest-3869910569 to 3
And these are our pods:
# kubectl describe replicaset soaktest-3869910569
Name:           soaktest-3869910569
Namespace:      default
Image(s):       nickchase/soaktest
Selector:       app=soaktest,pod-template-hash=3869910569
Labels:         app=soaktest
               pod-template-hash=3869910569
Replicas:       5 current / 5 desired
Pods Status:    5 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type              Reason                  Message
 ---------     --------        -----   ----                            -------------   ------            ------                  -------
 2m            2m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-0577c
 2m            2m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-wje85
 2m            2m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-xuhwl
 1m            1m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-8cbo2
 1m            1m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-pwlm4
We can also get a list of pods by label:
# kubectl get pods -l app=soaktest
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-0577c   1/1       Running   0          7m
soaktest-3869910569-8cbo2   1/1       Running   0          6m
soaktest-3869910569-pwlm4   1/1       Running   0          6m
soaktest-3869910569-wje85   1/1       Running   0          7m
soaktest-3869910569-xuhwl   1/1       Running   0          7m
So those are our original soaktest pods; what if we wanted to add a new label?  We can do that on the command line:
# kubectl label pods soaktest-3869910569-xuhwl experimental=true
pod "soaktest-3869910569-xuhwl" labeled

# kubectl get pods -l experimental=true
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-xuhwl   1/1       Running   0          14m
So now we have one experimental pod. But since the experimental label has nothing to do with the selector for the Deployment, it doesn't affect anything.

So what if we change the value of the app label, which the Deployment is looking at?
# kubectl label pods soaktest-3869910569-wje85 app=notsoaktest --overwrite
pod "soaktest-3869910569-wje85" labeled
In this case, we need to use the --overwrite flag because the app label already exists. Now let's look at the existing pods.
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-0577c   1/1       Running   0          17m
soaktest-3869910569-4cedq   1/1       Running   0          4s
soaktest-3869910569-8cbo2   1/1       Running   0          16m
soaktest-3869910569-pwlm4   1/1       Running   0          16m
soaktest-3869910569-wje85   1/1       Running   0          17m
soaktest-3869910569-xuhwl   1/1       Running   0          17m
As you can see, we now have six pods instead of five, with a new pod having been created to replace *wje85, which was removed from the deployment. We can see the changes by requesting pods by label:
# kubectl get pods -l app=soaktest
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-0577c   1/1       Running   0          17m
soaktest-3869910569-4cedq   1/1       Running   0          20s
soaktest-3869910569-8cbo2   1/1       Running   0          16m
soaktest-3869910569-pwlm4   1/1       Running   0          16m
soaktest-3869910569-xuhwl   1/1       Running   0          17m
Now, there is one wrinkle that you have to take into account; because we've removed this pod from the Deployment, the Deployment no longer manages it. So if we were to delete the Deployment...
# kubectl delete deployment soaktest
deployment "soaktest" deleted
The pod remains:
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-wje85   1/1       Running   0          19m
You can also easily replace all of the pods in a Deployment using the --all flag, as in:
# kubectl label pods --all app=notsoaktesteither --overwrite
But remember that you'll have to delete them all manually!
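Since no controller owns the relabeled pods any more, cleaning them up means deleting them by their new label; a sketch, using the label value from the example above:
# kubectl delete pods -l app=notsoaktesteither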
Conclusion
Replication is a large part of Kubernetes' purpose in life, so it's no surprise that we've just scratched the surface of what it can do, and how to use it. It is useful for reliability purposes, for scalability, and even as a basis for your architecture.

What do you anticipate using replication for, and what would you like to know more about? Let us know in the comments!
Source: Mirantis

5 IBM professional certifications to know

It’s one thing to know; it’s another to be certified.
The IBM Professional Certification Program provides industry-recognized credentials focused on IBM technology and solutions relevant to IBM customers, business partners and employees. These credentials help demonstrate the knowledge and skills required to excel in a given area of information technology. Certification from IBM is available across a variety of technical areas including Cloud, Watson, IOT, Security and more. Visit the IBM Professional Certification Program to learn more.
As with previous years, attendees at InterConnect 2017 will have the opportunity to take IBM Professional Certification exams. Unlike previous years, however, InterConnect 2017 attendees have the opportunity to sit for an unlimited number of exams.
With a focus on cloud platforms and cognitive solutions, let’s take a look at five of the certification opportunities available at InterConnect 2017:
IBM certified solution advisor - Cloud reference architecture
This broad IBM Cloud certification suits a person who can clearly explain the benefits and underlying concepts of cloud computing. They can also demonstrate how IBM Cloud solution offerings can help customers realize these benefits.
IBM certified solution architect - Cloud platform solution
Professionals with the skills to design, plan and architect a cloud infrastructure should consider this certification. The cloud platform solution architect demonstrates the ability to evaluate customers’ current state and architect an IBM Cloud Infrastructure solution.
Certified application developer - Cloud platform
The certified application developer is a technical professional who understands concepts essential to the development of cloud applications. They have experience using the IBM Bluemix platform and are able to consume Bluemix services in an application.
Certified advanced application developer - Cloud platform
This newly released certificate focuses on technical professionals who understand advanced concepts essential to the development of cloud applications. They have demonstrated understanding of hybrid cloud best practices and can build applications that span multiple cloud and on-premises environments.
Certified application developer - Watson
The certified Watson application developer understands concepts essential to the development of applications using IBM Watson services on Bluemix. Experience using the Bluemix platform and Watson Developer Cloud are essential to passing this exam.
The IBM Professional Certification Program is linked to the IBM open badge program. Badges provide digital recognition for skills attained and offer a method of sharing credentials on social media. Badges are issued almost immediately after completion of an IBM Professional Certification.
All full-experience badge holders at InterConnect are eligible to take as many free exams from the entire IBM Certification exam portfolio as desired. No registration is necessary, and seating is on a first-come, first-served basis.
A complete listing of available IBM tests is available online on the IBM Professional Certification Program's website. See you in the exam hall.
Source: Thoughts on Cloud

Using Software Factory to manage Red Hat OpenStack Platform lifecycle

by Nicolas Hicher, Senior Software Engineer - Continuous Integration and Delivery
Software-Factory
Software-Factory is a collection of services that provides a powerful platform to build software. It enables the same workflow used to develop OpenStack: using Gerrit for code reviews, Zuul/Nodepool/Jenkins as a CI system, and Storyboard for stories and issues tracker. Also, it ensures a reproducible test environment with ephemeral Jenkins slaves.
In this video, Nicolas Hicher will demonstrate how to use Software-Factory to manage a Red Hat OpenStack Platform 9 lifecycle. We will do a deployment and an update on a virtual environment (within an OpenStack tenant).

Python-tripleo-helper
For this demo, we will do a deployment within an OpenStack tenant using python-tripleo-helper, a tool developed by the engineering team that builds DCI. With this tool, we can do a deployment within an OpenStack tenant using the same steps as a full deployment (boot servers via IPMI, discover nodes, introspection, and deployment). We also patched python-tripleo-helper to add an update command to update the OpenStack deployment (changing parameters, not doing a major upgrade).
Workflow
The workflow is simple and robust:

Submit a review with the templates, the installation script and the tests scripts. A CI job validates the templates.
When the review is approved, the gate jobs are executed (installation or update).
After the deployment/update is completed, the review is merged.

Deployment
For this demo, we will do a simple deployment (1 controller node and 1 compute node) with Red Hat OpenStack Platform 9.0.
Limitations
Since we do the deployment in a virtual environment, we can't test some advanced features, especially for networking and storage. But other features of the deployed cloud can be validated using the appropriate environments.
Improvements
We plan to continue to improve this workflow to be able to:

Do a major upgrade from Red Hat OpenStack Platform (X to X+1).
Manage a bare metal deployment.
Improve the Ceph deployment to be able to use more than one object storage device (OSD).
Use smoke jobs like tempest to validate the deployment before merging the review.

Also, it should be possible to manage pre-production and production environments within a single git repository, the check job will do the tasks on pre production and after receiving a peer’s validation, the same actions will be applied on production.
Source: RedHat Stack

What is the best NFV Orchestration platform? A review of OSM, Open-O, CORD, and Cloudify

As Network Functions Virtualization (NFV) technology matures, multiple NFV orchestration solutions have emerged and 2016 was a busy year. While some commercial products were already available on the market, multiple open source initiatives were also launched, with most  delivering initial code releases, and others planning to roll-out software artifacts later this year.
With so much going on, we thought we'd provide you with a technical overview of some of the various NFV orchestration options, so you can get a feel for what's right for you. In particular, we'll cover:

Open Source MANO (OSM)
OPEN-O
CORD
Gigaspaces Cloudify

In addition, multiple NFV projects have been funded under European Union R&D programs. Projects such as OpenBaton, T-NOVA/TeNor and SONATA have their codebases available in public repos, but industry support, involvement of external contributors and further sustainability might be challenging for these projects, so for now we'll consider them out of scope for this post. We'll review and compare the remaining orchestration projects across the following areas:

General overview and current project state
Compliance with NFV MANO reference architecture
Software architecture
NSD definition approach
VIM and VNFM support
Capabilities to provision End to End service
Interaction with relevant standardization bodies and communities

General overview and current project state
We’ll start with a general overview of each project, along with, its ambitions, development approach, the involved community, and related information.
OSM
The OpenSource MANO project was officially launched at the World Mobile Congress (WMC) in 2016. Starting with several founding members, including Mirantis, Telefónica, BT, Canonical, Intel, RIFT.io, Telekom Austria Group and Telenor, the OSM community now includes 55 different organisations. The OSM project is hosted at ETSI facilities and targets delivering an open source management and orchestration (MANO) stack closely aligned with the ETSI NFV reference architecture.
OSM issued two releases, Rel 0 and Rel 1, during 2016. The most recent at the time of this writing, OSM Rel. 1, has been publicly available since October 2016, and can be downloaded from the official website. Project governance is managed via several groups, including the Technical Steering group responsible for OSM's technical aspects, the Leadership group, and the End User Advisory group. You can find more details about the OSM project at the official Wiki.
OPEN-O
The OPEN-O project is hosted by the Linux Foundation and was also formally announced at the 2016 MWC. Initial project advocates were mostly Asian companies, such as Huawei, ZTE and China Mobile. Eventually, the project got further support from Brocade, Ericsson, GigaSpaces, Intel and others.
The main project objective is to enable end-to-end service agility across multiple domains using a unified platform for NFV and SDN orchestration. The OPEN-O project delivered its first release in November 2016 and plans to roll out future releases on a 6-month cycle. Overall project governance is managed by the project Board, with technology-specific issues managed by the Technical Steering Committee. You can find more general details about the OPEN-O project at the project website.
CORD/XOS
Originally CORD (Central Office Re-architected as a Datacenter) was introduced as one of the use cases for the ONOS SDN Controller, but it grew up into a separate project under ON.Lab governance. (ON.Lab recently merged with the Open Networking Foundation.)
The ultimate project goal is to combine NFV, SDN and the elasticity of commodity clouds to bring datacenter economics and cloud agility to the Telco Central Office. The reference implementation of CORD combines commodity servers, white-box switches, and disaggregated access technologies with open source software to provide an extensible service delivery platform. CORD Rel.1 and Rel.2 integrate a number of open source projects, such as ONOS to manage SDN infrastructure, OpenStack to deploy NFV workloads, and XOS as a service orchestrator. To reflect the uniqueness of different use cases, CORD introduces a number of service profiles, such as Mobile (M-CORD), Residential (R-CORD), and Enterprise (E-CORD). You can find more details about the CORD project at the official project website.
Gigaspaces Cloudify
Gigaspaces’ Cloudify is the open source TOSCA-based cloud orchestration software platform.  Originally introduced as a pure cloud orchestration solution (similar to OpenStack HEAT), the platform was further expanded to include NFV-related use cases, and the Cloudify Telecom Edition emerged.  
Considering its original platform purpose, Cloudify has an extensible architecture and can interact with multiple IaaS/PaaS providers such as AWS, OpenStack, Microsoft Azure and so on. Overall, Cloudify software is open source under the Apache 2 license and the source code is hosted in a public repository. While the Cloudify platform is open source and welcomes community contributions, the overall project roadmap is defined by Gigaspaces. You can find more details about the Cloudify platform at the official web site.
Compliance with ETSI NFV MANO reference architecture
While a number of alternative and specific approaches, such as Lifecycle Service Orchestration (LSO) from the Metro Ethernet Forum, have emerged, broad industry support and involvement has helped to promote ETSI NFV Management and Orchestration (MANO) as the de-facto reference NFV architecture. From this standpoint, NFV MANO provides comprehensive guidance for the entities, reference points and workflows to be implemented by appropriate NFV platforms (fig. 1):

Figure 1 - ETSI NFV MANO reference architecture
OSM
As this project is hosted by ETSI, the OSM community tries to be compliant with the ETSI NFV MANO reference architecture, respecting appropriate IFA working group specifications. Key reference points, such as Or-Vnfm and Or-Vi might be identified within OSM components. The VNF and Network Service (NS) catalog are explicitly present in an OSM service orchestrator (SO) component. Meanwhile, a lot of further development efforts are planned to reach feature parity with currently specified features and interfaces.  
OPEN-O
While the OPEN-O project in general has no objective to be compliant with NFV MANO, the NFVO component of OPEN-O is aligned with an ETSI reference model, and all key MANO elements, such as VNFM and VIM might be found in an NFVO architecture. Moreover, the scope of the OPEN-O project goes beyond just NFV orchestration, and as a result goes beyond the scope identified by the ETSI NFV MANO reference architecture. One important piece of this project relates to SDN-based networking services provisioning and orchestration, which might be further used either in conjunction with NFV services or as a standalone feature.
CORD
Since its invention, CORD has defined its own reference architecture and cross-component communication logic. The reference CORD implementation is very OpenFlow-centric around ONOS, the orchestration component (XOS), and whitebox hardware. Technically, most of the CORD building blocks might be mapped to MANO-defined NFVI, VIM and VNFM, but this is incidental; the overall architectural approach defined by ETSI MANO, as well as the appropriate reference points and interfaces were not considered in scope by the CORD community. Similar to OPEN-O, the scope of this project goes beyond just NFV services provisioning. Instead, NFV services provisioning is considered as one of the several possible use cases for the CORD platform.
Gigaspaces Cloudify
The original focus of the Cloudify platform was orchestration of application deployment in a cloud. Later, when the NFV use case emerged, the Telecom Edition of the Cloudify platform was delivered. This platform combines both NFVO and generic VNFM components of the MANO defined entities (fig. 2).

Figure 2 - Cloudify in relation to the NFV MANO reference architecture
By its very nature, Cloudify Blueprints might be considered as the NS and VNF catalog entities defined by MANO. Meanwhile, some interfaces and actions specified by the NFV IFA subgroup are not present or considered as out of scope for the Cloudify platform.  From this standpoint, you could say that Cloudify is aligned with the MANO reference architecture but not fully compliant.
Software architecture and components  
As you might expect, all NFV Orchestration solutions are complex integrated software platforms combined from multiple components.
OSM
The Open Source MANO (OSM) project consists of 3 basic components (fig. 3):

Figure 3 - OSM project architecture

The Service Orchestrator (SO), responsible for end-to-end service orchestration and provisioning. The SO stores the VNF definitions and NS catalogs, manages workflow of the service deployment and can query the status of already deployed services. OSM integrates the rift.io orchestration engine as an SO.
The Resource Orchestrator (RO) is used to provision services over a particular IaaS provider in a given location. At the time of this writing, the RO component is capable of deploying networking services over OpenStack, VMware, and OpenVIM. The SO and RO components can be jointly mapped to the NFVO entity in the ETSI MANO architecture.
The VNF Configuration and Abstraction (VCA) module performs the initial VNF configuration using Juju Charms. Considering this purpose, the VCA module can be considered as a generic VNFM with a limited feature set.

Additionally, OSM hosts the OpenVIM project, which is a lightweight VIM layer implementation suitable for small NFV deployments as an alternative to heavyweight OpenStack or VMware VIMs.
Most of the software components are developed in python, while SO, as a user facing entity, heavily relies on a JavaScript and NodeJS framework.
OPEN-O
From a general standpoint, the complete OPEN-O software architecture can be split into 5 component groups (Fig.4):

Figure 4 - OPEN-O project software architecture

Common service: Consists of shared services used by all other components.
Common TOSCA:  Provides TOSCA-related features such as NSD catalog management, NSD definition parsing, workflow execution, and so on; this component is based on the ARIA TOSCA project.
Global Service Orchestrator (GSO): As the name suggests, this group provides overall lifecycle management of the end-to-end service.
SDN Orchestrator (SDN-O): Provides abstraction and lifecycle management of SDN services; an essential piece of this block are the SDN drivers, which provide device-specific modules for communication with a particular device or SDN controller.
NFV Orchestrator (NFV-O): This group provides NFV services instantiation and lifecycle management.

The OPEN-O project uses a microservices-based architecture, and consists of more than 20 microservices. The central platform element is the Microservice Bus, which is the core microservice of the Common Service components group. Each platform component should register with this bus. During registration, each microservice specifies exposed APIs and endpoint addresses. As a result, the overall software architecture is flexible and can be easily extended with additional modules. OPEN-O Rel. 1 consists of both Java and python-based microservices.   
CORD/XOS
As mentioned above, CORD was introduced originally as an ONOS application, but grew into a standalone platform that covers both ONOS-managed SDN regions and service orchestration entities implemented by XOS.
Both ONOS and XOS provide a service framework to enable the Everything-as-a-Service (XaaS) concept. Thus, the reference CORD implementation consists of both a hardware Pod (consisting of whitebox switches and servers) and a software platform (such as ONOS or XOS with appropriate applications). From the software standpoint, the CORD platform implements an agent or driver-based approach in which XOS ensures that each registered driver used for a particular service is in an operational state (Fig. 5):

Figure 5 - CORD platform architecture
The CORD reference implementation consists of Java (ONOS and its applications) and Python (XOS) software stacks. Additionally, Ansible is heavily used by CORD for automation and configuration management.
Gigaspaces Cloudify
From a high-level perspective, the platform consists of several different pieces, as you can see in figure 6:

Figure 6 - Cloudify platform architecture

Cloudify Manager is the orchestrator that performs deployment and lifecycle management of the applications or NSDs described in the templates, called blueprints.
The Cloudify Agents are used to manage workflow execution via an appropriate plugin.

To provide overall lifecycle management, Cloudify integrates third-party components such as:

Elasticsearch, used as a data store of the deployment state, including runtime data and logs data coming from various platform components.
Logstash, used to process log information coming from platform components and agents.
Riemann, used as a policy engine to process runtime decisions about availability, SLA and overall monitoring.
RabbitMQ, used as an async transport for communication among all platform components, including remote agents.

The orchestration functionality itself is provided by the ARIA TOSCA project, which defines the TOSCA-based blueprint format and deployment workflow engine. Cloudify “native” components and plugins are python applications.
Approach for NSD definition
The Network Service Descriptor (NSD) specifies components and the relations between them to be deployed on the top of the IaaS during the NFV service instantiation. Orchestration platforms typically use some templating language to define NSDs. While the industry in general considers TOSCA as a de-facto standard to define NSDs, alternative approaches are also available across various platforms.
OSM
OSM follows the official MANO specification, which has definitions both for NSDs and VNF Descriptors (VNFDs). To define NSD templates, YAML-based documents are used. The NSD is processed by the OSM Service Orchestrator to instantiate a Network Service, which itself might include VNFs, Forwarding Graphs, and the Links between them. A VNFD is a deployment template that specifies a VNF in terms of deployment and operational behaviour requirements. Additionally, the VNFD specifies connections between Virtual Deployment Units (VDUs) using internal Virtual Links (VLs). Each VDU in the OSM representation relates to a VM or a container. OSM uses an archive format both for NSDs and VNFDs; the archive consists of the service/VNF description, initial configuration scripts and other auxiliary details. You can find more information about the OSM NSD/VNFD structure at the official website.
OPEN-O
In OPEN-O, TOSCA-based templates are used to describe the NS/VNF Package. Both the general TOSCA service profile and the more recent NFV profile can be used for NSDs/VNFDs, which are further packaged according to the Cloud Service Archive (CSAR) format.
A CSAR is a zip archive that contains at least two directories: TOSCA-Metadata and Definitions. The TOSCA-Metadata directory contains information that describes the content of the CSAR and is referred to as the TOSCA metafile. The Definitions directory contains one or more TOSCA Definitions documents, which contain definitions of the cloud application to be deployed during CSAR processing. More details about OPEN-O NSD/VNFD definitions can be found at the official website.
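As a rough illustration of that layout (the file names here are just placeholders):
ns-package.csar              (a zip archive)
├── TOSCA-Metadata/
│   └── TOSCA.meta           (describes the content of the CSAR)
└── Definitions/
    └── ns.yaml              (one or more TOSCA Definitions documents)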
CORD/XOS
To define a new CORD service, you need to define both TOSCA-based templates and Python-based software components. In particular, when adding a new service, depending on its nature, you might alter one or more of several platform elements:

TOSCA service definition files, appropriate models, specified as YAML text files
REST APIs models, specified in Python
XOS models, implemented as a django application
Synchronizers, used to ensure the service is instantiated correctly and transitions to the required state.

The overall service definition format is based on the TOSCA Simple Profile language specification and presented in the YAML format.
Gigaspaces Cloudify
To instantiate a service or application, Cloudify uses templates called “Blueprints” which are effectively orchestration and deployment plans. Blueprints are specified in the form of TOSCA YAML files  and describe the service topology as a set of nodes, relationships, dependencies, instantiation and configuration settings, monitoring, and maintenance. Other than the YAML itself, a Blueprint can include multiple external resources such as configuration and installation scripts (or Puppet Manifests, or Chef Recipes, and so on) and basically any other resource required to run the application. You can find more details about the structure of Blueprints here.
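To give a feel for the format, here is a minimal, purely illustrative blueprint skeleton; the node names are made up, and it uses only Cloudify's generic built-in node types rather than a specific IaaS plugin:
tosca_definitions_version: cloudify_dsl_1_3

imports:
  - http://www.getcloudify.org/spec/cloudify/3.4/types.yaml

node_templates:
  vnf_host:
    type: cloudify.nodes.Compute             # a real blueprint would typically use an IaaS plugin type here, e.g. an OpenStack server
  vnf_component:
    type: cloudify.nodes.SoftwareComponent   # the VNF software itself, installed via lifecycle scripts
    relationships:
      - type: cloudify.relationships.contained_in
        target: vnf_host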
VNFM and VIM support
NFV service deployment is performed on an appropriate IaaS, which itself is a set of virtualized compute, network and storage resources. The ETSI MANO reference architecture identifies a component to manage these virtualized resources. This component is referred to as the Virtual Infrastructure Manager (VIM). Traditionally, the open source community treats OpenStack/KVM as the “de-facto” standard VIM. However, an NFV service might span various VIM types and various hypervisors, so multi-VIM support is a common requirement for an orchestration engine.
Additionally, a separate element in a NFV MANO architecture is the VNF Manager, which is responsible for lifecycle management of the particular VNF. The VNFM component might be either generic, treating the VNF as a black box and performing similar operations for various VNFs, or there might be a vendor-specific VNFM that has unique capabilities for management of a given VNF. Both VIM and VNFM communication are performed via appropriate reference points, as defined by the NFV MANO architecture.
OSM
The OSM project was initially conceived as a multi-VIM platform, and at the time of this writing, it supports OpenStack, VMware and OpenVIM. OpenVIM is a lightweight VIM implementation that is effectively a Python wrapper around libvirt and a basic host networking configuration.
At the time of this writing, the OSM VCA has limited capabilities, but still can be considered a generic VNFM based on JuJu Charms. Further, it is possible to introduce support for vendor-specific VNFMs,  but additional development and integration efforts might be required on the Service Orchestrator (Rift.io) side.
OPEN-O
Release 1 of the  OPEN-O project supports only OpenStack as a VIM. This support is available as a Java-based driver for the NFVO component. For further releases, support for VMware as a VIM is planned.
The Open-O Rel.1 platform has a generic VNFM that is based on JuJu Charms. Furthermore, the pluggable architecture of the OPEN-O platform can support any vendor-specific VNFM, but additional development and integration efforts will be required.
CORD/XOS
At the time of this writing, the reference implementation of the CORD platform is architected around OpenStack as the platform to spawn NFV workloads. While there is no direct relationship to the NFV MANO architecture, the XOS orchestrator is responsible for VNF lifecycle management, and thus might be thought of as the entity that provides VNFM-like functions.
Gigaspaces Cloudify
When Cloudify was adapted for the NFV use case, it inherited plugins for OpenStack, VMware, Azure and others that were already available for general-purpose cloud deployments. So we can say that Cloudify has MultiVIM support and any arbitrary VIM support may be added via the appropriate plugin. Following Gigaspaces’ reference model for NFV, there is a  generic VNFM that can be used with a Cloudify NFV orchestrator out of the box. Additional vendor-specific VNFM can be onboarded, but appropriate plugin development is required.
Capabilities to provision end-to-end service
NFV service provisioning consists of multiple steps, such as VNF instantiation, configuration, underlay network provisioning, and so on. Moreover, an NFV service might span multiple clouds and geographical locations. This kind of architecture requires complex workflow management by an NFV Orchestrator, and coordination and synchronisation between infrastructure entities. This section provides an overview of the various orchestrators' abilities to provision end-to-end service.
OSM
The OSM orchestration platform supports NFV service deployment spanning multiple VIMs. In particular, the OSM RO component (openmano) stores information about all VIMs available for deployment, while the Service Orchestrator can use this information during the NSD instantiation process. Meanwhile, underlay networking between VIMs should be preconfigured. There are plans to enable End-to-End network provisioning in future, but OSM Rel. 1 has no such capability.
OPEN-O
By design, the OPEN-O platform considers both NFV and SDN infrastructure regions that might be used to provision end-to-end service. So technically, you can say that a Multisite NFV service can be provisioned by the OPEN-O platform. However, the OPEN-O Rel.1 platform implements just a couple of specific use cases, and at the time of this writing, you can't use it to provision an arbitrary Multisite NFV service.
CORD/XOS
The reference implementation of the CORD platform defines the provisioning of a service over a defined CORD Pod. To enable Multisite NFV Service instantiation, an additional orchestration level on the top of CORD/XOS is required. So from this perspective, at the time of this writing, CORD is not capable of instantiating a Multisite NFV service.
Gigaspaces Cloudify
As Cloudify originally supported application deployment over multiple IaaS providers, technically it is possible to create a blueprint to deploy an NFV service that spans across multiple VIMs. However underlay network provisioning might require specific plugin development.
Interaction with standardization bodies and relevant communities
Each of the reviewed projects has strong industry community support. Depending on the nature of each community and the priorities of the project, there is a different focus on collaboration with an industry, other open source projects and standardization bodies.
OSM
Being hosted by ETSI, the OSM project closely collaborates with the ETSI NFV working group and follows the appropriate specifications, reference points and interfaces. At the time of this writing there is no formal collaboration between OSM and the OPNFV project, but it is under consideration by the OSM community. The same goes for other relevant open source projects, such as OpenStack and OpenDaylight; these projects are used "as-is" by the OSM platform without cross collaboration.
OPEN-O
The OPEN-O project aims to integrate both SDN and NFV solutions to provide end-to-end service, so there is formal communication with the ETSI NFV group, while the project itself doesn't strictly follow the interfaces defined by the ETSI NFV IFA working group. On the other hand, there is a strong integration effort with the OPNFV community via the initiation of the OPERA project, which aims to integrate the OPEN-O platform as a MANO orchestrator for the OPNFV platform. Additionally, there is strong interaction between OPEN-O and MEF as a part of the OpenLSO platform, and with the ONOS project towards seamless integration and enabling end-to-end SDN orchestration.
CORD/XOS
Having originated at ON.Lab (which recently merged with the ONF), this project follows the approach and technology stack defined by ONF. As of the time of this writing, the CORD project has no formal presence in OPNFV. Meanwhile, there is communication with MEF and ONF towards requirements gathering and use cases for the CORD project. In particular, MEF explicitly refers to E-CORD and its applicability when defining its OpenCS MEF project.
Gigaspaces Cloudify
While the Cloudify platform is an open source product, it is mostly developed by a single company, thus the overall roadmap and community strategy is defined by Gigaspaces. This also relates to any collaboration with standardisation bodies: GigaSpaces participates in ETSI-approved NFV PoCs where Cloudify is used as a service orchestrator, and in an MEF-initiated LSO Proof of Concept, where Cloudify is used to provision E-Line EVPL service, and so on.  Additionally, the Cloudify platform is used separately by the OPNFV community in the FuncTest project for vIMS test cases, but this mostly relates to Cloudify use cases, rather than vendor-initiated community collaboration.
Conclusions
Summarizing the current state of the NFV orchestration platforms, we may conclude the following:
The OSM platform is already suitable for evaluation purposes and has a relatively simple and straightforward architecture. Several sample NSDs and VNFDs are available for evaluation in the public Gerrit repo. As a result, the platform can be easily installed and integrated with an appropriate VIM to evaluate basic NFV capabilities, trial use cases and PoCs. The project is relatively young, however, and a number of features still require development and will only be available in upcoming releases. Furthermore, the lack of support for end-to-end NFV service provisioning across multiple regions, including underlay network provisioning, should be weighed against your desired use case. Considering the mature OSM community and its close interaction with the ETSI NFV group, this project might emerge as a viable option for production-grade NFV orchestration.
At the time of this writing, the main visible benefit of the OPEN-O platform is its flexible and extendable microservices-based architecture. The OPEN-O approach has considered end-to-end service provisioning spanning multiple SDN and NFV regions from the very beginning. Additionally, the OPEN-O project actively collaborates with the OPNFV community toward tight integration of the orchestrator with the OPNFV platform. Unfortunately, at the time of this writing, the OPEN-O platform requires further development before it is capable of arbitrary NFV service provisioning. Additionally, a lack of documentation makes it hard to understand the microservice logic and the interaction workflow. Meanwhile, the recent merge of OPEN-O and ECOMP under the ONAP project creates a powerful open source community with strong industry support, which may reshape the overall NFV orchestration market.
The CORD project is the right option when OpenFlow and white-box hardware are the primary choices for compute and networking infrastructure. The platform addresses multiple use cases, and a large community is involved in platform development. Meanwhile, at the time of this writing, the CORD platform is a relatively “niche” solution built around OpenFlow and related technologies pushed to the market by ONF.
GigaSpaces Cloudify is a platform that already has a relatively long history, and at the time of this writing it emerges as the most mature orchestration solution among the reviewed platforms. While the NFV use case wasn’t originally part of Cloudify’s design, its pluggable and extendable architecture and embedded workflow engine enable arbitrary NFV service provisioning. However, if you do consider Cloudify as an orchestration engine, be sure to weigh the risk of having the decision-making process regarding the overall platform strategy controlled solely by GigaSpaces.
References

OSM official website
OSM project wiki
OPEN-O project official website
CORD project official website
Cloudify platform official website
Network Functions Virtualisation (NFV); Management and Orchestration
Cloudify approach for NFV Management & Orchestration
ARIA TOSCA project
TOSCA Simple Profile Specification
TOSCA Simple Profile for Network Functions Virtualization
OPNFV OPERA project
OpenCS project
MEF OpenLSO and OpenCS projects
OPNFV vIMS functional testing
OSM Data Models; NSD and VNFD format
Cloudify Blueprint overview

The post What is the best NFV Orchestration platform? A review of OSM, Open-O, CORD, and Cloudify appeared first on Mirantis | Pure Play Open Cloud.
Quelle: Mirantis

People Are Pissed That Snapchat's Marie Curie Filter Adds A Full Face Of Makeup

“Didn’t realize eye makeup and false lashes were essential to being a baller female physicist.”

But some users have pointed out an odd detail in the Marie Curie filter: a smokey eye, false eyelashes, and complexion-smoothing makeup.

For anyone who isn’t familiar with Marie Curie, she was a Polish-born French physicist and chemist who won the Nobel Prize twice for research on radioactivity.

Curie won the 1903 Nobel Prize in Physics and the 1911 Nobel Prize in Chemistry. She ultimately died due to radiation exposure from her research, and many regard her as a hero in the field of science.

Given her work, many felt the makeup was unnecessary.


Quelle: BuzzFeed

Helping PTG attendees and other developers get to the OpenStack Summit

Although the OpenStack design events have changed, developers and operators still have a critical perspective to bring to the OpenStack Summits. At the PTG, a common whisper heard in the hallways was, “I really want to be at the Summit, but my [boss/HR/approver] doesn’t understand why I should be there.” To help you out, we took our original “Dear Boss” letter and made a few edits for the PTG crowd. If you’re a contributor or developer who wasn’t able to attend the PTG, with a few edits this letter can also work for you. (Not great with words? Foundation wordsmith Anne can help you out: anne at openstack.org)
 
Dear [Boss],
 
I would like to attend the OpenStack Summit in Boston, May 8-11, 2017. At the Pike Project Team Gathering (PTG) in Atlanta, I was able to learn more about the new development event model for OpenStack. In the past I attended the Summit to participate in the Design Summit, which encapsulated feedback and planning as well as the design and development work of creating OpenStack releases. One challenge was that the Design Summit did not leave enough time for “head down” work within upstream project teams (some teams ended up traveling to team-specific mid-cycle sprints to compensate). At the Pike PTG, we were able to kickstart Pike cycle development, working heads down for a full week. We made great progress on both single-project and OpenStack-wide goals, which will improve the software for all users, including our organization.
 
Originally, I, like many other devs, was under the impression that we no longer needed to attend the OpenStack Summit. However, after a week at the PTG, I see that I have a valuable role to play in the Summit’s “Forum” component. The Forum is where I can gather direct feedback and requirements from operators and users, and express my opinion, and our organization’s, about OpenStack’s future direction. The Forum will let me engage with other groups with similar challenges, project desires and solutions.
 
While our original intent may have been to send me only to the PTG, I would strongly encourage us to reconsider. The Summit is still an integral part of the OpenStack design process, and I think my attendance would benefit both my professional development and our organization. Because of my participation in the PTG, I received a free pass to the Summit, which I must redeem by March 14.
 
Thank you for considering my request.
[Your Name]
Quelle: openstack.org

Creative Service Catalog Descriptions in CloudForms

In this post, we will show you how to make your service catalog descriptions more elegant and flexible in Red Hat CloudForms. If you just type a description, along with a long description, you’ll get something like this:

 
This is fine; it’s informative and simple. But we could improve on it.
The Long Description field in a CloudForms catalog item can take raw HTML. This means we can add formatting such as font sizes and bold text. Here is a more complex catalog item, for example:

 
The Implementation
Really, there isn’t much of a limit on what you can do with these beyond your imagination and HTML skills. Just be aware that global style tags will not function in the self-service UI, but inline formatting works just fine. Some options, like the one above, might just add a bit of aesthetic sugar, but more complex services, especially those that will be presented to customers, can be complemented by an informative and attractive description.
For example, let’s say we have a service that provisions two virtual machines and a load balancer. We could use the following HTML:
<!DOCTYPE html>
<html>
<body>
<h1>Create 2 Virtual Machines under a Load balancer and configure Load Balancing rules
for the VMs</h1>
<p>This template allows you to create 2 Virtual Machines under a Load balancer and
configure a load balancing rule on Port 80. This template also deploys a Storage
Account, Virtual Network, Public IP address, Availability Set and Network
Interfaces.</p>
<p>In this template, we use the resource loops capability to create the network
interfaces and virtual machines</p>
</body>
</html>
Which would give us something like this:

 
Take a look at how it’s displayed in the self service UI, both when hovering over the information link:

 
And on the order page itself:

 
As you develop more complex services, the value of these features will become more and more apparent.
A More Complex Example
Let’s take a look at a service that deploys and configures multiple virtual machines in Microsoft Azure and sets up Ansible to manage them. Everything you need to create this service can be found on the GitHub page with the Orchestration Template, especially since we can automatically generate service dialogs from these templates. This code:
<!DOCTYPE html>
<html>
<body>
<h1>Advanced Linux Ansible Template: Setup Ansible to efficiently manage N Linux VMs</h1>
<p>This advanced template deploys N Linux VMs (Ubuntu) and it configures Ansible so you
can easily manage all the VMS . Don’t suffer more pain configuring and managing all
your VMs, just use Ansible! Ansible is a very powerful masterless configuration
management system based on SSH.</p>
<p>This template creates a storage account (Standard or Premium storage), a Virtual
Network, an Availability Sets (3 Fault Domains and 10 Update Domains), one private
NIC per VM, one public IP, a Load Balancer and you can specify SSH keys to access
your VMs remotely from your laptop. You will need an additional certificate / public
key for the Ansible configuration, before executing the template you have upload them
to a Private Azure storage account in a container named ssh.</p>
<p>The template uses two Custom Scripts:</p>
<ul>
<li>The first script configures SSH keys (public) in all the VMs for the Root user
so you can manage the VMS with ansible.</li>
<li>The second script installs ansible on a A1/DS1 Jumpbox VM so you can use it as a
controller.The script also deploys the provided certificate to /root/.ssh. Then,
it will execute an ansible playbook to create a RAID with all the available
disks.</li>
<li><p>Before you execute the script, you will need to create a PRIVATE storage
account and a container named ssh, and upload your certificate and public keys
for Ansible/ssh. </p>
<p>Once the template finishes, ssh into the AnsibleController VM (by default the
load balancer has a NAT rule using the port 64000), then you can manage your VMS
with Ansible and the root user. For instance: </p>
<pre><code>sudo su root
ansible all -m ping (to ping all the VMs)
or
ansible all -m setup (to show all VMs system info )
</code></pre></li>
</ul>
<p>This template also illustrates how to use Outputs and Tags.</p>
<ul>
<li>The template will generate an output with the fqdn of the new public IP so you
can easily connect to the Ansible VM.</li>
<li>The template will associate two tags to all the VMS : ServerRole (Webserver,
database etc) and ServerEnvironment (DEV,PRE,INT, PRO etc)</li>
</ul>
<h2>Known Issues and Limitations</h2>
<ul>
<li>Fixed number of data disks. This is due to a current limitation on the resource
manager; this template creates 2 data disks with ReadOnly Caching</li>
<li>Only the Ansible controller VM will be accessible via SSH.</li>
<li>Scripts are not yet idempotent and cannot handle updates.</li>
<li>Current version doesn’t use secured endpoints. If you are going to host
confidential data make sure that you secure the VNET by using Security
Groups.</li>
</ul>
</body>
</html>
Renders the following:

 
And this is how it looks in the self service UI order page:

 
Additional Notes
When developing descriptions like these, it can be a bit frustrating to have to edit and save your long descriptions just to see how your work is coming along. I like to use this online editor or the Try It editor from w3schools. That way you can see your results quickly and get close to what you’re looking for before building the catalog item in CloudForms. The site is also a great reference for HTML syntax. You can use these editors to build things like tables that describe your services to the service consumer as efficiently as possible, as in this example:

 
That’s the basic idea. The built-in ability of CloudForms to let service designers use HTML in their service descriptions gives us the tools we need to create more informative, as well as more professional-looking, catalog items. Try it out with some of your existing services, and I suspect you’ll find it’s quite easy to improve your overall presentation with very little effort.
Quelle: CloudForms

Why GPUs are taking over the enterprise

With IBM InterConnect just a few weeks away, we’re gearing up to showcase how the MapD graphics-processing-unit (GPU)-powered data analytics and visualization platform has evolved over the past year.
When MapD participated in InterConnect last year, we were just starting to ramp up. Our mission then, as now, remains unwavering: to provide the world’s fastest data exploration platform.
So much has changed. Since the last InterConnect, MapD launched its product offering and announced its A round of funding while steadily building momentum throughout the year, culminating in the December release of version 2.0 of the MapD Core database and Immerse visual analytics platform.
In the meantime, we were fortunate to pick up some prestigious awards including Gartner Cool Vendor, Fast Company Innovation by Design, The Business Intelligence Group’s Startup of the Year, CRN’s 10 Coolest Big Data Startups and Barclays Open Innovation Challenge.
One major reason for all this attention and praise is that GPUs are taking over the enterprise. We’ve written about this extensively in several blog posts, but GPUs are no longer a technology novelty item. They are mainstream, as Nvidia’s tripled data center revenues attest.
What makes MapD unique is that, from the ground up, we’ve built a querying engine and visual analytics platform that takes advantage of GPUs. With CPUs come the limitations of Moore’s Law. As more and more businesses place an emphasis on massive quantities of data, machine learning, mathematics, analytics and visualization, GPUs are poised to take over.
Here’s a simple example: say you want to run a query against a billion or more records, a pretty common request. With today’s legacy database solutions, you should plan a two-martini lunch, because that’s how long it’s going to take to run.
You better have your question nailed too, because if you want to modify it, you are going to want to eat dinner before you see the updated query again.
The experience is similar to the days of dial-up page load times, which wouldn’t be that problematic if we didn’t know what real speed felt like.
In the past year, we’ve been able to validate our claims in a series of powerful independent benchmarks published by noted database authority Mark Litwintschik. To date, we remain the fastest platform (by a factor of 75) that he has ever tested against the 1.2-billion-row New York City taxi dataset.
We are delighted to be speaking at IBM InterConnect again. It is a homecoming in many ways. We can’t wait to show you how transformative the MapD platform can be when tackling big data problems.
Check out the session “Speed, Scale and Visualization: How GPUs are Remaking the Analytics Landscape in the Cloud,” at InterConnect Thursday, 23 March, from 11:30 AM to 12:15 PM.
Attend this session and more by registering for IBM InterConnect 2017.
We’re looking forward to another great event in Las Vegas. If you’d like to set up an appointment or demo, send us an email at info@mapd.com.
The post Why GPUs are taking over the enterprise appeared first on news.
Quelle: Thoughts on Cloud

What’s new in OpenStack Ocata webinar — Q&A

The post What’s new in OpenStack Ocata webinar — Q&A appeared first on Mirantis | Pure Play Open Cloud.
On February 22, my colleagues Rajat Jain, Stacy Verroneau, and Michael Tillman and I held a webinar to discuss the new features in OpenStack&8217;s latest release, Ocata. Unfortunately, we ran out of time for questions and answers, so here they are.
Q: What are the benefits of using the cells capability?
Rajat: The cells concept was introduced in the Juno release, and as some of you may recall, it was intended to allow a large number of nova-compute instances to share OpenStack services.

Therefore, Cells functionality enables you to scale an OpenStack Compute cloud in a more distributed fashion without having to use complicated technologies like database and message queue clustering. It supports very large deployments.

When this functionality is enabled, the hosts in an OpenStack Compute cloud are partitioned into groups called cells. Cells are configured as a tree. The top-level cell should have a host that runs the nova-api service, but no nova-compute services. Each child cell should run all of the typical nova-* services of a regular Compute cloud except for nova-api. You can think of each cell as a normal Compute deployment, in that it has its own database server and message queue broker. These capabilities were originally provided by the nova-cells and nova-api services.
One of the key changes in Ocata is the upgrade to cells v2, which now relies only on the nova-api service for all synchronization across cells.
Q: What is the placement service and how can I leverage it?
Rajat: The placement service, which was introduced in the Newton release, is now a key, mandatory part of OpenStack for determining the optimal placement of VMs. Basically, you set up pools of resources, provide an inventory for the compute nodes, and then set up allocations against resource providers. You can then set up policies and models for optimal placement of VMs.
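To make this more concrete, here is a minimal sketch of how you might query and populate the Placement API directly, using plain Python with the requests library. The endpoint URL, token, and resource figures are placeholder assumptions, and the exact payload fields and microversion may differ in your deployment; Nova normally manages these records itself through its resource tracker, so talking to the API directly is mostly useful for inspection and troubleshooting.
import requests

# Placeholder assumptions: endpoint and token are not real values.
PLACEMENT_URL = "http://controller:8778"
HEADERS = {
    "X-Auth-Token": "ADMIN_TOKEN",              # token obtained from Keystone
    "OpenStack-API-Version": "placement 1.4",   # pin a microversion
}

# List the resource providers (typically one per compute node).
resp = requests.get(PLACEMENT_URL + "/resource_providers", headers=HEADERS)
providers = resp.json()["resource_providers"]
print([p["name"] for p in providers])

# Report an inventory for one provider: the VCPU, RAM, and disk it offers.
rp_uuid = providers[0]["uuid"]
inventory = {
    "resource_provider_generation": providers[0]["generation"],
    "inventories": {
        "VCPU":      {"total": 32, "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 131072},
        "DISK_GB":   {"total": 2000},
    },
}
requests.put(PLACEMENT_URL + "/resource_providers/" + rp_uuid + "/inventories",
             json=inventory, headers=HEADERS)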
Q: What is the OS profiler, and why is it useful?
Rajat: OpenStack consists of multiple projects. Each project, in turn, is composed of multiple services. To process a request (for example, booting a virtual machine), OpenStack uses multiple services from different projects. If something in this process runs slowly, it’s extremely complicated to understand what exactly went wrong and to locate the bottleneck.
To resolve this issue, a tiny but powerful library, osprofiler, was introduced. The osprofiler library is used by OpenStack projects and their Python clients. It provides the ability to generate one trace per request, flowing through all of the services involved. This trace can then be extracted and used to build a tree of calls, which can be quite handy for a variety of reasons (for example, isolating cross-project performance issues).
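As a rough illustration of how osprofiler is used from Python code (the HMAC key, trace names, and the function being traced are placeholder assumptions, and each OpenStack service must also have profiling enabled in its configuration for cross-service traces to appear):
from osprofiler import profiler

# Start a trace for this request; the HMAC key must match the key
# configured in the services you want the trace to flow through.
profiler.init(hmac_key="SECRET_KEY")

@profiler.trace("resize-image", info={"image": "cirros"})
def resize_image():
    # Work done here shows up as a single span in the trace tree.
    pass

resize_image()

# Trace points can also be added with a context manager.
with profiler.Trace("upload-image", info={"store": "swift"}):
    pass

# The base trace ID ties together all spans emitted by all services for
# this request; it is what you later use to extract and render the trace.
print(profiler.get().get_base_id())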
Q: If I have Keystone connected to a backend Active Directory, will I benefit from the auto-provisioning of the federated identity?
Rajat: Yes. The federated identity mapping engine now supports the ability to automatically provision projects for federated users. A role assignment will automatically be created for the user on the specified project. Prior to this, a federated user had to attempt to authenticate before an administrator could assign roles directly to their shadowed identity, resulting in a strange user experience. This is therefore a big usability enhancement for deployers leveraging the federated identity plugins.
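To give a feel for what this looks like, here is a minimal sketch of an auto-provisioning mapping rule, written as a plain Python data structure that you would upload to Keystone’s mapping API. The remote attribute (REMOTE_USER), the project naming pattern, and the role name are all assumptions that depend on your identity provider and deployment.
# Hypothetical Ocata-style auto-provisioning mapping rule.
# The "projects" block in the local rules is what triggers automatic
# project creation and role assignment for the federated user.
rules = [
    {
        "local": [
            {
                "user": {"name": "{0}"},
                "projects": [
                    {
                        "name": "Project for {0}",        # assumed naming pattern
                        "roles": [{"name": "_member_"}],  # assumed role name
                    }
                ],
            }
        ],
        "remote": [
            {"type": "REMOTE_USER"}  # attribute asserted by the IdP/SP
        ],
    }
]
When a user with a matching assertion authenticates, Keystone creates the project and the role assignment on the fly instead of waiting for an administrator.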
Q: Is FWaaS really used out there?
Stacy: Yes, it is, but its viability in production is debatable, and going with a third-party solution that provides a Neutron plugin is still, IMHO, the way to go.
Q: When is Octavia GA planned to be released?
Stacy: Octavia is forecast to be GA in the Pike release.
Q: Are DragonFlow and Tricircle ready for Production?
Stacy: Those are young Big Tent projects, but I’m pretty sure we will see a big evolution in Pike.
Q: What’s the codename for the placement service, please?
Stacy: It’s just called the Placement API. There’s no fancy name.
Q: Does Ocata continue support for Fernet tokens?
Rajat: Yes.
Q: With a federated provider, can I integrate my OpenStack environment with my on-premises AD and allow domain users to use OpenStack?
Rajat: This was always supported, and is not new to Ocata. More details at https://docs.openstack.org/admin-guide/identity-integrate-with-ldap.html
What’s new in this area is the auto-provisioning capability described above: the federated identity mapping engine now supports automatically provisioning projects for federated users, along with the corresponding role assignments, so an administrator no longer has to wait for a federated user’s shadowed identity to appear before assigning roles.

Q: If I’m using my existing domain users from AD with OpenStack, how would I control their rights/roles to perform specific tasks in an OpenStack project?
Rajat: You would first set up authentication via LDAP, providing connection settings for AD and setting the identity driver to ldap in keystone.conf. Next, you have to assign roles and projects to the AD users. Since Mitaka, the only option for the assignment driver in keystone.conf is the SQL driver, but you will have to do the mapping yourself. Most users prefer this approach anyway, as they want to keep AD read-only from the OpenStack side. You can find more details on how to configure Keystone with LDAP here.
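For the role-assignment step, a minimal sketch using python-keystoneclient might look like the following; the endpoint, credentials, user, project, and role names are all placeholder assumptions:
from keystoneauth1.identity import v3
from keystoneauth1 import session
from keystoneclient.v3 import client

# Authenticate as an admin (placeholder endpoint and credentials).
auth = v3.Password(auth_url="http://controller:5000/v3",
                   username="admin", password="ADMIN_PASS",
                   project_name="admin",
                   user_domain_name="Default",
                   project_domain_name="Default")
keystone = client.Client(session=session.Session(auth=auth))

# Look up the AD-backed user (read through the LDAP identity driver),
# the target project, and the role to grant. If users live in a separate
# domain, you may need to filter the lookup by that domain as well.
user = keystone.users.find(name="jdoe")
project = keystone.projects.find(name="dev-project")
role = keystone.roles.find(name="member")

# The assignment itself is stored in SQL, so it stays writable even
# though the AD/LDAP identity backend is read-only.
keystone.roles.grant(role, user=user, project=project)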
Q: What, if anything, was pushed out of the “big tent” and/or did not get robustly worked on?
Nick:  You can get a complete view of work done on every project at Stackalytics.
Q: So when is Tricircle being released for use in production?
Stacy: Not soon enough.  Being a new Big Tent project, it needs some time to develop traction.  
Q: Do we support creation of SR-IOV ports from Horizon during instance creation? If not, are there any plans there?
Nick: According to the Horizon team, you can pre-create the port and assign it to an instance.
Q: Way to go, warp speed Michael! Good job, Rajat and Stacy. Don’t worry about getting behind; I blame Nick anyway. Then again, I always blame Nick.
Nick: Thanks Ben, I appreciate you, too.

Cognitive computing and analytics come to mobile solutions for employees

The Drum caught up with Gareth Mackown, partner and European mobile leader at IBM Global Business Services, at the Mobile World Congress this week in Barcelona to ask him how mobile solutions are becoming more vital not only for an enterprise’s customers, but also for its employees.
“Today, organizations are really being defined by the experiences they create,” Mackown said in an interview. “Often, you think of that in terms of customers, but more and more we’re seeing employee experience being a really defining factor.”
IBM partnered with Apple to transform employee experiences through mobility, he said, and it’s just getting started. Internet of Things (IoT) technology, cognitive computing and analytics will make those mobile solutions “even more critical” for people working in all kinds of different fields.
Mackown pointed to the new IBM partnership with Santander, announced at Mobile World Congress. “We’re helping them design and develop a suite of business apps to help them transform the employee experience they have for their business customers.”
The video below includes the interview with Mackown, along with mobile business leaders from several other large companies.

Find out more in The Drum’s full article.
The post Cognitive computing and analytics come to mobile solutions for employees appeared first on news.
Quelle: Thoughts on Cloud