Self-monitoring machines define the new industrial era

Cognitive and cloud technologies have made a new industrial era possible.
The Schaeffler Group uses big-data analytics to turn data into valuable insights that help us increase operational efficiency and develop innovative services. Using IoT data from sensors, we can react more quickly, make better decisions and understand more about how to optimize product and equipment performance. Our machines can monitor themselves. This new industrial era is made possible by our leading products and innovation, combined with strategic partnerships such as the one we have formed with IBM.
The Schaeffler Group is one of the world's leading integrated automotive and industrial suppliers and one of the world's largest family-owned companies. We develop, manufacture and service precision-engineered products that are used in wind turbines, automobiles, trains and aircraft, among other applications. To execute on our "Mobility for Tomorrow" strategy, we are transforming our entire organization through digitalization.
As a technical foundation, Schaeffler has created a digital platform that connects products across all stages of their life cycle with the related processes and machines. This generates large amounts of data, allowing us to improve value for our customers.
We chose a strategic relationship with IBM because of the company's industry knowledge, our shared engineering DNA and IBM's broad portfolio, which spans consulting to technology solutions. We use a cloud solution built on IBM Bluemix, agile development and access to all IBM Watson services.
The external focus of the transformation is optimizing maintenance and total cost of ownership (TCO) in the wind energy sector, digitized monitoring and optimization of trains, providing smart elements for autonomous vehicles, and more. You can read details about Schaeffler's upcoming plans here or here and learn more about the Schaeffler and IBM partnership in this video.
Source: Thoughts on Cloud

Kubernetes Replication Controller, Replica Set and Deployments: Understanding replication options

As a container management tool, Kubernetes was designed to orchestrate multiple containers and replication, and in fact there are currently several ways to do it. In this article, we'll look at three options: Replication Controllers, Replica Sets, and Deployments.
What is Kubernetes replication for?
Before we go into how you would do replication, let's talk about why. Typically you would want to replicate your containers (and thereby your applications) for several reasons, including:

Reliability: By running multiple instances of an application, you prevent problems if one or more of them fails.  This is particularly true if the system replaces any containers that fail.
Load balancing: Having multiple instances of a container enables you to easily send traffic to different instances to prevent overloading of a single instance or node. This is something that Kubernetes does out of the box, making it extremely convenient.
Scaling: When load does become too much for the number of existing instances, Kubernetes enables you to easily scale up your application, adding additional instances as needed.

Replication is appropriate for numerous use cases, including:

Microservices-based applications: In these cases, multiple small applications provide very specific functionality.
Cloud native applications: Because cloud-native applications are built on the assumption that any component can fail at any time, a replicated environment is a natural fit for them, with multiple instances baked into the architecture.
Mobile applications: Mobile applications can often be architected so that the mobile client interacts with an isolated version of the server application.

Kubernetes has multiple ways in which you can implement replication.
Types of Kubernetes replication
In this article, we'll discuss three different forms of replication: the Replication Controller, Replica Sets, and Deployments.
Replication Controller
The Replication Controller is the original form of replication in Kubernetes.  It's being replaced by Replica Sets, but it's still in wide use, so it's worth understanding what it is and how it works.

A Replication Controller is a structure that enables you to easily create multiple pods, then make sure that that number of pods always exists. If a pod does crash, the Replication Controller replaces it.

Replication Controllers also provide other benefits, such as the ability to scale the number of pods, and to update or delete multiple pods with a single command.

You can create a Replication Controller with an imperative command, or declaratively, from a file.  For example, create a new file called rc.yaml and add the following text:
apiVersion: v1
kind: ReplicationController
metadata:
 name: soaktestrc
spec:
 replicas: 3
 selector:
   app: soaktestrc
 template:
   metadata:
     name: soaktestrc
     labels:
       app: soaktestrc
   spec:
     containers:
      - name: soaktestrc
        image: nickchase/soaktest
        ports:
        - containerPort: 80
Most of this structure should look familiar: we've got the name of the actual Replication Controller (soaktestrc), and we're designating that we should have 3 replicas, each of which is defined by the pod template.  The selector defines how the Replication Controller knows which pods belong to it.

Now tell Kubernetes to create the Replication Controller based on that file:
# kubectl create -f rc.yaml
replicationcontroller "soaktestrc" created
Let's take a look at what we have using the describe command:
# kubectl describe rc soaktestrc
Name:           soaktestrc
Namespace:      default
Image(s):       nickchase/soaktest
Selector:       app=soaktestrc
Labels:         app=soaktestrc
Replicas:       3 current / 3 desired
Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type   Reason                   Message
 ---------     --------        -----   ----                            -------------   ----   ------           -------
 1m            1m              1       {replication-controller }                       Normal SuccessfulCreate Created pod: soaktestrc-g5snq
 1m            1m              1       {replication-controller }                       Normal SuccessfulCreate Created pod: soaktestrc-cws05
 1m            1m              1       {replication-controller }                       Normal SuccessfulCreate Created pod: soaktestrc-ro2bl
As you can see, we've got the Replication Controller, and there are 3 replicas of the 3 that we wanted.  All 3 of them are currently running.  You can also see the individual pods listed underneath, along with their names.  If you ask Kubernetes to show you the pods, you can see those same names show up:
# kubectl get pods
NAME               READY     STATUS    RESTARTS   AGE
soaktestrc-cws05   1/1       Running   0          3m
soaktestrc-g5snq   1/1       Running   0          3m
soaktestrc-ro2bl   1/1       Running   0          3m
Next we'll look at Replica Sets, but first let's clean up:
# kubectl delete rc soaktestrc
replicationcontroller "soaktestrc" deleted

# kubectl get pods
As you can see, when you delete the Replication Controller, you also delete all of the pods that it created.
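Incidentally, the scale and delete operations mentioned earlier are ordinary kubectl verbs, and delete also supports a non-cascading mode. A minimal sketch, assuming the same soaktestrc controller and a kubectl client of this era (the --cascade flag's behavior may vary between versions):
# kubectl scale rc soaktestrc --replicas=5
# kubectl delete rc soaktestrc --cascade=false
With --cascade=false, the pods are orphaned instead of being deleted along with the controller, which can be useful if you want another controller with a matching selector to adopt them.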
Replica Sets
Replica Sets are a sort of hybrid, in that they are in some ways more powerful than Replication Controllers, and in others they are less powerful.

Replica Sets are declared in essentially the same way as Replication Controllers, except that they have more options for the selector.  For example, we could create a Replica Set like this:
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
 name: soaktestrs
spec:
 replicas: 3
 selector:
   matchLabels:
     app: soaktestrs
 template:
   metadata:
     labels:
       app: soaktestrs
        environment: dev
    spec:
      containers:
      - name: soaktestrs
        image: nickchase/soaktest
        ports:
        - containerPort: 80
In this case, it's more or less the same as when we were creating the Replication Controller, except that we're using matchLabels instead of a bare label map.  But we could just as easily have said:

spec:
 replicas: 3
 selector:
    matchExpressions:
      - {key: app, operator: In, values: [soaktestrs, soaktest]}
      - {key: tier, operator: NotIn, values: [production]}
 template:
   metadata:

In this case, we're looking at two different conditions (which you can also test directly with kubectl, as shown after the list):

The app label must be soaktestrs or soaktest
The tier label (if it exists) must not be production
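If you want to see which pods such a set-based selector would actually match, kubectl accepts the same expression syntax on the command line. A quick sketch, assuming the labels used in this article:
# kubectl get pods -l 'app in (soaktest,soaktestrs),tier notin (production)'
This is a handy way to sanity-check a selector before creating the Replica Set.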

Let's go ahead and create the Replica Set and get a look at it:
# kubectl create -f replicaset.yaml
replicaset "soaktestrs" created

# kubectl describe rs soaktestrs
Name:           soaktestrs
Namespace:      default
Image(s):       nickchase/soaktest
Selector:       app in (soaktest,soaktestrs),tier notin (production)
Labels:         app=soaktestrs
Replicas:       3 current / 3 desired
Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type    Reason                   Message
 ---------     --------        -----   ----                            -------------   ----    ------           -------
 1m            1m              1       {replicaset-controller }                        Normal  SuccessfulCreate Created pod: soaktestrs-it2hf
 1m            1m              1       {replicaset-controller }                       Normal  SuccessfulCreate Created pod: soaktestrs-kimmm
 1m            1m              1       {replicaset-controller }                        Normal  SuccessfulCreate Created pod: soaktestrs-8i4ra

# kubectl get pods
NAME               READY     STATUS    RESTARTS   AGE
soaktestrs-8i4ra   1/1       Running   0          1m
soaktestrs-it2hf   1/1       Running   0          1m
soaktestrs-kimmm   1/1       Running   0          1m
As you can see, the output is pretty much the same as for a Replication Controller (except for the selector), and for most intents and purposes, they are similar.  The major difference is that the rolling-update command works with Replication Controllers, but won't work with a Replica Set.  This is because Replica Sets are meant to be used as the backend for Deployments.

Let's clean up before we move on.
# kubectl delete rs soaktestrs
replicaset "soaktestrs" deleted

# kubectl get pods
Again, the pods that were created are deleted when we delete the Replica Set.
Deployments
Deployments are intended to replace Replication Controllers.  They provide the same replication functions (through Replica Sets) and also the ability to roll out changes and roll them back if necessary.
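Those rollout features are driven by the kubectl rollout subcommands. As a sketch (using the soaktest Deployment we're about to create; output formats vary by version):
# kubectl rollout status deployment/soaktest
# kubectl rollout history deployment/soaktest
# kubectl rollout undo deployment/soaktest
Here, status waits for an in-progress rollout to complete, history lists earlier revisions, and undo rolls the Deployment back to the previous revision.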

Let's create a simple Deployment using the same image we've been using.  First create a new file, deployment.yaml, and add the following:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
 name: soaktest
spec:
 replicas: 5
 template:
   metadata:
     labels:
       app: soaktest
   spec:
     containers:
      - name: soaktest
        image: nickchase/soaktest
        ports:
        - containerPort: 80
Now go ahead and create the Deployment:
# kubectl create -f deployment.yaml
deployment "soaktest" created
Now let's go ahead and describe the Deployment:
# kubectl describe deployment soaktest
Name:                   soaktest
Namespace:              default
CreationTimestamp:      Sun, 05 Mar 2017 16:21:19 +0000
Labels:                 app=soaktest
Selector:               app=soaktest
Replicas:               5 updated | 5 total | 5 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:         <none>
NewReplicaSet:          soaktest-3914185155 (5/5 replicas created)
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type    Reason                   Message
 ---------     --------        -----   ----                            -------------   ----    ------           -------
 38s           38s             1       {deployment-controller }                        Normal  ScalingReplicaSet        Scaled up replica set soaktest-3914185155 to 3
 36s           36s             1       {deployment-controller }                        Normal  ScalingReplicaSet        Scaled up replica set soaktest-3914185155 to 5
As you can see, rather than listing the individual pods, Kubernetes shows us the Replica Set.  Notice that the name of the Replica Set is the Deployment name and a hash value.

A complete discussion of updates is out of scope for this article (we'll cover it in the future), but there are a couple of interesting things to note here:

The StrategyType is RollingUpdate. This value can also be set to Recreate.
By default we have a minReadySeconds value of 0; we can change that value if we want pods to be up and running for a certain amount of time (say, to load resources) before they're truly considered "ready".
The RollingUpdateStrategy shows that we have a maxUnavailable of 1, meaning that while we're updating the Deployment, at most one pod can be missing before it's replaced, and a maxSurge of 1, meaning we can run one extra pod as the new pods come up (see the sketch just below for how to set these values explicitly).
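If the defaults don't suit you, these values can be set explicitly in the Deployment spec. A minimal sketch of the relevant fields (the numbers here are illustrative, not the values used in this article):
spec:
 replicas: 5
 minReadySeconds: 10
 strategy:
   type: RollingUpdate
   rollingUpdate:
     maxUnavailable: 1
     maxSurge: 2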

As you can see, the Deployment is backed, in this case, by Replica Set soaktest-3914185155. If we go ahead and look at the list of actual pods…
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3914185155-7gyja   1/1       Running   0          2m
soaktest-3914185155-lrm20   1/1       Running   0          2m
soaktest-3914185155-o28px   1/1       Running   0          2m
soaktest-3914185155-ojzn8   1/1       Running   0          2m
soaktest-3914185155-r2pt7   1/1       Running   0          2m
… you can see that their names consist of the Replica Set name and an additional identifier.
Passing environment information: identifying a specific pod
Before we look at the different ways that we can affect replicas, let's set up our Deployment so that we can see which pod we're actually hitting with a particular request.  Conveniently, the image we've been using displays the pod name in its output:
<?php
$limit = $_GET['limit'];
if (!isset($limit)) $limit = 250;
for ($i = 0; $i < $limit; $i++){
    $d = tan(atan(tan(atan(tan(atan(tan(atan(tan(atan(123456789.123456789))))))))));
}
echo "Pod ".$_SERVER['POD_NAME']." has finished!\n";
?>
As you can see, we're displaying an environment variable, POD_NAME.  Since each container is essentially its own server, this will display the name of the pod that executes the PHP.

Now we just have to pass that information to the pod.

We do that through the use of the Kubernetes Downward API, which lets us pass environment variables into the containers:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
 name: soaktest
spec:
 replicas: 3
 template:
   metadata:
     labels:
       app: soaktest
   spec:
     containers:
      - name: soaktest
        image: nickchase/soaktest
        ports:
        - containerPort: 80
        env:
        - name: POD_NAME
         valueFrom:
           fieldRef:
             fieldPath: metadata.name
As you can see, we're passing an environment variable and assigning it a value from the pod's metadata.  (You can find more information on metadata here.)
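The Downward API can expose more than just the pod name. For example, the pod's namespace and IP address are also available through fieldRef; here is a sketch of additional entries you could append to the same env list (field paths as defined by the Downward API; check that your cluster version supports them):
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP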

So let's go ahead and clean up the Deployment we created earlier…
# kubectl delete deployment soaktest
deployment "soaktest" deleted

# kubectl get pods
… and recreate it with the new definition:
# kubectl create -f deployment.yaml
deployment "soaktest" created
Next let's go ahead and expose the pods to outside network requests so we can call the nginx server that is inside the containers:
# kubectl expose deployment soaktest --port=80 --target-port=80 --type=NodePort
service "soaktest" exposed
Now let's describe the service we just created so we can find out what port the Deployment is listening on:
# kubectl describe services soaktest
Name:                   soaktest
Namespace:              default
Labels:                 app=soaktest
Selector:               app=soaktest
Type:                   NodePort
IP:                     11.1.32.105
Port:                   <unset> 80/TCP
NodePort:               <unset> 30800/TCP
Endpoints:              10.200.18.2:80,10.200.18.3:80,10.200.18.4:80 + 2 more…
Session Affinity:       None
No events.
As you can see, the NodePort is 30800 in this case; in your case it will be different, so make sure to check.  That means that each of the nodes involved is listening on port 30800, with requests forwarded to port 80 of the containers.  So we can call the PHP script with:
http://[HOST_NAME OR HOST_IP]:[PROVIDED PORT]
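If you'd rather not pick the NodePort out of the describe output by eye, jsonpath can extract it directly. A sketch, using the service name created above:
# kubectl get service soaktest -o jsonpath='{.spec.ports[0].nodePort}'
In this environment that should print 30800.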
In my case, I've mapped the IP addresses of my Kubernetes hosts to hostnames to make my life easier, and the PHP file is the default for nginx, so I can simply call:
# curl http://kube-2:30800
Pod soaktest-3869910569-xnfme has finished!
So as you can see, this time the request was served by pod soaktest-3869910569-xnfme.
Recovering from crashes: Creating a fixed number of replicas
Now that we know everything is running, let's take a look at some replication use cases.

The first thing we think of when it comes to replication is recovering from crashes. If there are 5 (or 50, or 500) copies of an application running, and one or more crashes, it's not a catastrophe.  Kubernetes improves the situation further by ensuring that if a pod goes down, it's replaced.

Let's see this in action.  Start by refreshing our memory about the pods we've got running:
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-qqwqc   1/1       Running   0          11m
soaktest-3869910569-qu8k7   1/1       Running   0          11m
soaktest-3869910569-uzjxu   1/1       Running   0          11m
soaktest-3869910569-x6vmp   1/1       Running   0          11m
soaktest-3869910569-xnfme   1/1       Running   0          11m
If we repeatedly call the Deployment, we can see that we get different pods on a random basis:
# curl http://kube-2:30800
Pod soaktest-3869910569-xnfme has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-x6vmp has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-uzjxu has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-x6vmp has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-uzjxu has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-qu8k7 has finished!
To simulate a pod crashing, let's go ahead and delete one:
# kubectl delete pod soaktest-3869910569-x6vmp
pod "soaktest-3869910569-x6vmp" deleted

# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-516kx   1/1       Running   0          18s
soaktest-3869910569-qqwqc   1/1       Running   0          27m
soaktest-3869910569-qu8k7   1/1       Running   0          27m
soaktest-3869910569-uzjxu   1/1       Running   0          27m
soaktest-3869910569-xnfme   1/1       Running   0          27m
As you can see, pod *x6vmp is gone, and it's been replaced by *516kx.  (You can easily find the new pod by looking at the AGE column.)
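If you'd like to watch the replacement happen in real time instead of re-running get pods, kubectl can stream changes. A sketch, assuming the same app=soaktest label:
# kubectl get pods -l app=soaktest -w
The -w (watch) flag keeps the command running and prints a new line each time a pod's status changes, so you can see the old pod terminate and its replacement come up.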

If we once again call the Deployment, we can (eventually) see the new pod:
# curl http://kube-2:30800
Pod soaktest-3869910569-516kx has finished!
Now let's look at changing the number of pods.
Scaling up or down: Manually changing the number of replicas
One common task is to scale up a Deployment in response to additional load. Kubernetes has autoscaling, but we'll talk about that in another article.  For now, let's look at how to do this task manually.

The most straightforward way is to simply use the scale command:
# kubectl scale --replicas=7 deployment/soaktest
deployment "soaktest" scaled

# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-2w8i6   1/1       Running   0          6s
soaktest-3869910569-516kx   1/1       Running   0          11m
soaktest-3869910569-qqwqc   1/1       Running   0          39m
soaktest-3869910569-qu8k7   1/1       Running   0          39m
soaktest-3869910569-uzjxu   1/1       Running   0          39m
soaktest-3869910569-xnfme   1/1       Running   0          39m
soaktest-3869910569-z4rx9   1/1       Running   0          6s
In this case, we specify a new number of replicas, and Kubernetes adds enough to bring it to the desired level, as you can see.

One thing to keep in mind is that Kubernetes isn't going to scale the Deployment down to be below the level at which you first started it up.  For example, if we try to scale back down to 4…
# kubectl scale --replicas=4 -f deployment.yaml
deployment "soaktest" scaled

# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-l5wx8   1/1       Running   0          11s
soaktest-3869910569-qqwqc   1/1       Running   0          40m
soaktest-3869910569-qu8k7   1/1       Running   0          40m
soaktest-3869910569-uzjxu   1/1       Running   0          40m
soaktest-3869910569-xnfme   1/1       Running   0          40m
… Kubernetes only brings us back down to 5, because that's what was specified by the original deployment.
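You can also scale declaratively by editing the replicas value in deployment.yaml and re-applying the file; a sketch (note that because we created the Deployment with kubectl create rather than kubectl apply, the first apply may warn about a missing last-applied-configuration annotation):
# kubectl apply -f deployment.yaml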
Deploying a new version: Replacing replicas by changing their label
Another way you can use Deployments is to make use of the selector.  In other words, if a Deployment controls all the pods with a tier value of dev, changing a pod's tier label to prod will remove it from the Deployment's sphere of influence.

This mechanism enables you to selectively replace individual pods. For example, you might move pods from a dev environment to a production environment, or you might do a manual rolling update, updating the image, then removing some fraction of pods from the Deployment; when they're replaced, it will be with the new image. If you're happy with the changes, you can then replace the rest of the pods.

Let's see this in action.  As you recall, this is our Deployment:
# kubectl describe deployment soaktest
Name:                   soaktest
Namespace:              default
CreationTimestamp:      Sun, 05 Mar 2017 19:31:04 +0000
Labels:                 app=soaktest
Selector:               app=soaktest
Replicas:               3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:         <none>
NewReplicaSet:          soaktest-3869910569 (3/3 replicas created)
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type              Reason                  Message
 ---------     --------        -----   ----                            -------------   ----              ------                  -------
 50s           50s             1       {deployment-controller }                        Normal            ScalingReplicaSet       Scaled up replica set soaktest-3869910569 to 3
And here is the Replica Set behind it, which lists the pods it created:
# kubectl describe replicaset soaktest-3869910569
Name:           soaktest-3869910569
Namespace:      default
Image(s):       nickchase/soaktest
Selector:       app=soaktest,pod-template-hash=3869910569
Labels:         app=soaktest
               pod-template-hash=3869910569
Replicas:       5 current / 5 desired
Pods Status:    5 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type              Reason                  Message
 ---------     --------        -----   ----                            -------------   ----              ------                  -------
 2m            2m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-0577c
 2m            2m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-wje85
 2m            2m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-xuhwl
 1m            1m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-8cbo2
 1m            1m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-pwlm4
We can also get a list of pods by label:
# kubectl get pods -l app=soaktest
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-0577c   1/1       Running   0          7m
soaktest-3869910569-8cbo2   1/1       Running   0          6m
soaktest-3869910569-pwlm4   1/1       Running   0          6m
soaktest-3869910569-wje85   1/1       Running   0          7m
soaktest-3869910569-xuhwl   1/1       Running   0          7m
So those are our original soaktest pods; what if we wanted to add a new label?  We can do that on the command line:
# kubectl label pods soaktest-3869910569-xuhwl experimental=true
pod "soaktest-3869910569-xuhwl" labeled

# kubectl get pods -l experimental=true
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-xuhwl   1/1       Running   0          14m
So now we have one experimental pod.  But since the experimental label has nothing to do with the selector for the Deployment, it doesn't affect anything.

So what if we change the value of the app label, which the Deployment is looking at?
# kubectl label pods soaktest-3869910569-wje85 app=notsoaktest --overwrite
pod "soaktest-3869910569-wje85" labeled
In this case, we need to use the --overwrite flag because the app label already exists. Now let's look at the existing pods.
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-0577c   1/1       Running   0          17m
soaktest-3869910569-4cedq   1/1       Running   0          4s
soaktest-3869910569-8cbo2   1/1       Running   0          16m
soaktest-3869910569-pwlm4   1/1       Running   0          16m
soaktest-3869910569-wje85   1/1       Running   0          17m
soaktest-3869910569-xuhwl   1/1       Running   0          17m
As you can see, we now have six pods instead of five, with a new pod having been created to replace *wje85, which was removed from the deployment. We can see the changes by requesting pods by label:
# kubectl get pods -l app=soaktest
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-0577c   1/1       Running   0          17m
soaktest-3869910569-4cedq   1/1       Running   0          20s
soaktest-3869910569-8cbo2   1/1       Running   0          16m
soaktest-3869910569-pwlm4   1/1       Running   0          16m
soaktest-3869910569-xuhwl   1/1       Running   0          17m
Now, there is one wrinkle that you have to take into account; because we've removed this pod from the Deployment, the Deployment no longer manages it.  So if we were to delete the Deployment…
# kubectl delete deployment soaktest
deployment "soaktest" deleted
The pod remains:
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-wje85   1/1       Running   0          19m
You can also easily replace all of the pods in a Deployment using the --all flag, as in:
# kubectl label pods --all app=notsoaktesteither --overwrite
But remember that you'll have to delete them all manually!
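For example, once the relabeled pods have been orphaned, you could remove them in one shot by selecting on the new label value; a sketch using the value from the command above:
# kubectl delete pods -l app=notsoaktesteither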
Conclusion
Replication is a large part of Kubernetes' purpose in life, so it's no surprise that we've just scratched the surface of what it can do, and how to use it. It is useful for reliability purposes, for scalability, and even as a basis for your architecture.

What do you anticipate using replication for, and what would you like to know more about? Let us know in the comments!
Source: Mirantis

People Are Pissed That Snapchat's Marie Curie Filter Adds A Full Face Of Makeup

"Didn't realize eye makeup and false lashes were essential to being a baller female physicist…"

But some users have pointed out an odd detail in the Marie Curie filter: a smokey eye, false eyelashes, and complexion-smoothing makeup.

Twitter: @Selfies_AndCats

For anyone who isn’t familiar with Marie Curie, she was a Polish-born French physicist and chemist who won the Nobel Prize twice for research on radioactivity.

Curie won the 1903 Nobel Prize in Physics and the 1911 Nobel Prize in Chemistry. She ultimately died due to radiation exposure from her research, and many regard her as a hero in the field of science.

mlahanas.de / Via commons.wikimedia.org

Given her work, many felt the makeup was unnecessary.

Twitter: @MarrowNator


Source: BuzzFeed

Helping PTG attendees and other developers get to the OpenStack Summit

Although the OpenStack design events have changed, developers and operators still have a critical perspective to bring to the OpenStack Summits. At the PTG, a common whisper heard in the hallways was, "I really want to be at the Summit, but my [boss/HR/approver] doesn't understand why I should be there." To help you out, we took our original "Dear Boss" letter and made a few edits for the PTG crowd. If you're a contributor or developer who wasn't able to attend the PTG, with a few edits, this letter can also work for you. (Not great with words? Foundation wordsmith Anne can help you out: anne at openstack.org.)
 
Dear [Boss],
 
I would like to attend the OpenStack Summit in Boston, May 8-11, 2017. At the Pike Project Team Gathering in Atlanta (PTG), I was able to learn more about the new development event model for OpenStack. In the past I attended the Summit to participate in the Design Summit, which encapsulated the feedback and planning as well as design and development of creating OpenStack releases. One challenge was that the Design Summit did not leave enough time for “head down” work within upstream project teams (some teams ended up traveling to team-specific mid-cycle sprints to compensate for that). At the Pike PTG, we were able to kickstart the Pike cycle development, working heads down for a full week. We made great progress on both single project and OpenStack-wide goals, which will improve the software for all users, including our organization.
 
Originally, I––and many other devs––were under the impression that we no longer needed to attend the OpenStack Summit. However, after a week at the PTG, I see that I have a valuable role to play at the Summit’s “Forum” component. The Forum is where I can gather direct feedback and requirements from operators and users, and express my opinion and our organization’s about OpenStack’s future direction. The Forum will let me engage with other groups with similar challenges, project desires and solutions.
 
While our original intent may have been to send me only to the PTG, I would strongly like us to reconsider. The Summit is still an integral part of the OpenStack design process, and I think my attendance is beneficial to both my professional development and our organization. Because of my participation in the PTG, I received a free pass to the Summit, which I must redeem by March 14.      
 
Thank you for considering my request.
[Your Name]
Quelle: openstack.org

What’s new in OpenStack Ocata webinar — Q&A

The post What’s new in OpenStack Ocata webinar — Q&A appeared first on Mirantis | Pure Play Open Cloud.
On February 22, my colleagues Rajat Jain, Stacy Verroneau, and Michael Tillman and I held a webinar to discuss the new features in OpenStack’s latest release, Ocata. Unfortunately, we ran out of time for questions and answers, so here they are.
Q: What are the benefits of using the cells capability?
Rajat: The cells concept was introduced in the Juno release, and as some of you may recall, it was intended to allow a large number of nova-compute instances to share OpenStack services.

Therefore, Cells functionality enables you to scale an OpenStack Compute cloud in a more distributed fashion without having to use complicated technologies like database and message queue clustering. It supports very large deployments.

When this functionality is enabled, the hosts in an OpenStack Compute cloud are partitioned into groups called cells. Cells are configured as a tree. The top-level cell should have a host that runs a nova-api service, but no nova-compute services. Each child cell should run all of the typical nova-* services of a regular Compute cloud except for nova-api. You can think of cells as a normal Compute deployment in that each cell has its own database server and message queue broker. In cells v1, these capabilities were provided by the nova-cells service working alongside nova-api.
One of the key changes in Ocata is the upgrade to cells v2, which now relies only on the nova-api service for all synchronization across cells.
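To make this concrete, here is a minimal sketch of the cells v2 housekeeping an operator typically runs with nova-manage; the cell name is a placeholder, and the transport URL and database connection are assumed to be read from nova.conf rather than shown here:
# nova-manage cell_v2 map_cell0
# nova-manage cell_v2 create_cell --name cell1 --verbose
# nova-manage cell_v2 discover_hosts
# nova-manage cell_v2 list_cells
The discover_hosts step maps newly added compute nodes into a cell, and list_cells confirms the cell layout that nova-api will work with.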
Q: What is the placement service and how can I leverage it?
Rajat: The placement service, which was introduced in the Newton release, is now a key part of OpenStack and also mandatory in determining the optimum placement of VMs. Basically, you set up pools of resources, provide an inventory of the compute nodes, and then set up allocations for resource providers. Then you can set up policies and models for optimum placements of VMs.
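As a rough illustration (not from the webinar itself), the placement service is consumed through a REST API. Assuming a token in $TOKEN and a placement endpoint of http://controller:8778 (both placeholders; your endpoint may sit behind a /placement prefix depending on how it is deployed), you could inspect resource providers and their inventories along these lines:
# curl -s -H "X-Auth-Token: $TOKEN" http://controller:8778/resource_providers
# curl -s -H "X-Auth-Token: $TOKEN" http://controller:8778/resource_providers/<provider-uuid>/inventories
# curl -s -H "X-Auth-Token: $TOKEN" http://controller:8778/resource_providers/<provider-uuid>/usages
The first call lists the providers (typically your compute nodes), the second shows what each one offers in VCPU, memory and disk, and the third shows what has already been allocated.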
Q: What is the OS profiler, and why is it useful?
Rajat: OpenStack consists of multiple projects. Each project, in turn, is composed of multiple services. To process a request (for example, to boot a virtual machine), OpenStack uses multiple services from different projects. If something in this process runs slowly, it’s extremely complicated to understand what exactly went wrong and to locate the bottleneck.
To resolve this issue, a tiny but powerful library, osprofiler, was introduced. The osprofiler library is intended to be used by all OpenStack projects and their Python clients. It provides the ability to generate one trace per request, flowing through all involved services. This trace can then be extracted and used to build a tree of calls, which can be quite handy for a variety of reasons (for example, isolating cross-project performance issues).
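As a hedged sketch of the workflow (assuming osprofiler is enabled in the relevant services and a shared HMAC key has been configured; SECRET_KEY and the connection string are placeholders, and older clients spell the option --profile instead of --os-profile):
# openstack server list --os-profile SECRET_KEY
# osprofiler trace show --html --connection-string mongodb://controller:27017 <trace-id>
The first command asks the client to start a trace and prints a trace ID; the second pulls the collected spans from whatever backend your deployment stores traces in and renders the call tree as HTML.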
Q: If I have keystone connected to a backend Active Directory, will I benefit from the auto-provisioning of federated identity?
Rajat: Yes. The federated identity mapping engine now supports the ability to automatically provision projects for federated users. A role assignment will automatically be created for the user on the specified project. Prior to this, a federated user had to attempt to authenticate before an administrator could assign roles directly to their shadowed identity, resulting in a strange user experience. This is therefore a big usability enhancement for deployers leveraging the federated identity plugins.
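For context, an auto-provisioning mapping rule might look roughly like the following. The layout follows the Ocata mapping schema, but the project name, role and remote attribute here are placeholder choices rather than values discussed in the webinar:
[
    {
        "local": [
            {
                "user": {"name": "{0}"},
                "projects": [
                    {"name": "Project for {0}", "roles": [{"name": "Member"}]}
                ]
            }
        ],
        "remote": [
            {"type": "REMOTE_USER"}
        ]
    }
]
Saved as rules.json, it could then be registered with something like:
# openstack mapping create --rules rules.json ad_auto_provision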
Q: Is FWaaS really used out there?
Stacy: Yes it is, but its viability in production is debatable and going with a 3rd party with a Neutron plugin is still, IMHO, the way to go.
Q: When is Octavia GA planned to be released?
Stacy: Octavia is forecast to be GA in the Pike release.
Q: Are DragonFlow and Tricircle ready for Production?
Stacy: Those are young Big Tent projects, but I’m pretty sure we will see a big evolution in Pike.
Q: What’s the codename for the placement service, please?
Stacy: It’s just called the Placement API. There’s no fancy name.
Q: Does Ocata continue support for Fernet tokens?
Rajat: Yes.
Q: With a federated provider, can I integrate my OpenStack environment with my on-prem AD and allow domain users to use OpenStack?
Rajat: This was always supported, and is not new to Ocata. More details at https://docs.openstack.org/admin-guide/identity-integrate-with-ldap.html
What’s new in this area is that the federated identity mapping engine now supports the ability to automatically provision projects for federated users. A role assignment will automatically be created for the user on the specified project. Prior to this, a federated user had to attempt to authenticate before an administrator could assign roles directly to their shadowed identity, resulting in a strange user experience.

Q: If I’m using my existing domain users from AD with OpenStack, how would I control their rights/roles to perform specific tasks in an OpenStack project?
Rajat: You would first set up authentication via LDAP, providing connection settings for AD and setting the identity driver to ldap in keystone.conf. Next, you have to assign roles and projects to the AD users. Since Mitaka, the only option you can use for assignment is the SQL driver in keystone.conf, but you will have to do the mapping. Most users prefer this approach anyway, as they want to keep AD read-only from the OpenStack connection. You can find more details on how to configure keystone with LDAP here.
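As a minimal keystone.conf sketch of the LDAP-identity / SQL-assignment split described above (the AD hostname, bind account, suffix and attribute choices are placeholders for your own directory):
[identity]
driver = ldap

[assignment]
driver = sql

[ldap]
url = ldap://ad.example.com
user = CN=svc-keystone,OU=Service Accounts,DC=example,DC=com
password = <bind-password>
suffix = DC=example,DC=com
user_objectclass = person
user_id_attribute = cn
user_name_attribute = sAMAccountName
user_allow_create = false
user_allow_update = false
user_allow_delete = false
The three user_allow_* options keep the AD connection read-only, which matches the preference mentioned in the answer.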
Q: What, if anything, was pushed out of the “big tent” and/or did not get robustly worked on?
Nick:  You can get a complete view of work done on every project at Stackalytics.
Q: So when is Tricircle being released for use in production?
Stacy: Not soon enough.  Being a new Big Tent project, it needs some time to develop traction.  
Q: Do we support creation of SR-IOV ports from Horizon during instance creation? If not, are there any plans there?
Nick: According to the Horizon team, you can pre-create the port and assign it to an instance.
Q: Way to go, warp speed Michael! Good job, Rajat and Stacy. Don’t worry about getting behind; I blame Nick anyway. Then again, I always blame Nick.
Nick: Thanks Ben, I appreciate you, too.

Cognitive computing and analytics come to mobile solutions for employees

The Drum caught up with Gareth Mackown, partner and European mobile leader at IBM Global Business Services, at the Mobile World Congress this week in Barcelona to ask him about how mobile solutions are becoming more vital for not only an enterprise’s customers, but also employees.
“Today, organizations are really being defined by the experiences they create,” Mackown said in an interview. “Often, you think of that in terms of customers, but more and more we’re seeing employee experience being a really defining factor.”
IBM partnered with Apple to transform employee experiences through mobility, he said, and it’s just getting started. Internet of Things (IoT) technology, cognitive computing and analytics will make those mobile solutions “even more critical” for people working in all kinds of different fields.
Mackown pointed to the new IBM partnership with Santander, announced at Mobile World Congress. “We’re helping them design and develop a suite of business apps to help them transform the employee experience they have for their business customers.”
The video below includes the interview with Mackown, along with mobile business leaders from several other large companies.

Find out more in The Drum’s full article.
The post Cognitive computing and analytics come to mobile solutions for employees appeared first on news.
Quelle: Thoughts on Cloud

6 IBM InterConnect Bootcamp labs developers shouldn’t miss

When learning about new technologies and tools, it often helps to get one’s hands just a little bit dirty and see what really makes them work.
That’s the idea behind the new Bootcamp labs at InterConnect 2017. These instructor-led labs will run three to four hours, providing enrollees the opportunity to do hands-on work with new products and technologies. Attendees can find a deeper dive in these sessions led by subject matter experts.
Here are the topics for all six Bootcamp labs:
1. Microservices-based application development mini-Bootcamp
This lab walks attendees through implementation of an application conforming to a microservice-based architecture. Attendees work with a microservice-based application in Bluemix, implement fabric components, and define backends for frontends (BFF) and API components. The application is built and deployed using a custom script to minimize errors while still allowing developers to oversee the implementation, performing a step-by-step evaluation of the architecture.
2. WebSphere 7 and 8 end-of-service and migration to WebSphere 9 and the cloud: Tools, tips and tricks lab
Prompted by the WebSphere 7 end-of-service announcement, this lab looks at what is involved in migrating applications to WebSphere 9, Liberty and the cloud. It covers best practices, steps and tools to assist in migrating the application server to WebSphere 9 and IBM Bluemix. Tools discussed include:

The WebSphere Migration Discovery tool
The WebSphere Binary scanner, which assesses complexity of applications and runtimes
The WebSphere Application Migration toolkit, which helps fix potential problems with code migration
WebSphere Configuration Migration toolkit, which is used to extract and move a configuration

3. DevOps and CSMO mini-Bootcamp
Learn how to get development and operations more tightly integrated in Bluemix by coupling Bluemix capabilities with cloud service management and operations. Explore some of the development, monitoring and operations tools in Bluemix. Better understand service-management issues for hybrid cloud applications versus private cloud applications. See why the concept of “build to manage” is especially relevant in a cloud environment. The intended audience is anyone who has an interest in Bluemix or Service Management and wants to see how the two worlds come together.
4. Event-driven and serverless computing with IBM Bluemix OpenWhisk
In this Bootcamp lab, attendees can explore the design and implementation of applications using event-driven and serverless technologies. They can learn to compose and wire together microservice actions in response to events generated by humans as well as machines.
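To give a flavor of what that composition looks like, here is a hedged sketch using the OpenWhisk CLI; the action, trigger and rule names are placeholders, and the alarm feed is just one example of an event source:
# wsk action create hello hello.js
# wsk trigger create everyFiveMinutes --feed /whisk.system/alarms/alarm -p cron "*/5 * * * *"
# wsk rule create helloOnSchedule everyFiveMinutes hello
# wsk activation list
The rule wires the trigger to the action, so each alarm event fires an activation of hello, which activation list then lets you inspect.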
5. The practices of the Bluemix Garage developer: Extreme programming (for non-programmers)
This workshop immerses attendees who do not develop software into “extreme programming,” the flagship practice of the IBM Garage Method. Through group workshop activities, attendees can experience the ebb and flow of Bluemix Garage development cycles. They will try out pair programming, test-driven development, merciless refactoring and evolutionary architecture. Take away an appreciation of the rigor of extreme programming and learn why it makes the IBM Garage Method work.
6. Platform to Maximo/TRIRIGA hands-on lab
This lab offers attendees a basic understanding of how connected operations work. Use a simulated temperature sensor (a gauge meter in Maximo) to send a temperature reading to the Internet of Things (IoT) Quickstart. The message is then sent to Node-RED, which parses the message. When a reading changes, it goes into a REST API call that inserts the meter reading into the referenced asset’s meter readings. The reading updates the measure point and triggers a work order using Maximo’s inherent functionality or, if one elects to do the exercise using TRIRIGA, a work task.
You can see the detailed agenda of Bootcamp labs and enroll by using the IBM Events mobile app or InterConnect Session Expert. Attendees must enroll to secure seats in these sessions, so early enrollment is strongly suggested.
Follow @IBMCloudEdu to get the latest on boot camps and the InterConnect Hands-on Lab Center and don’t forget to register today and enroll for your Bootcamp labs.
The post 6 IBM InterConnect Bootcamp labs developers shouldn’t miss appeared first on news.
Quelle: Thoughts on Cloud

How to avoid getting clobbered when your cloud host goes down

The post How to avoid getting clobbered when your cloud host goes down appeared first on Mirantis | Pure Play Open Cloud.
Yesterday, while working on an upcoming tutorial, I was suddenly reminded how interconnected the web really is. Everything was humming along nicely, until I tried to push changes to a very large repository. That’s when everything came to a screeching halt.
“No problem,” I thought. “Everybody has glitches once in a while.” So I decided I’d work on a different piece of content, and pulled up another browser window for the project management system we use to get the URL. The servers, I was told, were “receiving some TLC.”
OK, what about that mailing list task I was going to take care of?  Nope, that was down too.
As you probably know by now, all of these problems were due to a failure in one of Amazon Web Services’ S3 storage data centers. According to the BBC, the outage even affected sites as large as Netflix, Spotify, and Airbnb.
Now, you may think I’m writing this to gloat (after all, here at Mirantis we obviously talk a lot about OpenStack, and one of the things we often hear is “Oh, private cloud is too unreliable”), but I’m not.
The thing is, public cloud isn’t any more or less reliable than private cloud; it’s just that you’re not the one responsible for keeping it up and running.
And therein lies the problem.
If AWS S3 goes down, there is precisely zero you can do about it. Oh, it’s not that there’s nothing you can do to keep your application up; that’s a different matter, which we’ll get to in a moment. But there’s nothing that you can do to get S3 (or EC2, Google Compute Engine, or whatever public cloud service we’re talking about) back up and running. Chances are you won’t even know there’s an issue until it starts to affect you, and your customers.
A while back my colleague Amar Kapadia compared the costs of a DIY private cloud with a vendor distribution and with a managed cloud service. In that calculation, he included the cost of downtime as part of the cost of DIY and vendor distribution-based private clouds. But really, as yesterday proved, no cloud is beyond downtime, even one operated by the largest public cloud provider in the world. It’s all in what you do about it.
So what can you do about it?
Have you heard the expression, “The best defense is a good offense”? Well, it’s true for cloud operations too. In an ideal situation, you will know exactly what’s going on in your cloud at all times, and take action to solve problems BEFORE they happen. You’d want to know that the error rate for your storage is trending upwards before the data center fails, so you can troubleshoot and solve the problem. You’d want to know that a server is running slow so you can find out why and potentially replace it before it dies on you, possibly taking critical workloads with it.
And while we’re at it, a true cloud application should be able to weather the storm of a dying hypervisor or even a storage failure; it is designed to be fault-tolerant. Pure play open cloud is about building your cloud and applications so that they’re not even vulnerable to the failure of a data center.
But what does that mean?
What is Pure Play Open Cloud?
You’ll be hearing a lot more about Pure Play Open Cloud in the coming months, but for the purposes of our discussion, it means the following:
Cloud-based infrastructure that’s agnostic to the hardware and underlying data center (so it can run anywhere); based on open source software such as OpenStack, Kubernetes, Ceph, and networking software such as OpenContrail (so that there’s no vendor lock-in, and you can move it between a hosted environment and your own); and managed as infrastructure-as-code, using CI/CD pipelines and so on, to enable reliability and scale.
Well, that’s a mouthful! What does it mean in practice?
It means that the ideal situation is one in which you:

Are not dependent on a single vendor or cloud
Can react quickly to technical problems
Have visibility into the underlying cloud
Have support (and help) fixing issues before they become problems

Sounds great, but making it happen isn’t always easy. Let’s look at these things one at a time.
Not being dependent on a single vendor or cloud
Part of the impetus behind the development of OpenStack was the realization that while Amazon Web Services enabled a whole new way of working, it had one major flaw: complete dependence on AWS.
The problems here were both technological and financial. AWS makes a point of trying to bring prices down overall, but the bigger you grow, the more those incremental cost increases add up; there’s just no way around that. And once you’ve decided that you need to do something else, if your entire infrastructure is built around AWS products and APIs, you’re stuck.
A better situation would be to build your infrastructure and application in such a way that it’s agnostic to the hardware and underlying infrastructure. If your application doesn’t care whether it’s running on AWS or OpenStack, then you can create an OpenStack infrastructure that serves as the base for your application, and use external resources such as AWS or GCE for emergency scaling, or for damage control in case of emergency.
Reacting quickly to technical problems
In an ideal world, nobody would have been affected by the outage in AWS S3’s us-east-1 region, because their applications would have been architected with a presence in multiple regions. That’s what regions are for. Rarely, however, does this happen.
Build your applications so that they have, or at the very least CAN have, a presence in multiple locations. Ideally, they’re spread out by default, so if there’s a problem in one “place”, the application keeps running. This redundancy can get expensive, though, so the next best thing would be to have it detect a problem and switch over to a fail-safe or alternate region in case of emergency. At the bare minimum, you should be able to manually change over to a different option once a problem has been detected.
Preferably, this would happen before the situation becomes critical.
Having visibility into the underlying cloud
Having visibility into the underlying cloud is one area where private or managed cloud definitely has the advantage over public cloud. After all, one of the basic tenets of cloud is that you don’t necessarily care about the specific hardware running your application, which is fine, unless you’re responsible for keeping it running.
In that case, using tools such as StackLight (for OpenStack) or Prometheus (for Kubernetes) can give you insight into what’s going on under the covers. You can see whether a problem is brewing, and if it is, you can troubleshoot to determine whether the problem is the cloud itself, or the applications running on it.
Once you determine that you do have a problem with your cloud (as opposed to the applications running on it), you can take action immediately.
Support (and help) fixing issues before they become problems
Preventing and fixing problems is, for many people, where the rubber hits the road. With a serious shortage of cloud experts, many companies are nervous about trusting their cloud to their own internal people.
It doesn’t have to be that way.
While it would seem like the least expensive way of getting into cloud is the “do it yourself” approach (after all, the software’s free, right?), long term that’s not necessarily true.
The traditional answer is to use a vendor distribution and purchase support, and that’s definitely a viable option.
A second option that’s becoming more common is the notion of “managed cloud.” In this situation, your cloud may or may not be on your premises, but the important part is that it’s overseen by experts who know the signs to look for and are able to make sure that your cloud maintains a certain SLA, without taking away your control.
For example, Mirantis Managed OpenStack is a service that monitors your cloud 24/7 and can literally fix problems before they happen. It involves remote monitoring, a CI/CD infrastructure, KPI reporting, and even operational support, if necessary. But Mirantis Managed OpenStack is designed on the notion of Build-Operate-Transfer; everything is built on open standards, so you’re not locked in. When you’re ready, you can take over and transition to a lower level of support, or even take over entirely, if you want.
What matters is that you have help that keeps you running without keeping you trapped.
Taking control of your cloud destiny
The important thing here is that while it may seem easy to rely on a huge cloud vendor to do everything for you, it’s not necessarily in your best interest. Take control of your cloud, and take responsibility for making sure that you have options, and more importantly, that your applications have options too.
The post How to avoid getting clobbered when your cloud host goes down appeared first on Mirantis | Pure Play Open Cloud.
Quelle: Mirantis

How KPN speeds service delivery

Are you looking to transform your IT department into a self-service delivery center? Do your IT operations have the speed and control to deliver what’s needed without compromising quality?
Keep reading to find out how KPN, an IT and communications technology services provider, increased its speed to quickly deliver IT service requests, reduce costs and provide high quality cloud services.
KPN is a leader in IT services and connectivity. It offers fixed-line and mobile telephony, internet access and television services in the Netherlands. The provider also operates several mobile brands in Germany and Belgium. Its subsidiary, Getronics N.V., provides services across the globe.
Data and storage have played a critical role in helping KPN deliver high quality cloud services to its clients. As rapid growth of data continues to change the game, here’s how this savvy business has used IBM Cloud to transform operations.
Cloud Orchestrator accelerates service delivery
KPN executives wanted to optimize its cloud strategy to enhance service delivery time and quality. Potential solutions would help them manage and automate storage services in-house. The goal: improve cloud management to accelerate service delivery and reduce costs without sacrificing quality.
IBM Cloud Orchestrator (ICO) is an excellent solution for managing your complex hybrid cloud environments. It provides cloud management for IT services through a user-friendly, self-service portal. It automates and integrates the infrastructure, application, storage and network into a single tool. Additionally, the self-service catalog lets users automate the deployment of data center resources, cloud-enabled business processes and other cloud services.
Business transformation through automation
With ICO, KPN automated its storage services and designed an in-house cloud management system. The solution helped KPN provision and scale cloud resources and reduce both administrator workloads and error-prone manual IT administrator tasks. As a result, KPN could accelerate service delivery times by approximately 80 percent. This significantly improved the service quality and saved resources through automation.
Watch this video to learn more about how IBM Cloud Orchestrator helped KPN accelerate its cloud service delivery:

For a more in-depth discussion, join us at InterConnect 2017 and attend the session: “How KPN leveraged IBM Cloud technologies for automation and ‘insourcing’ of operations work.” And there’s more. InterConnect will bring together more than 20,000 top cloud professionals to network, train and learn about the future of the industry. If you still haven’t signed up, be sure to register now.
The post How KPN speeds service delivery appeared first on news.
Quelle: Thoughts on Cloud