A dash of Salt(Stack): Using Salt for better OpenStack, Kubernetes, and Cloud — Q&A

On January 16, Ales Komarek presented an introduction to Salt. We covered the following topics:

The model-driven architectures behind how Salt stores topologies and workflows

How Salt provides solution adaptability for any custom workloads

Infrastructure as Code: How Salt provides not only configuration management, but entire life-cycle management

How Continuous Delivery/ Integration/ Management fits into the puzzle

How Salt manages and scales parallel cloud deployments that include OpenStack, Kubernetes and others

What we didn't do, however, is get to all of the questions from the audience, so here's a written version of the Q&A, including those we didn't have time for.
Q: Why Salt?
A: It's Python, it has a huge and growing base of imperative modules and declarative states, and it has a good message bus.
Q: What tools are used to initially provision Salt across an infrastructure? Cobbler, Puppet, MAAS?
A: To create a new deployment, we rely on a single node, where we bootstrap the Salt master and Metal-as-a-Service (formerly based on Foreman, now Ironic). Then we control the MaaS service to deploy the physical bare-metal nodes.
Q: How broad a range of services do you already have recipes for, and how easy is it to write and drop in new ones if you need one that isn&8217;t already available?
A: The ecosystem is pretty vast. You can look at either https://github.com/tcpcloud or the formula ecosystem overview at http://openstack-salt.tcpcloud.eu/develop/extending-ecosystem.html. There are also guidelines for creating new formulas, which is a very straightforward process. A new service can be created in a matter of hours, or even minutes.
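To give a feel for what a formula's building blocks look like, here is a minimal, hypothetical state file; the state IDs and the memcached package are illustrative, not taken from any particular formula:
# memcached/init.sls -- a minimal, illustrative Salt state
memcached_package:
  pkg.installed:
    - name: memcached

memcached_service:
  service.running:
    - name: memcached
    - enable: True
    - require:
      - pkg: memcached_package
A formula in the ecosystem above is essentially a set of such states plus the shared metadata that ties it into the common model.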
Q: Can you convert your existing Puppet/Ansible scripts to Salt, and what would I search to find information about that?
A: Yes, we have reverse engineered automation for some of these services in the past. For example, we were deeply inspired by the Ansible module for Gerrit resource management. You can find some information on creating Salt formulas at https://docs.saltstack.com/en/latest/topics/development/conventions/formulas.html, and we will be adding tutorial material here on this blog in the near future.
Q: Is there a NodeJS binding available?
A: If you meant the NodeJS formula to set up a NodeJS environment, yes, there is such a formula. If you mean bindings to the system, you can use the Salt API to integrate NodeJS with Salt.
Q: Have you ever faced performance issues when storing a lot of data in pillars?
A: We have not faced performance issues with pillars delivered by the reclass ENC. It has been tested with up to a few thousand nodes.
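For context, reclass stores this pillar data as plain YAML classes and node definitions. A heavily simplified, hypothetical node file might look like the sketch below; the class and parameter names are made up for illustration and do not come from any specific model:
# nodes/web01.demo.local.yml -- hypothetical reclass node definition
classes:
  - system.linux.system.single
  - service.memcached.server.single
parameters:
  _param:
    cluster_domain: demo.local
  linux:
    system:
      name: web01
      domain: demo.local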
Q: What front-end GUI is typically used with Salt monitoring (e.g., Kibana, Grafana, etc.)?
A: Salt monitoring uses Sensu or StackLight for the actual functional monitoring checks. It uses Kibana to display events stored in Elasticsearch and Grafana to visualize metrics coming from time-series databases such as Graphite or Influx.
Q: What is the name of the salt PKI manager? (Or what would I search for to learn more about using salt for infrastructure-wide PKI management?)
A: The PKI feature is well documented in the Salt docs, and is available at https://docs.saltstack.com/en/latest/ref/states/all/salt.states.x509.html.
Q: Can I practice installing and deploying SaltStack on my laptop? Can you recommend a link?
A: I'd recommend you have a look at http://openstack-salt.tcpcloud.eu/develop/quickstart-vagrant.html, where you can find a nice tutorial on how to set up a simple infrastructure.
Q: Thanks for the presentation! Within Heat, I've only ever seen Salt used in terms of software deployments. What we've seen today, however, goes clear through to service, resource, and even infrastructure deployment! In this way, does Salt become a viable alternative to Heat? (I'm trying to understand where the demarcation is between the two now.)
A: Think of Heat as the part of the solution responsible for spinning up the hardware resources such as networks, routers and servers, in a way that is similar to MaaS, Ironic or Foreman. Salt's part begins where Heat's part ends: after the resources are started, Salt takes over and finishes the installation/configuration process.
Q: When you mention Orchestration, how does salt differentiate from Heat, or is Salt making Heat calls?
A: Heat is more for hardware resource orchestration. It has some capability to do software configuration, but it is rather limited. We have created Heat resources that help to classify resources on the fly. We also have Salt modules capable of running a Heat stack.
Q: Will you be showing any parts of SaltStack Enterprise, or only FREE Salt Open Source? Do you use Salt in Multi-Master deployment?
A: We are using the open source version of SaltStack; the enterprise version provides little additional gain given the pricing model. In some deployments, we use Salt master HA setups.
Q: What HA engine is typically used for the Salt master?
A: We use 2 separate masters with shared storage provided by GlusterFS, on which the master's and minions' keys are stored.
Q: Is there a GUI ?
A: The creation of a GUI is currently under discussion.
Q: How do you enforce Role Based Administration in the Salt Master? Can you segregate users to specific job roles and limit which jobs they can execute in Salt?
A: We use the ACLs of the Salt master to limit each user's options. This also applies to the Jenkins-powered pipelines, which we also manage with Salt, on both the job and the user side.
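As a rough sketch of that approach, the Salt master's publisher_acl setting restricts which modules a given user can run, and against which minions. The user names and minion targets below are examples only:
# /etc/salt/master -- example per-user restrictions (illustrative names)
publisher_acl:
  webadmin:
    - 'web*':
      - test.ping
      - state.apply
  dbadmin:
    - 'db*':
      - service.status
      - service.restart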
Q: Can you show the salt files (.sls, pillar, ...)?
A: You can look at the GitHub organization for existing formulas at https://github.com/tcpcloud, and a good example of pillars can be found at https://github.com/Mirantis/mk-lab-salt-model/.
Q: Is there a link for deploying Salt for Kubernetes? Any best practices guide?
A: The best place to look is the https://github.com/openstack/salt-formula-kubernetes README.
Q: Is SaltStack the same as what's on saltstack.com, or is it a different project?
A: These are the same project. Saltstack.com is the company that is behind the Salt technology and provides support and enterprise versions.
Q: So far this looks like what Chef can do. Can you make a comparison or focus on the "value add" from Salt that Chef or Puppet don't give you?
A: The replaceability/reusability of the individual components is very easy, as all formulas are 'aware' of the rest and share a common form and a single dependency tree. This is a problem with community-based formulas in either of the other tools, as they are not very compatible with each other.
Q: In terms of purpose, is there any difference between SaltStack vs Openstack?
A: Apart from the fact that SaltStack can install OpenStack, it can also provide virtualization capabilities. However, Salt's options there are very limited, while OpenStack supports complex production-level scenarios.
Q: Great webinar guys. Ansible seems to have a lot of traction as means of deploying OpenStack. Could you compare/contrast with SaltStack in this context?
A: With Salt, the OpenStack services are just part of a wider ecosystem; the main advantage comes from the consistency across all services/formulas and the supporting metadata that provides documentation and monitoring features.
Q: How is Salt better than Ansible/Puppet/Chef ?
A: The biggest difference is the message bus, which lets you control, and get data from, the infrastructure with great speed and concurrency.
Q: Can you elaborate on Mirantis Fuel vs. SaltStack?
A: Fuel is an open source project that was (and is) designed to deploy OpenStack from a single ISO-based artifact, and to provide various lifecycle management functions once the cluster has been deployed. SaltStack is designed to be more granular, working with individual components or services.
Q: Are there plans to integrate SaltStack in to MOS?
A: The Mirantis Cloud Platform (MCP) will be powered by Salt/Reclass.
Q: Is Fuel obsolete or it will use Salt in the background instead of Puppet?
A: Fuel in its current form will continue to be used for deploying Mirantis OpenStack in the traditional manner (as a single ISO file). We are extending our portfolio of life cycle management tools to include appropriate technologies for deploying and managing open source software in MCP. For example, Fuel CCP will be used to deploy containerized OpenStack on Kubernetes. Similarly, Decapod will be used to deploy Ceph. All of these lifecycle management technologies are, in a sense, Fuel. Whether a particular tool uses Salt or Puppet will depend on what it's doing.
Q: MOS 10 release date?
A: We're still making plans on this.
Thanks for joining us; if you missed it, please go ahead and view the webinar.
Source: Mirantis

Cloud Native App Developers Delight! Container Storage Just Got a Whole Lot Easier

The new Red Hat OpenShift Container Platform offers a rich user experience with dynamic provisioning of storage volumes, automation, and much more. (Republished from the original blog post by Michael Adam and Sayan Saha at redhatstorage.redhat.com) Earlier today, Red Hat announced general availability of Red Hat OpenShift Container Platform 3.4, which includes key features such as [...]
Source: OpenShift

Introduction to YAML: Creating a Kubernetes deployment

In previous articles, we've been talking about how to use Kubernetes to spin up resources. So far, we've been working exclusively on the command line, but there's an easier and more useful way to do it: creating configuration files using YAML. In this article, we'll look at how YAML works and use it to define first a Kubernetes Pod, and then a Kubernetes Deployment.
YAML Basics
It's difficult to escape YAML if you're doing anything related to many software fields, particularly Kubernetes, SDN, and OpenStack. YAML, which stands for Yet Another Markup Language, or YAML Ain't Markup Language (depending who you ask), is a human-readable, text-based format for specifying configuration-type information. For example, in this article, we'll pick apart the YAML definitions for creating first a Pod, and then a Deployment.
Using YAML for K8s definitions gives you a number of advantages, including:

Convenience: You'll no longer have to add all of your parameters to the command line
Maintenance: YAML files can be added to source control, so you can track changes
Flexibility: You'll be able to create much more complex structures using YAML than you can on the command line

YAML is a superset of JSON, which means that any valid JSON file is also a valid YAML file. So on the one hand, if you know JSON and you're only ever going to write your own YAML (as opposed to reading other people's), you're all set. On the other hand, that's not very likely, unfortunately. Even if you're only trying to find examples on the web, they're most likely in (non-JSON) YAML, so we might as well get used to it. Still, there may be situations where the JSON format is more convenient, so it's good to know that it's available to you.
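For instance, because JSON's braces and brackets are just YAML's "flow" syntax, you can freely mix the two styles in a single document. The keys below are only for illustration:
# Block style and JSON-like flow style in one document
metadata:
  name: rss-site
  labels: {app: web, tier: "front-end"}
args: ["sleep", "1000"]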
Fortunately, there are only two types of structures you need to know about in YAML:

Lists
Maps

That's it. You might have maps of lists and lists of maps, and so on, but if you've got those two structures down, you're all set. That's not to say there aren't more complex things you can do, but in general, this is all you need to get started.
YAML Maps
Let's start by looking at YAML maps. Maps let you associate name-value pairs, which of course is convenient when you're trying to set up configuration information. For example, you might have a config file that starts like this:

---
apiVersion: v1
kind: Pod
The first line is a separator, and is optional unless you're trying to define multiple structures in a single file. From there, as you can see, we have two values, v1 and Pod, mapped to two keys, apiVersion and kind.
This kind of thing is pretty simple, of course, and you can think of it in terms of its JSON equivalent:
{
  "apiVersion": "v1",
  "kind": "Pod"
}
Notice that in our YAML version, the quotation marks are optional; the processor can tell that you're looking at a string based on the formatting.
You can also specify more complicated structures by creating a key that maps to another map, rather than a string, as in:

apiVersion: v1
kind: Pod
metadata:
  name: rss-site
  labels:
    app: web
In this case, we have a key, metadata, that has as its value a map with two more keys, name and labels. The labels key itself has a map as its value. You can nest these as far as you want to.
The YAML processor knows how all of these pieces relate to each other because we've indented the lines. In this example I've used 2 spaces for readability, but the number of spaces doesn't matter, as long as it's at least 1, and as long as you're CONSISTENT. For example, name and labels are at the same indentation level, so the processor knows they're both part of the same map; it knows that app is a value for labels because it's indented further.
Quick note: NEVER use tabs in a YAML file.
So if we were to translate this to JSON, it would look like this:
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "rss-site",
    "labels": {
      "app": "web"
    }
  }
}
Now let's look at lists.
YAML lists
YAML lists are literally a sequence of objects.  For example:
args:
  - sleep
  - "1000"
  - message
  - "Bring back Firefly!"
As you can see here, you can have virtually any number of items in a list, which is defined as items that start with a dash (-) indented from the parent. So in JSON, this would be:
{
  "args": ["sleep", "1000", "message", "Bring back Firefly!"]
}
And of course, members of the list can also be maps:

apiVersion: v1
kind: Pod
metadata:
  name: rss-site
  labels:
    app: web
spec:
  containers:
    - name: front-end
      image: nginx
      ports:
        - containerPort: 80
    - name: rss-reader
      image: nickchase/rss-php-nginx:v1
      ports:
        - containerPort: 88
So as you can see here, we have a list of container "objects", each of which consists of a name, an image, and a list of ports. Each list item under ports is itself a map that lists the containerPort and its value.
For completeness, let's quickly look at the JSON equivalent:
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "rss-site",
    "labels": {
      "app": "web"
    }
  },
  "spec": {
    "containers": [{
      "name": "front-end",
      "image": "nginx",
      "ports": [{
        "containerPort": "80"
      }]
    },
    {
      "name": "rss-reader",
      "image": "nickchase/rss-php-nginx:v1",
      "ports": [{
        "containerPort": "88"
      }]
    }]
  }
}
As you can see, we're starting to get pretty complex, and we haven't even gotten into anything particularly complicated! No wonder YAML is replacing JSON so fast.
So let&8217;s review.  We have:

maps, which are groups of name-value pairs
lists, which are sequences of individual items
maps of maps
maps of lists
lists of lists
lists of maps

Basically, whatever structure you want to put together, you can do it with those two structures.
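Here's a small, purely illustrative fragment (the keys are hypothetical) that combines several of these shapes in one place:
deployments:                    # a map key whose value is a list
  - name: web                   # each list item is itself a map
    zones: [us-east, us-west]   # a map key whose value is a flow-style list
    limits:                     # a map of maps
      cpu: "500m"
      memory: 128Mi
  - name: worker
    zones: [eu-central]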
Creating a Pod using YAML
OK, so now that we've got the basics out of the way, let's look at putting this to use. We're going to first create a Pod, then a Deployment, using YAML.
If you haven't set up your cluster and kubectl, go ahead and check out this article series on setting up Kubernetes before you go on. It's OK, we'll wait...

Back already? Great! Let's start with a Pod.
Creating the pod file
In our previous example, we described a simple Pod using YAML:
---
apiVersion: v1
kind: Pod
metadata:
  name: rss-site
  labels:
    app: web
spec:
  containers:
    - name: front-end
      image: nginx
      ports:
        - containerPort: 80
    - name: rss-reader
      image: nickchase/rss-php-nginx:v1
      ports:
        - containerPort: 88
Taking it apart one piece at a time, we start with the API version; here it's just v1. (When we get to Deployments, we'll have to specify a different version because Deployments don't exist in v1.)
Next, we're specifying that we want to create a Pod; we might specify instead a Deployment, Job, Service, and so on, depending on what we're trying to achieve.
Next we specify the metadata. Here we're specifying the name of the Pod, as well as the label we'll use to identify the pod to Kubernetes.
Finally, we'll specify the actual objects that make up the pod. The spec property includes any containers, storage volumes, or other pieces that Kubernetes needs to know about, as well as properties such as whether to restart the container if it fails. You can find a complete list of Kubernetes Pod properties in the Kubernetes API specification, but let's take a closer look at a typical container definition:
...
spec:
  containers:
    - name: front-end
      image: nginx
      ports:
        - containerPort: 80
    - name: rss-reader
...
In this case, we have a simple, fairly minimal definition: a name (front-end), the image on which it's based (nginx), and one port on which the container will listen internally (80). Of these, only the name is really required, but in general, if you want it to do anything useful, you'll need more information.
You can also specify more complex properties, such as a command to run when the container starts, arguments it should use, a working directory, or whether to pull a new copy of the image every time it's instantiated. You can also specify even deeper information, such as the location of the container's exit log. Here are the properties you can set for a Container (a brief illustrative snippet follows the list):

name
image
command
args
workingDir
ports
env
resources
volumeMounts
livenessProbe
readinessProbe
lifecycle
terminationMessagePath
imagePullPolicy
securityContext
stdin
stdinOnce
tty

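To make that concrete, here is a hedged sketch of what a container entry using a few of these extra properties might look like. The field names are standard Kubernetes container properties, but the values (and the REFRESH_INTERVAL variable) are purely illustrative:
spec:
  containers:
    - name: front-end
      image: nginx
      imagePullPolicy: Always      # pull a fresh copy of the image on every start
      command: ["nginx"]           # override the image's default entrypoint
      args: ["-g", "daemon off;"]  # arguments passed to that command
      workingDir: /app
      env:
        - name: REFRESH_INTERVAL   # hypothetical environment variable
          value: "60"
      resources:
        limits:
          cpu: "500m"
          memory: 128Mi
      ports:
        - containerPort: 80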
Now let's go ahead and actually create the pod.
Creating the pod using the YAML file
The first step, of course, is to go ahead and create a text file. Call it pod.yaml and add the following text, just as we specified it earlier:
---
apiVersion: v1
kind: Pod
metadata:
  name: rss-site
  labels:
    app: web
spec:
  containers:
    - name: front-end
      image: nginx
      ports:
        - containerPort: 80
    - name: rss-reader
      image: nickchase/rss-php-nginx:v1
      ports:
        - containerPort: 88
Save the file, and tell Kubernetes to create its contents:
> kubectl create -f pod.yaml
pod "rss-site" created
As you can see, K8s references the name we gave the Pod. You can see that if you ask for a list of the pods:
> kubectl get pods
NAME       READY     STATUS              RESTARTS   AGE
rss-site   0/2       ContainerCreating   0          6s
If you check early enough, you can see that the pod is still being created. After a few seconds, you should see the containers running:
> kubectl get pods
NAME       READY     STATUS    RESTARTS   AGE
rss-site   2/2       Running   0          14s
From here, you can test out the Pod (just as we did in the previous article), but ultimately we want to create a Deployment, so let's go ahead and delete it so there aren't any name conflicts:
> kubectl delete pod rss-site
pod "rss-site" deleted
Troubleshooting pod creation
Sometimes, of course, things don't go as you expect. Maybe you've got a networking issue, or you've mistyped something in your YAML file. You might see an error like this:
> kubectl get pods
NAME       READY     STATUS         RESTARTS   AGE
rss-site   1/2       ErrImagePull   0          9s
In this case, we can see that one of our containers started up just fine, but there was a problem with the other. To track down the problem, we can ask Kubernetes for more information on the Pod:
> kubectl describe pod rss-site
Name:           rss-site
Namespace:      default
Node:           10.0.10.7/10.0.10.7
Start Time:     Sun, 08 Jan 2017 08:36:47 +0000
Labels:         app=web
Status:         Pending
IP:             10.200.18.2
Controllers:    <none>
Containers:
 front-end:
   Container ID:               docker://a42edaa6dfbfdf161f3df5bc6af05e740b97fd9ac3d35317a6dcda77b0310759
   Image:                      nginx
   Image ID:                   docker://sha256:01f818af747d88b4ebca7cdabd0c581e406e0e790be72678d257735fad84a15f
   Port:                       80/TCP
   State:                      Running
     Started:                  Sun, 08 Jan 2017 08:36:49 +0000
   Ready:                      True
   Restart Count:              0
   Environment Variables:      <none>
 rss-reader:
   Container ID:
   Image:                      nickchase/rss-php-nginx
   Image ID:
   Port:                       88/TCP
   State:                      Waiting
    Reason:                   ErrImagePull
   Ready:                      False
   Restart Count:              0
   Environment Variables:      <none>
Conditions:
 Type          Status
 Initialized   True
 Ready         False
 PodScheduled  True
No volumes.
QoS Tier:       BestEffort
Events:
 FirstSeen     LastSeen        Count   From                    SubobjectPath  Type             Reason                  Message
 ---------     --------        -----   ----                    -------------   --------         ------                  -------
 45s           45s             1       {default-scheduler }                   Normal           Scheduled               Successfully assigned rss-site to 10.0.10.7
 44s           44s             1       {kubelet 10.0.10.7}     spec.containers{front-end}      Normal          Pulling                 pulling image “nginx”
 45s           43s             2       {kubelet 10.0.10.7}                    Warning          MissingClusterDNS       kubelet does not have ClusterDNS IP configured and cannot create Pod using “ClusterFirst” policy. Falling back to DNSDefault policy.
 43s           43s             1       {kubelet 10.0.10.7}     spec.containers{front-end}      Normal          Pulled                  Successfully pulled image “nginx”
 43s           43s             1       {kubelet 10.0.10.7}     spec.containers{front-end}      Normal          Created                 Created container with docker id a42edaa6dfbf
 43s           43s             1       {kubelet 10.0.10.7}     spec.containers{front-end}      Normal          Started                 Started container with docker id a42edaa6dfbf
 43s           29s             2       {kubelet 10.0.10.7}     spec.containers{rss-reader}     Normal          Pulling                 pulling image “nickchase/rss-php-nginx”
 42s           26s             2       {kubelet 10.0.10.7}     spec.containers{rss-reader}     Warning         Failed                  Failed to pull image “nickchase/rss-php-nginx”: Tag latest not found in repository docker.io/nickchase/rss-php-nginx
 42s           26s             2       {kubelet 10.0.10.7}                    Warning          FailedSync              Error syncing pod, skipping: failed to “StartContainer” for “rss-reader” with ErrImagePull: “Tag latest not found in repository docker.io/nickchase/rss-php-nginx”

 41s   12s     2       {kubelet 10.0.10.7}     spec.containers{rss-reader}    Normal   BackOff         Back-off pulling image “nickchase/rss-php-nginx”
 41s   12s     2       {kubelet 10.0.10.7}                                    Warning  FailedSync      Error syncing pod, skipping: failed to “StartContainer” for “rss-reader” with ImagePullBackOff: “Back-off pulling image “nickchase/rss-php-nginx””
As you can see, there's a lot of information here, but we're most interested in the Events, specifically once the warnings and errors start showing up. From here I was able to quickly see that I'd forgotten to add the :v1 tag to my image, so it was looking for the :latest tag, which didn't exist.
To fix the problem, I first deleted the Pod, then fixed the YAML file and started again. Instead, I could have fixed the repo so that Kubernetes could find what it was looking for, and it would have continued on as though nothing had happened.
Now that we've successfully gotten a Pod running, let's look at doing the same for a Deployment.
Creating a Deployment using YAML
Finally, we're down to creating the actual Deployment. Before we do that, though, it's worth understanding what it is we're actually doing.
K8s, remember, manages container-based resources. In the case of a Deployment, you're creating a set of resources to be managed. For example, where we created a single instance of the Pod in the previous example, we might create a Deployment to tell Kubernetes to manage a set of replicas of that Pod (literally, a ReplicaSet) to make sure that a certain number of them are always available. So we might start our Deployment definition like this:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rss-site
spec:
  replicas: 2
Here we're specifying the apiVersion as extensions/v1beta1 (remember, Deployments aren't in v1, as Pods were) and that we want a Deployment. Next we specify the name. We can also specify any other metadata we want, but let's keep things simple for now.
Finally, we get into the spec. In the Pod spec, we gave information about what actually went into the Pod; we'll do the same thing here with the Deployment. We'll start, in this case, by saying that whatever Pods we deploy, we always want to have 2 replicas. You can set this number however you like, of course, and you can also set properties such as the selector that defines the Pods affected by this Deployment, or the minimum number of seconds a pod must be up without any errors before it's considered "ready". You can find a full list of the Deployment specification properties in the Kubernetes v1beta1 API reference.
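As a sketch (we won't need these for this example), those two properties would slot into the spec alongside replicas roughly like this:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rss-site
spec:
  replicas: 2
  minReadySeconds: 10    # a pod must run 10 seconds without errors to count as available
  selector:
    matchLabels:
      app: web           # manage only Pods carrying this label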
OK, so now that we know we want 2 replicas, we need to answer the question: "Replicas of what?" They're defined by templates:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rss-site
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: front-end
          image: nginx
          ports:
            - containerPort: 80
        - name: rss-reader
          image: nickchase/rss-php-nginx:v1
          ports:
            - containerPort: 88
Look familiar? It should; it's virtually identical to the Pod definition in the previous section, and that's by design. Templates are simply definitions of objects to be replicated: objects that might, in other circumstances, be created on their own.
Now let's go ahead and create the deployment. Add the YAML to a file called deployment.yaml and point Kubernetes at it:
> kubectl create -f deployment.yaml
deployment "rss-site" created
To see how it's doing, we can check on the deployments list:
> kubectl get deployments
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
rss-site   2         2         2            1           7s
As you can see, Kubernetes has started both replicas, but only one is available. You can check the event log by describing the Deployment, as before:
> kubectl describe deployment rss-site
Name:                   rss-site
Namespace:              default
CreationTimestamp:      Mon, 09 Jan 2017 17:42:14 +0000
Labels:                 app=web
Selector:               app=web
Replicas:               2 updated | 2 total | 1 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:         <none>
NewReplicaSet:          rss-site-4056856218 (2/2 replicas created)
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type            Reason                  Message
 ---------     --------        -----   ----                            -------------   --------        ------                  -------
 46s           46s             1       {deployment-controller }               Normal           ScalingReplicaSet       Scaled up replica set rss-site-4056856218 to 2
As you can see here, there's no problem; it just hasn't finished scaling up yet. Another few seconds, and we can see that both Pods are running:
> kubectl get deployments
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
rss-site   2         2         2            2           1m
What we've seen so far
OK, so let's review. We've basically covered three topics:

YAML is a human-readable, text-based format that lets you easily specify configuration-type information by using a combination of maps of name-value pairs and lists of items (and nested versions of each).
YAML is the most convenient way to work with Kubernetes objects, and in this article we looked at creating Pods and Deployments.
You can get more information on running (or should-be-running) objects by asking Kubernetes to describe them.

So that's our basic YAML tutorial. We're going to be tackling a great deal of Kubernetes-related content in the coming months, so if there's something specific you want to hear about, let us know in the comments, or tweet us at @MirantisIT.
Source: Mirantis

The 50 Worst Things On The Internet In 2016

In a year like 2016, this list might almost restore your faith in the internet... almost... WARNING: This post is obviously NSFW.

The severed toe one Tumblr user mailed her Tumblr friend to make a necklace.

cummy-eyelids.tumblr.com

This whole thing.

Tumblr

This British hero who expressed his opinion by sticking an EU referendum voting card into his foreskin.

stuffinmydick.tumblr.com



Source: BuzzFeed

Software Engineer (.Net)

Agilent Technologies is a leader in the life sciences, diagnostics and applied chemical markets. The company provides laboratories worldwide with instruments, services, consumables, applications and expertise, enabling customers to gain the insights they seek. Agilent focuses its expertise on six key markets: Food, Environmental and Forensics, Pharmaceutical, Diagnostics, Chemical and Energy, and Research.
The purpose of Agilent Research Laboratories is to power Agilent's growth through breakthrough science and technology. To complement their product line R&D, Agilent Labs looks beyond the evolution of current products and platforms to create the technologies that will underlie tomorrow's breakthroughs, enabling Agilent customers to answer questions at the leading edge of life science, diagnostics and the applied markets. For more details about Agilent Technologies, please see: http://www.agilent.com/about/companyinfo/index.html
Today Mirantis and Agilent Technologies are looking for an experienced Software Engineer/Senior Software Engineer to join our distributed team (we have engineers in California, Russia, and Ukraine). Our development center works on different projects for Agilent's Life Science department, such as:

OpenLAB Shared Services, an integration platform for different types of software for chemical analysis and chemical data processing
Content Management systems for storing scientific data
CDS Installer for deployment, upgrade and configuration of OpenLAB Chromatography Data Systems in distributed laboratories

For more details about Agilent products, please see: http://www.agilent.com/en-us/products/software-informatics/openlab-software-suite
Responsibilities:

Design and develop key components of different Agilent products (using C#, .NET, WPF, WCF, ASP.NET, and various databases)
Work closely with Agilent employees from the USA and Europe in a collaborative development environment
Introduce and maintain best development lifecycle practices, such as code review, continuous integration, automated tests, etc.
Troubleshoot problems as needed in the QA and production environments

Requirements:

2+ years of experience in .NET development and testing
Clear understanding of the .NET framework platform
Excellence in software engineering practices and coding
Strong background in object-oriented design, data structures, and algorithms
Understanding of database technologies
Experience using build systems (Maven/MSBuild/NAnt/...)
Proficient in written and spoken English
ASP.NET or WPF experience would be a plus

Desired:

Experience using source control systems (Git)
Experience with issue-tracking systems: Jira, TeamTrack

We offer:

The chance to contribute to Silicon Valley software development
A modern office, comfortable work environment, and the best tools
Competitive salary (determined after interview)
Career and professional growth
20 working days of paid vacation, 100% paid sick leave
Medical insurance
Benefits program
Flexible schedule
Friendly atmosphere
Source: Mirantis

How do I create a new Docker image for my application?

In our previous series, we looked at how to deploy Kubernetes and create a cluster. We also looked at how to deploy an application on the cluster and configure OpenStack instances so you can access it. Now we're going to get deeper into Kubernetes development by looking at creating new Docker images so you can deploy your own applications and make them available to other people.
How Docker images work
The first thing that we need to understand is how Docker images themselves work.
The key to a Docker image is that it's a layered file system. In other words, if you start out with an image that's just the operating system (say Ubuntu) and then add an application (say Nginx), you'll wind up with something like this:

As you can see, the difference between IMAGE1 and IMAGE2 is just the application itself, and then IMAGE4 has the changes made on layers 3 and 4. So in order to create an image, you are basically starting with a base image and defining the changes to it.
Now, I hear you asking, "But what if I want to start from scratch?" Well, let's define "from scratch" for a minute. Chances are you mean you want to start with a clean operating system and go from there. Well, in most cases there's a base image for that, so you're still starting with a base image. (If not, you can check out the instructions for creating a Docker base image.)
In general, there are two ways to create a new Docker image:

Create an image from an existing container: In this case, you start with an existing image, customize it with the changes you want, then build a new image from it.
Use a Dockerfile: In this case, you use a file of instructions (the Dockerfile) to specify the base image and the changes you want to make to it.

In this article, we're going to look at both of those methods. Let's start with creating a new image from an existing container.
Create from an existing container
In this example, we're going to start with an image that includes the nginx web application server and PHP. To that, we're going to add support for reading RSS files using an open source package called SimplePie. We'll then make a new image out of the altered container.
Create the original container
The first thing we need to do is instantiate the original base image.

The very first step is to make sure that your system has Docker installed. If you followed our earlier series on running Kubernetes on OpenStack, you've already got this handled. If not, you can follow the instructions here to deploy Docker.
Next you'll need to get the base image. In the case of this tutorial, that's webdevops/php-nginx, which is part of the Docker Hub, so in order to "pull" it you'll need to have a Docker Hub ID. If you don't have one already, go to https://hub.docker.com and create a free account.
Go to the command line where you have Docker installed and log in to the Docker hub:
# docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don’t have a Docker ID, head over to https://hub.docker.com to create one.
Username: nickchase
Password:
Login Succeeded

We're going to start with the base image. Instantiate webdevops/php-nginx:
# docker run -dP webdevops/php-nginx
The -dP flag makes sure that the container runs in the background, and that the ports on which it listens are made available.
Make sure the container is running:
# docker ps
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                                                                    NAMES
1311034ca7dc        webdevops/php-nginx   "/opt/docker/bin/entr"   35 seconds ago      Up 34 seconds       0.0.0.0:32822->80/tcp, 0.0.0.0:32821->443/tcp, 0.0.0.0:32820->9000/tcp   small_bassi

A couple of notes here. First off, because we didn't specify a particular name for the container, Docker assigned one. In this example, it's small_bassi. Second, notice that there are 3 ports that are open: 80, 443, and 9000, and that they've been mapped to other ports (in this case 32822, 32821 and 32820, respectively; on your machine these ports will be different). This makes it possible for multiple containers to be "listening" on the same port on the same machine. So if we were to try and access a web page being hosted by this container, we'd do it by accessing:

http://localhost:32822

So far, though, there aren't any pages to access; let's fix that.
Create a file on the container
In order for us to test this container, we need to create a sample PHP file. We'll do that by logging into the container and creating a file.

Login to the container
# docker exec -it small_bassi /bin/bash
root@1311034ca7dc:/#
Using exec with the -it switch creates an interactive session for you to execute commands directly within the container. In this case, we're executing /bin/bash, so we can do whatever else we need.
The document root for the nginx server in this container is at /app, so go ahead and create the /app/index.php file:
vi /app/index.php

Add a simple PHP routine to the file and save it:
<?php
// Print ten numbered lines
for ($i = 0; $i < 10; $i++){
    echo "Item number ".$i."\n";
}
?>

Now exit the container to go back to the main command line:
root@1311034ca7dc:/# exit

Now let's test the page. To do that, execute a simple curl command:
# curl http://localhost:32822/index.php
Item number 0
Item number 1
Item number 2
Item number 3
Item number 4
Item number 5
Item number 6
Item number 7
Item number 8
Item number 9

Now that we know PHP is working, it's time to go ahead and add RSS.
Make changes to the container
Now that we know PHP is working, we can go ahead and add RSS support using the SimplePie package. To do that, we'll simply download it to the container and install it.

The first step is to log back into the container:
# docker exec -it small_bassi /bin/bash
root@1311034ca7dc:/#

Next go ahead and use curl to download the package, saving it as a zip file:
root@1311034ca7dc:/# curl https://codeload.github.com/simplepie/simplepie/zip/1.4.3 > simplepie1.4.3.zip

Now you need to install it. To do that, unzip the package, create the appropriate directories, and copy the necessary files into them:
root@1311034ca7dc:/# unzip simplepie1.4.3.zip
root@1311034ca7dc:/# mkdir /app/php
root@1311034ca7dc:/# mkdir /app/cache
root@1311034ca7dc:/# mkdir /app/php/library
root@1311034ca7dc:/# cp -r s*/library/* /app/php/library/.
root@1311034ca7dc:/# cp s*/autoloader.php /app/php/.
root@1311034ca7dc:/# chmod 777 /app/cache

Now we just need a test page to make sure that it's working. Create a new file in the /app directory:
root@1311034ca7dc:/# vi /app/rss.php

Now add the sample file. (This file is excerpted from the SimplePie website, but I've cut it down for brevity's sake, since it's not really the focus of what we're doing. Please see the original version for comments, etc.)
<?php
require_once('php/autoloader.php');
$feed = new SimplePie();
$feed->set_feed_url("http://rss.cnn.com/rss/edition.rss");
$feed->init();
$feed->handle_content_type();
?>
<html>
<head><title>Sample SimplePie Page</title></head>
<body>
<div class="header">
<h1><a href="<?php echo $feed->get_permalink(); ?>"><?php echo $feed->get_title(); ?></a></h1>
<p><?php echo $feed->get_description(); ?></p>
</div>
<?php foreach ($feed->get_items() as $item): ?>
<div class="item">
<h2><a href="<?php echo $item->get_permalink(); ?>"><?php echo $item->get_title(); ?></a></h2>
<p><?php echo $item->get_description(); ?></p>
<p><small>Posted on <?php echo $item->get_date('j F Y | g:i a'); ?></small></p>
</div>
<?php endforeach; ?>
</body>
</html>

Exit the container:
root@1311034ca7dc:/# exit

Now let's make sure it's working. Remember, we need to access the container on the alternate port (check docker ps to see what ports you need to use):
# curl http://localhost:32822/rss.php
<html>
<head><title>Sample SimplePie Page</title></head>
<body>
       <div class="header">
               <h1><a href="http://www.cnn.com/intl_index.html">CNN.com – RSS Channel – Intl Homepage – News</a></h1>
               <p>CNN.com delivers up-to-the-minute news and information on the latest top stories, weather, entertainment, politics and more.</p>
       </div>

Now that we have a working container, we can turn it into a new image.
Create the new image
Now that we have a working container, we want to turn it into an image and push it to the Docker Hub so we can use it. The name you'll use for your container typically will have three parts:
[username]/[imagename]:[tags]
For example, my Docker Hub username is nickchase, so I am going to name version 1 of my new RSS-ified container:
nickchase/rss-php-nginx:v1

Now, if when we first started talking about differences between layers you started to think about version control systems, you're right. The first step in creating a new image is to commit the changes that we've already made, adding a message about the changes and specifying the author, as in:
docker commit -m "Message" -a "Author Name" [containername] [imagename]
So in my case, that will be:
# docker commit -m "Added RSS" -a "Nick Chase" small_bassi nickchase/rss-php-nginx:v1
sha256:148f1dbceb292b38b40ae6cb7f12f096acf95d85bb3ead40e07d6b1621ad529e

Next we want to go ahead and push the new image to the Docker Hub so we can use it:
# docker push nickchase/rss-php-nginx:v1
The push refers to a repository [docker.io/nickchase/rss-php-nginx]
69671563c949: Pushed
3e78222b8621: Pushed
5b33e5939134: Pushed
54798bfbf935: Pushed
b8c21f8faea9: Pushed

v1: digest: sha256:48da56a77fe4ecff4917121365d8e0ce615ebbdfe31f48a996255f5592894e2b size: 3667

Now if you list the images that are available, you should see it in the list:
# docker images
REPOSITORY                TAG                 IMAGE ID            CREATED             SIZE
nickchase/rss-php-nginx   v1                  148f1dbceb29        11 minutes ago      677 MB
nginx                     latest              abf312888d13        3 days ago          181.5 MB
webdevops/php-nginx       latest              93037e4c8998        3 days ago          675.4 MB
ubuntu                    latest              e4415b714b62        2 weeks ago         128.1 MB
hello-world               latest              c54a2cc56cbb        5 months ago        1.848 kB

Now let's go ahead and test it. We'll start by stopping and removing the original container, so we can remove the local copy of the image:
# docker stop small_bassi
# docker rm small_bassi

Now we can remove the image itself:
# docker rmi nickchase/rss-php-nginx:v1
Untagged: nickchase/rss-php-nginx:v1
Untagged: nickchase/rss-php-nginx@sha256:0a33c7a25a6d2db4b82517b039e9e21a77e5e2262206fdcac8b96f5afa64d96c
Deleted: sha256:208c4fc237bb6b2d3ef8fa16a78e105d80d00d75fe0792e1dcc77aa0835455e3
Deleted: sha256:d7de4d9c00136e2852c65e228944a3dea3712a4e7bcb477eb7393cd309be179b

If you run docker images again, you'll see that it's gone:
# docker images
REPOSITORY                TAG                 IMAGE ID            CREATED             SIZE
nginx                     latest              abf312888d13        3 days ago          181.5 MB
webdevops/php-nginx       latest              93037e4c8998        3 days ago          675.4 MB
ubuntu                    latest              e4415b714b62        2 weeks ago         128.1 MB
hello-world               latest              c54a2cc56cbb        5 months ago        1.848 kB

Now if you create a new container based on this image, you will see it get downloaded from the Docker Hub:
# docker run -dP nickchase/rss-php-nginx:v1

Finally, test the new container by getting the new port...
# docker ps
CONTAINER ID        IMAGE                        COMMAND                  CREATED             STATUS              PORTS                                                                    NAMES
13a423324d80        nickchase/rss-php-nginx:v1   "/opt/docker/bin/entr"   6 seconds ago       Up 5 seconds        0.0.0.0:32825->80/tcp, 0.0.0.0:32824->443/tcp, 0.0.0.0:32823->9000/tcp   goofy_brahmagupta

... and accessing the rss.php file.
curl http://localhost:32825/rss.php

You should see the same output as before.
Use a Dockerfile
Manually creating a new image from an existing container gives you a lot of control, but it does have one downside. If the base container gets updated, you're not necessarily going to have the benefits of those changes.
For example, suppose I want a container that always takes the latest version of the Ubuntu operating system and builds on that. The previous method doesn't give us that advantage.
Instead, we can use a method called the Dockerfile, which enables us to specify a particular version of a base image, or specify that we want to always use the latest version.
For example, let's say we want to create a version of the rss-php-nginx container that starts with v1 but serves on port 88 (rather than the traditional 80). To do that, we basically want to perform three steps:

Start with the desired version of the base container.
Tell Nginx to listen on port 88 rather than 80.
Let Docker know that the container listens on port 88.

We'll do that by creating a local context, downloading a local copy of the configuration file, updating it, and creating a Dockerfile that includes instructions for building the new container.
Let's get that set up.

Create a working directory in which to build your new container. What you call it is completely up to you. I called mine k8stutorial.
From the command line, in the local context, start by instantiating the image so we have something to work from:
# docker run -dP nickchase/rss-php-nginx:v1

Now get a copy of the existing vhost.conf file. In this particular container, you can find it at /opt/docker/etc/nginx/vhost.conf.
# docker cp amazing_minsky:/opt/docker/etc/nginx/vhost.conf .
Note that I've got a new container named amazing_minsky to replace small_bassi. At this point you should have a copy of vhost.conf in your local directory, so in my case, it would be ~/k8stutorial/vhost.conf.
You now have a local copy of the vhost.conf file. Using a text editor, open the file and specify that nginx should be listening on port 88 rather than port 80:
server {
   listen   88 default_server;
   listen 8000 default_server;
   server_name  _ *.vm docker;

Next we want to go ahead and create the Dockerfile. You can do this in any text editor. The file, which should be called Dockerfile, should start by specifying the base image:
FROM nickchase/rss-php-nginx:v1

Any container that is instantiated from this image is going to be listening on port 80, so we want to go ahead and overwrite that Nginx config file with the one we've edited:
FROM nickchase/rss-php-nginx:v1
COPY vhost.conf /opt/docker/etc/nginx/vhost.conf

Finally, we need to tell Docker that the container listens on port 88:
FROM nickchase/rss-php-nginx:v1
COPY vhost.conf /opt/docker/etc/nginx/vhost.conf
EXPOSE 88

Now we need to build the actual image. To do that, we'll use the docker build command:
# docker build -t nickchase/rss-php-nginx:v2 .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM nickchase/rss-php-nginx:v1
 ---> 208c4fc237bb
Step 2 : EXPOSE 88
 ---> Running in 23408def6214
 ---> 93a43c3df834
Removing intermediate container 23408def6214
Successfully built 93a43c3df834
Notice that we've specified the image name, along with a new tag (you can also create a completely new image), and the directory in which to find the Dockerfile and any supporting files.
Finally, push the new image to the hub:
# docker push nickchase/rss-php-nginx:v2

Test out your new image by instantiating it and pulling up the test page.
# docker run -dP nickchase/rss-php-nginx:v2
root@kubeclient:/home/ubuntu/tutorial# docker ps
CONTAINER ID        IMAGE                        COMMAND                  CREATED             STATUS              PORTS                                                                                           NAMES
04f4b384e8e2        nickchase/rss-php-nginx:v2   "/opt/docker/bin/entr"   8 seconds ago       Up 7 seconds        0.0.0.0:32829->80/tcp, 0.0.0.0:32828->88/tcp, 0.0.0.0:32827->443/tcp, 0.0.0.0:32826->9000/tcp   goofy_brahmagupta
13a423324d80        nickchase/rss-php-nginx:v1   "/opt/docker/bin/entr"   12 minutes ago      Up 12 minutes       0.0.0.0:32825->80/tcp, 0.0.0.0:32824->443/tcp, 0.0.0.0:32823->9000/tcp                          amazing_minsky

Notice that you now have a mapped port for port 88 you can call:
curl http://localhost:32828/rss.php
Other things you can do with Dockerfile
Docker defines a whole list of things you can do with a Dockerfile, such as:

.dockerignore
FROM
MAINTAINER
RUN
CMD
EXPOSE
ENV
COPY
ENTRYPOINT
VOLUME
USER
WORKDIR
ARG
ONBUILD
STOPSIGNAL
LABEL

As you can see, there's quite a bit of flexibility here. You can see the documentation for more information, and wsargent has published a good Dockerfile cheat sheet.
Moving forward
As you can see, creating new Docker images that can be used by you or by other developers is pretty straightforward. You have the option to manually create and commit changes, or to script them using a Dockerfile.
In our next tutorial, we'll look at using YAML to manage these containers with Kubernetes.
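As a preview, here's a minimal sketch of a Pod definition that runs the v2 image we just built; the Pod name and label are arbitrary, and the image tag assumes the build above:
---
apiVersion: v1
kind: Pod
metadata:
  name: rss-reader
  labels:
    app: rss
spec:
  containers:
    - name: rss-reader
      image: nickchase/rss-php-nginx:v2   # the image we pushed above
      ports:
        - containerPort: 88               # the port exposed in the Dockerfile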
Source: Mirantis

Docker acquires Infinit: a new data layer for distributed applications

The short version: Docker has acquired a fantastic company called Infinit. Using their technology, we will provide secure distributed storage out of the box, making it much easier to deploy stateful services and legacy enterprise applications on Docker. This will be delivered in a very open and modular design, so operators can easily integrate their existing storage systems, tune advanced settings, or simply disable the feature altogether. Oh, and we're going to open-source the whole thing.
The slightly longer version:
At Docker we believe that tools should adapt to the people using them, not the other way around. So we spend a lot of time searching for the most exciting and powerful software technology out there, then integrating it into simple and powerful tools. That is how we discovered a small team of distributed systems engineers based out of Paris, who were working on a next-generation distributed filesystem called Infinit. From the very first demo two things were immediately clear. First, Infinit is an incredible piece of technology with the potential to change how applications consume and produce data; Second, the Infinit and Docker teams were almost comically similar: same obsession with decentralized systems; same empathy for the needs of both developers and operators; same taste for simple and modular designs.
Today we are pleased to announce that Infinit is joining the Docker family. We will use the Infinit technology to address one of the most frequent Docker feature requests: distributed storage that “just works” out of the box, and can integrate existing storage system.
Docker users have been driving us in this direction for two reasons. The first is that application portability across any infrastructure has been a central driver for Docker usage. As developers rapidly evolve from single container applications to multi-container applications deployed on a distributed system, they want to make sure their entire application is portable across any type of infrastructure, whether on cloud or on premise, including for the stateful services it may include. Infinit will address that by providing a portable distributed storage engine, in the same way that our SocketPlane acquisition provided a portable distributed overlay networking implementation for Docker.
The second driver has been the rapid adoption of Docker to containerize stateful enterprise applications, as opposed to next-generation stateless apps. Enterprises expect their container platform to have a point of view about persistent storage, but at the same time they want the flexibility of working with their existing vendors like HPE, EMC, Nutanix etc. Infinit addresses this need as well.
With all of our acquisitions, whether it was Conductant, which enabled us to scale powerful large-scale web operations stacks, or SocketPlane, we’ve focused on extending our core capabilities and providing users with modular building blocks to work with and expand. Docker is committed to open sourcing Infinit’s solution in 2017 and adding it to the ever-expanding list of infrastructure plumbing projects that Docker has made available to the community, such as InfraKit, SwarmKit and Notary.
For those who are interested in learning more about the technology, you can watch Infinit CTO Quentin Hocquet’s presentation at Docker Distributed Systems Summit last month, and we have scheduled an online meetup where the Infinit founders will walk through the architecture and do a demo of their solution. A key aspect of the Infinit architecture is that it is completely decentralized. At Docker we believe that decentralization is the only path to creating software systems capable of scaling at Internet scale. With the help of the Infinit team, you should expect more and more decentralized designs coming out of Docker engineering.
A few words from Infinit CEO and founder Julien Quintard:
"We are thrilled to join forces with Docker. Docker has changed the way developers work in order to gain in agility. Stateful applications are the natural next step in this evolution. This is where Infinit comes into play, providing the Docker community with a default storage platform for applications to reliably store their state, be it for a database, logs, a website's media files and more."
A few details about the Infinit architecture:

Infinit's next-generation storage platform has been designed to be scalable and resilient while being highly customizable for container environments. The Infinit storage platform has the following characteristics:
- Software-based: can be deployed on any hardware, from legacy appliances to commodity bare metal, virtual machines or even containers.
- Programmatic: developers can easily automate the creation and deployment of multiple storage infrastructures, each tailored to the overlying application's needs through policy-based capabilities.
- Scalable: by relying on a decentralized architecture (i.e. peer-to-peer), Infinit does away with the leader/follower model, hence does not suffer from bottlenecks and single points of failure.
- Self-healing: Infinit's rebalancing mechanism allows the system to adapt to various types of failures, including Byzantine ones.
- Multi-purpose: the Infinit platform provides interfaces for block, object and file storage: NFS, SMB, AWS S3, OpenStack Swift, iSCSI, FUSE, etc.
Learn More

Sign up for the next Docker Online meetup on Docker and Infinit: Modern Storage Platform for Container Environments
Read about Docker and Infinit


Source: https://blog.docker.com/feed/