Learn Docker with our DockerCon 2017 Hands-On Labs

We’re excited to announce that DockerCon 2017 will feature a comprehensive set of hands-on labs. We first introduced hands-on labs at DockerCon EU in 2015, and they were also part of DockerCon 2016 last year in Seattle. This year we’re offering a broader range of topics that cover the interests of both developers and operations personnel on both Windows and Linux (see below for a full list).
These hands-on labs are designed to be self-paced, and are run from the attendee’s laptop. But don’t worry, all the infrastructure will be hosted again this year on Microsoft Azure. So, all you will need is a laptop capable of initiating a remote session over SSH (for Linux) or RDP (for Windows).

We’ll have a nice space set up in between the ecosystem expo and breakout rooms for you to work on the labs. There will be tables and stools along with power and wireless Internet access as well as lab proctors to answer questions. But, because of the way the labs are set up, you could also stop by, sign up, and take your laptop to a quiet spot and work on your own.
As you can tell, we’re pretty stoked about the labs, and we think you will be too.
See you in Austin!
DockerCon 2017 Hands-on Labs

Orchestration

In this lab you can play around with the container orchestration features of Docker. You will deploy a Dockerized application to a single host and test the application. You will then configure Docker Swarm Mode and deploy the same application across multiple hosts. You will then see how to scale the application and move the workload across different hosts easily.

Docker Networking

In this lab you will learn about key Docker Networking concepts. You will get your hands dirty by going through examples of a few basic concepts, learning about Bridge and Overlay networking, and finally learning about the Swarm Routing Mesh.

Modernize .NET Apps – for Devs.

A developer’s guide to app migration, showing how the Docker platform lets you update a monolithic application without doing a full rebuild. You’ll start with a sample app and see how to break components out into separate units, plumbing the units together with the Docker platform and the tried-and-trusted applications available on Docker Hub.

Modernize .NET Apps – for Ops.

An admin guide to migrating .NET apps to Docker images, showing how the build, ship, run workflow makes application maintenance fast and risk-free. You’ll start by migrating a sample app to Docker, and then learn how to upgrade the application, patch the Windows version the app uses, and patch the Windows version on the host – all with zero downtime.

Getting Started with Docker on Windows Server 2016

Get started with Docker on Windows, and learn why the world is moving to containers. You’ll start by exploring the Windows Docker images from Microsoft, then you’ll run some simple applications and learn how to scale apps across multiple servers running Docker in swarm mode.

Building a CI / CD Pipeline in Docker Cloud

In this lab you will construct a CI / CD pipeline using Docker Cloud. You’ll connect your GitHub account to Docker Cloud, and set up triggers so that when a change is pushed to GitHub, a new version of your Docker container is built.

Discovering and Deploying Certified Content with Docker Store

In this lab you will learn how to locate certified containers and plugins on Docker Store. You’ll then deploy both a certified Docker image and a certified Docker plugin.

Deploying Applications with Docker EE (Docker Datacenter)

In this lab you will deploy an application that takes advantage of some of the latest features of Docker EE (Docker Datacenter). The tutorial will lead you through building a compose file that can deploy a full application on UCP in one click. Capabilities that you will use in this application deployment include:

Docker services
Application scaling and failure mitigation
Layer 7 load balancing
Overlay networking
Application secrets
Application health checks
RBAC-based control and visibility with teams

Vulnerability Detection and Remediation with Docker EE (Docker Datacenter)

Application vulnerabilities are a continuous threat and must be continuously managed. In this tutorial we will show you how Docker Trusted Registry (DTR) can detect known vulnerabilities through image security scanning. You will detect a vulnerability in a running app, patch the app, and then apply a rolling update to gradually deploy the update across your cluster without causing any application downtime.

 
Learn More about DockerCon:

What’s new at DockerCon?
5 reasons to attend DockerCon
Convince your manager to send you to DockerCon
DockerCon for Windows containers practitioners 


The post Learn Docker with our DockerCon 2017 Hands-On Labs appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

How To De-Authorize All Of Those Twitter Apps You Forgot About

Hundreds of Twitter accounts were hacked with a swastika through a third party app, which means it’s probably time to check on your Twitter apps.

Earlier this morning, hackers took over hundreds of Twitter accounts and posted a message in Turkish that included swastikas and a “NaziHolland” hashtag. BBC North America, Reuters Japan, Nike Spain, and Duke University’s Twitter accounts were some of the targets.

A statement from Twitter revealed that the source of the hack was a third party app. The company claims users don’t need to take any action, but now might be a good time to review which apps you’ve authenticated with your Twitter login details and revoke apps that you no longer use.


It’s *very* simple. Go to twitter.com/settings/applications and review all of the apps you’ve authorized.


Then, click “Revoke access” (obvs).




Source: BuzzFeed

Gain confidence with Cloud Technical Engagement

Adopting and thriving on cloud will make or break many industries.
Using cloud to transform business is this generation’s professional challenge, but digital transformation doesn’t have to be confusing or daunting. Whether you are just learning about the business value of cloud, or you’re in the middle of your own transformation, there’s room to learn and gain confidence in the next step.
There are plenty of opportunities to learn more about cloud at InterConnect 2017, and Cloud Technical Engagement offers proven, technical expertise for turning cloud strategies into reality.
Planning the path ahead and getting cloud architectures and workloads right can be challenging. A common question is, “How can I achieve my business goals and leapfrog my competition while meeting security, networking and other requirements?”
My team and I hear these concerns in the hallways of the companies we work with, from cloud-native startups to Fortune 100 companies. The Cloud Technical Engagement team at IBM Cloud turns challenges into opportunities, ensuring that the companies and teams with whom we collaborate leave each engagement knowledgeable and confident in their next steps.
This year, the Cloud Technical Engagement team is bringing lessons learned from countless engagements and successes to InterConnect. With a 4,000 square-foot Cloud Confidence Center, more than 100 breakout sessions (with a third of them featuring a specific client success story), more than 100 technical hands-on labs, a full slate of cloud certification exams and a staff of 300-plus experts, my team is ready to help attendees adopt cloud and achieve its maximum value quickly.
See for yourself
The Cloud Confidence Center, at booth , is one of the largest areas in the entire concourse. It’s where attendees can come to ask questions and discuss plans about cloud and get answers from experts. From understanding to adopting, all the way to getting support, the team has a solution. Start a conversation with cloud adoption leaders, technical experts tasked with spearheading complex cloud adoption scenarios, who will come to understand your individual challenges and provide personalized recommendations based on your cloud journey.
Tell us your cloud story, and we’ll help you gain confidence in your next step and begin implementing a winning cloud strategy.
Attendees can also talk with technical leaders from the IBM Bluemix Garage and Cloud Professional Services, who can describe how to quickly transform like a startup or craft and implement winning strategies on cloud. Discuss the latest technologies and trends in cloud or see tried and proven implementation patterns in action with the solution architecture team. Learn how support programs are ready to help you succeed with cloud every step of the way.
Breakout sessions, labs, certifications, and more
If speaking with experts on the concourse floor is not for you, drop by one of the breakout sessions or labs. Nearly all our experts attending InterConnect will be presenting in a session, leading a boot camp, facilitating a hands-on lab or proctoring certification exams. You are bound to come across one of our experts, whether you know it or not.
Breakout sessions
From cloud adoption leaders

Innovation at speed as mainstream across an enterprise, with Bendigo and Adelaide Bank
IBM Bluemix Private Cloud for cloud service providers: Materna’s experiences and technical insight

From the Bluemix Garage

Pixxy’s startup journey: From great idea to validating an app in eight weeks
Experience IBM Design Thinking from the IBM Bluemix Garage

From Cloud Professional Services

Maximizing service management efficiency with an advanced correlation framework at Ford
How many rules? How do we estimate and plan that?: Planning for large-scale rules projects

From Solution Architecture

Top 10 performance best practices for designing and deploying enterprise applications on IBM Bluemix
IBM Cloud Architecture Center: Developed by our clients for our clients

Bootcamps and hands-on labs

Monitoring and diagnosing the performance problems of enterprise applications on IBM Bluemix
Creating open toolchains for IBM Bluemix
The practices of the Bluemix Garage developer: Extreme programming (for non-programmers)
Hands-on lab for IBM UrbanCode Deploy and IBM API Connect

Certifications

IBM Cloud Platform Solution Architect v2
IBM Cloud Platform Application Development v2
IBM Cloud Platform Advanced Application Development V1
Foundations of IBM DevOps V1
IBM API Connect v. 5.0.5 Solution Implementation
IBM WebSphere Application Server Network Deployment V9.0 Core Administration

Come talk with us
Meet the Cloud Technical Engagement team at IBM InterConnect to learn how to achieve value with cloud and get the confidence you need to transform. We look forward to seeing you, so don’t forget to register for InterConnect.
The post Gain confidence with Cloud Technical Engagement appeared first on news.
Source: Thoughts on Cloud

Docker to donate containerd to the Cloud Native Computing Foundation

Today, Docker announced its intention to donate the containerd project to the Cloud Native Computing Foundation (CNCF). Back in December 2016, Docker spun out its core container runtime functionality into a standalone component, incorporating it into a separate project called containerd, and announced we would be donating it to a neutral foundation early this year. Today we took a major step forward towards delivering on our commitment to the community by following the Cloud Native Computing Foundation process and presenting a proposal to the CNCF Technical Oversight Committee (TOC) for containerd to become a CNCF project: [overview][link], [proposal][link]. Given the consensus we have been building with the community, we are hopeful for a positive affirmation from the TOC before CloudNativeCon/KubeCon later this month.
Over the past 4 years, the adoption of containers with Docker has triggered an unprecedented wave of innovation in our industry: we believe that donating containerd to the CNCF will unlock a whole new phase of innovation and growth across the entire container ecosystem. containerd is designed as an independent component that can be embedded in a higher level system, to provide core container capabilities. Since our December announcement, we have focused efforts on identifying the right home for containerd, and making progress in implementing it and building consensus in the community.

Why is the CNCF the right place for containerd?

Given that containerd has been the heart of the Docker platform since April 2016 when it was included in Docker 1.11, it is already deployed on millions of machines; we wanted it to continue its development under the governance of an organization where a focus on containerization is  front and center.
Docker with containerd is already a key foundation for Kubernetes, which was the original project donated to the CNCF; Kubernetes 1.5 runs with Docker 1.10.3 to 1.12.3. Moving forward, we and key stakeholders from the Kubernetes project believe that containerd 1.0 can be a great core container runtime for Kubernetes.
Strong alignment with other CNCF projects (in addition to Kubernetes): containerd exposes an API using gRPC and exposes metrics in the Prometheus format. Both projects are part of CNCF already.

Technical progress and building consensus
In the past few months, the containerd team has been active implementing Phase 1 and Phase 2 of the containerd roadmap. You can find details about progress in containerd weekly development reports posted in the Github project.
At the end of February, Docker hosted the containerd summit with more than 50 members of the community from companies including Alibaba, AWS, Google, IBM, Microsoft, Rancher, Red Hat and VMware. The group gathered to learn more about containerd, get more information on containerd’s progress and discuss its design. You can watch some of the presentations in the containerd summit recap blog post: Deep Dive Into Containerd By Michael Crosby, Stephen Day, Derek McGowan And Mickael Laventure (Docker), Driving Containerd Operations With GRPC By Phil Estes (IBM) and Containerd And CRI By Tim Hockin (Google).
Tim Hockin from Google gave the best summary of the containerd summit.

containerd @thockin containerd is all we wanted from @docker in @kubernetesio and none of what we didn’t need: kudos to the team! pic.twitter.com/t26kRo2etJ
— chanezon (@chanezon) February 23, 2017

There is still a lot of work to finish implementing the containerd 1.0 roadmap, our target being June 2017. If you want to contribute to containerd, or embed it in your container system, you can find the project on GitHub. If you want to learn more about containerd progress, or discuss its design, join us in Berlin in March at CloudNativeCon/KubeCon 2017 (more details to follow), or in Austin at DockerCon on Day 4, Thursday, April 20th, when the Docker Internals Summit morning session will be the next containerd summit.
The Summit is a small collaborative event for container runtime and system experts who are actively maintaining, contributing or generally involved in the design and development of containerd and/or related projects. Simply submit a PR to add discussion topics to the agenda. If you have not signed up to attend the summit you can do so in this form.
Today we followed the CNCF process and presented a proposal to the CNCF Technical Oversight Committee (TOC) for containerd to become a CNCF project: [overview][link], [proposal][link]. If the CNCF TOC votes to accept our donation, we are excited for containerd to become part of the CNCF community!


Learn More about containerd:

Watch the containerd GitHub Repository
Follow @containerd on twitter
Sign up for the containerd summit on 4/21

The post Docker to donate containerd to the Cloud Native Computing Foundation appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Cloud-based managed firewall protects children using IBM Analytics

Many parents wouldn’t dream of letting their young children use an internet-connected device without some type of filter. It’s important to protect young web searchers from arriving at undesirable websites. An innocent enough search term could be a double entendre, leading impressionable minds to things better left unseen.
One approach for parents is to install software that restricts content on devices or a home router. Typically, if they’re not using a general rating scheme by age, parents need to know the sites they want to block ahead of time and set up their own block lists. This is time consuming, and chances are they’re going to miss something.
A firewall as a service for home use
ChildRouter from Cloud-Nanny offers an automated and intelligent way to filter web content with a firewall-as-a-service (FaaS) solution. Parents choose which categories of sites they will allow their kids to see, and Cloud-Nanny handles almost everything else. The solution decides whether to allow or block web requests without noticeable effect on the user’s browsing experience. Using IBM dashDB, the processing check makes a request to Cloud-Nanny’s database and returns a decision in less than 40 microseconds.
ChildRouter uses machine learning algorithms running in IBM Analytics for Apache Spark together with AlchemyAPI to classify and categorize content in nearly real time. If the system is unsure about a site, it checks with the parents. Using that input, the model learns and gets better at classifying that type of site in the future.
How it works
The ChildRouter is a hardware appliance and a software appliance in one. This is one differentiator from other solutions, which reside in the browser. ChildRouter works independently of the operating system or browser.

Through a computer interface, parents can assign a device to a specific child. This means they can switch devices very easily within the family. For example, if your younger child wants to watch a movie on an adult’s iPad, parents can go to the ChildRouter interface on the iPad, set it under the younger child’s account and all the secure settings are applied on that device. Parents can do this with a PlayStation, Xbox, Wii or any other internet-connected device. It is much like the kind of managed firewall that a company would have, but more affordable.
ChildRouter users’ security policies follow them wherever they take their devices because the managed firewall is in the cloud.
The road ahead
ChildRouter is just the tip of the iceberg. Cloud-Nanny’s FaaS solution has applications outside the home because it can also block dangerous software such as malware, adware and viruses. Phishing attempts don’t work; the system recognizes the domain name is not correlated with the IP address of the website and doesn’t let it pass.
Cloud-Nanny envisions schools and public WiFi using ChildRouter. For example, coffee shops that offer free WiFi can guarantee that there will be no risk to the user. Even Internet of Things (IoT) devices can be monitored for unwanted behavior.
Cloud-Nanny developed ChildRouter and got the solution up and running in less than one year with IBM Bluemix. Find out more about how it came together.
Read about other IBM clients who are poised for success using the IBM Cloud as their foundation here.
The post Cloud-based managed firewall protects children using IBM Analytics appeared first on news.
Source: Thoughts on Cloud

Online Meetup Recap: Docker Community Edition (CE) and Enterprise Edition (EE)

Last week, we announced Docker Enterprise Edition (EE) and Docker Community Edition (CE), new and renamed versions of the Docker platform. Docker EE, supported by Docker Inc., is available on certified operating systems and cloud providers and runs certified Containers and Plugins from Docker Store. For consistency, we renamed the free Docker products to Docker CE and adopted a new lifecycle and time-based versioning scheme for both Docker EE and CE.
We asked product manager and release captain Michael Friis to introduce Docker CE + EE to our online community. The online meetup took place on Wednesday, March 8th, and over 600 people RSVPed to hear Michael’s presentation live. He gave an overview of both editions and highlighted the big enhancements to the lifecycle, maintainability and upgradability of Docker.
In case you missed it, you can watch the recording and access Michael’s slides below.

 

 
Here are additional resources:

Register for the Webinar: Docker EE
Download Docker CE from Docker Store
Try Docker EE for free and view pricing plans
Learn More about Docker Certified program
Read the docs


The post Online Meetup Recap: Docker Community Edition (CE) and Enterprise Edition (EE) appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

4 trends in action at InterConnect 2017

There are so many reasons to love IBM InterConnect. Some of the brightest people in technology will come together to talk about the trends and advancements that are shaping our industry.
Each year, InterConnect serves as a barometer for where we are as cloud technologists. And this year, I can confidently say that the outlook is very good. Here are four topics I can’t wait to discuss, along with some can’t-miss sessions you should plan to attend.
1. Open technology
Those who know me are familiar with my passion for open technology. At every layer of the cloud, IBM works with its partners to create an open and interoperable cloud — from the data center to the platform to containers and beyond.
In Las Vegas, join me on March 19 at 4 PM Pacific Daylight Time (PDT) for the IBM Open Technology Summit. The summit will bring together leaders from some of the top open tech organizations, including OpenStack, Cloud Foundry, the Open Container Initiative, Linux Foundation and Cloud Native Computing Foundation.
(RELATED: Open Technology Summit returns to Las Vegas)
You can get the details on all the open tech sessions at InterConnect in the blog post Open tech @ InterConnect 2017 — Know before you go.
Featured sessions:

Open cloud architecture: Think you can out-innovate the best of the rest?
Open cloud demystified: Open source leaders tell all

2. Serverless
There’s no doubt that serverless architecture is changing the way we think about application development. But what is it? How does it work? Which workloads are suitable for a serverless architecture? If you want to keep your development teams ahead of the curve, you need to understand how serverless will shape the future. InterConnect is the perfect place to get up to speed.
Featured sessions:

Serverless architectures in banking: OpenWhisk on IBM Bluemix at Santander
Serverless, event-driven architectures and Bluemix OpenWhisk: Overview and IBM’s technical strategy

3. DevOps
For the past few years, IBM has been leading the DevOps charge. The IBM Bluemix Garage Method has become indispensable for fueling teams’ cultural, personal and technical transformations. The IBM approach blends cutting-edge methodologies and cloud-native approaches with proven automation patterns and existing on-premises development.
I hope you’ll join me in the Cloud Theater on Monday, March 20 at 2 PM to discuss DevOps: The new reality for enterprise transformation. Enjoy live demos, client stories and a talk from John Comas of NBCUniversal on how his team succeeded in DevOps.
Featured sessions:

Nationwide’s DevOps transformation
A journey you can relate to: DevOps at Rosetta Stone

4. Containers, microservices and Kubernetes
As dev teams race to find success with containers, microservices and Kubernetes, it’s critical for developers to understand how to connect the dots. At InterConnect, expect to hear plenty about the Open Container Initiative, the latest with the Cloud Native Compute Foundation and the single tenant cluster for Kubernetes. You’ll hear from the developers and architects behind IBM Bluemix Container Service, which brings these technologies together.
Featured sessions:

Architecture deep-dive into Docker containers, microservices and Kubernetes
From Docker to Kubernetes to the Cloud Native Computing Foundation: Open containers and community

These are some of the many exciting topics at InterConnect. Don’t miss this opportunity to train, network and learn about the future of cloud. If you haven’t signed up yet, register today.
The post 4 trends in action at InterConnect 2017 appeared first on news.
Source: Thoughts on Cloud

Docker Partners with Girl Develop It and Launches Pilot Class

Yesterday marked International Women’s Day, a global day celebrating the social, cultural, economic and political achievements of women. In that spirit, we’re thrilled to announce that we’re partnering with Girl Develop It, a national 501(c)3 nonprofit that provides affordable and judgment-free opportunities for adult women interested in learning web and software development through accessible in-person programs. Through welcoming, low-cost classes, GDI helps women of diverse backgrounds achieve their technology goals and build confidence in their careers and their everyday lives.

Girl Develop It deeply values community and supportive learning for women regardless of race, education levels, income and upbringing, and those are values we share. The Docker team is committed to ensuring that we create welcoming spaces for all members of the tech community. To proactively work towards this goal, we have launched several initiatives to strengthen the Docker community and promote diversity in the larger tech community, including our DockerCon Diversity Scholarship Program, which provides mentorship and a financial scholarship to attend DockerCon. PS – Are you a woman in tech who wants to attend DockerCon in Austin April 17th-20th? Use code  for 50% off your ticket!


Launching Pilot Class
In collaboration with the GDI curriculum team, we are developing an intro to Docker class that will introduce students to the Docker platform and take them through installing, integrating, and running it in their working environment. The pilot class will take place this spring in San Francisco and Austin.

“The Intro to Docker class is fully aligned with Girl Develop It’s mission to unlock the potential of women returning to the workforce, looking for a career change, or leveling up their skills,” said Executive Director Corinne Warnshuis. “A course on Docker has been requested by students and leaders in the community for some time. We’re thrilled to be working with Docker to provide a valuable introduction to their platform through our in-person, affordable, judgment-free program.”
Want to help Docker with these initiatives?
We’re always happy to connect with others who work towards improving opportunities for women and underrepresented groups throughout the global Docker ecosystem and promote inclusion in the larger tech community.
If you or your organization are interested in getting more involved, please contact us at community@docker.com. Let’s join forces and take our impact to the next level!
 


The post Docker Partners with Girl Develop It and Launches Pilot Class appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Kubernetes Replication Controller, Replica Set and Deployments: Understanding replication options

As a container management tool, Kubernetes was designed to orchestrate multiple containers and replication, and in fact there are currently several ways to do it. In this article, we’ll look at three options: Replication Controllers, Replica Sets, and Deployments.
What is Kubernetes replication for?
Before we go into how you would do replication, let’s talk about why.  Typically you would want to replicate your containers (and thereby your applications) for several reasons, including:

Reliability: By having multiple versions of an application, you prevent problems if one or more fails.  This is particularly true if the system replaces any containers that fail.
Load balancing: Having multiple versions of a container enables you to easily send traffic to different instances to prevent overloading of a single instance or node. This is something that Kubernetes does out of the box, making it extremely convenient.
Scaling: When load does become too much for the number of existing instances, Kubernetes enables you to easily scale up your application, adding additional instances as needed.

Replication is appropriate for numerous use cases, including:

Microservices-based applications: In these cases, multiple small applications provide very specific functionality.
Cloud native applications: Because cloud-native applications are based on the theory that any component can fail at any time, replication is a perfect environment for implementing them, as multiple instances are baked into the architecture.
Mobile applications: Mobile applications can often be architected so that the mobile client interacts with an isolated version of the server application.

Kubernetes has multiple ways in which you can implement replication.
Types of Kubernetes replication
In this article, we&8217;ll discuss three different forms of replication: the Replication Controller, Replica Sets, and Deployments.
Replication Controller
The Replication Controller is the original form of replication in Kubernetes.  It’s being replaced by Replica Sets, but it’s still in wide use, so it’s worth understanding what it is and how it works.

A Replication Controller is a structure that enables you to easily create multiple pods, then make sure that that number of pods always exists. If a pod does crash, the Replication Controller replaces it.

Replication Controllers also provide other benefits, such as the ability to scale the number of pods, and to update or delete multiple pods with a single command.
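
For example, scaling an existing Replication Controller or deleting it (together with its pods) is each a single command. As a hedged sketch (these commands assume the soaktestrc controller created below and are not run in this walkthrough):
# kubectl scale rc soaktestrc --replicas=5
# kubectl delete rc soaktestrc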

You can create a Replication Controller with an imperative command, or declaratively, from a file.  For example, create a new file called rc.yaml and add the following text:
apiVersion: v1
kind: ReplicationController
metadata:
 name: soaktestrc
spec:
 replicas: 3
 selector:
   app: soaktestrc
 template:
   metadata:
     name: soaktestrc
     labels:
       app: soaktestrc
   spec:
     containers:
      - name: soaktestrc
        image: nickchase/soaktest
        ports:
        - containerPort: 80
Most of this structure should look familiar from our discussion of Deployments; we’ve got the name of the actual Replication Controller (soaktestrc) and we’re designating that we should have 3 replicas, each of which is defined by the template.  The selector defines how we know which pods belong to this Replication Controller.

Now tell Kubernetes to create the Replication Controller based on that file:
# kubectl create -f rc.yaml
replicationcontroller “soaktestrc” created
Let’s take a look at what we have using the describe command:
# kubectl describe rc soaktestrc
Name:           soaktestrc
Namespace:      default
Image(s):       nickchase/soaktest
Selector:       app=soaktestrc
Labels:         app=soaktestrc
Replicas:       3 current / 3 desired
Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type   Reason                   Message
 ---------     --------        -----   ----                            -------------   --------------                  -------
 1m            1m              1       {replication-controller }                       Normal SuccessfulCreate Created pod: soaktestrc-g5snq
 1m            1m              1       {replication-controller }                       Normal SuccessfulCreate Created pod: soaktestrc-cws05
 1m            1m              1       {replication-controller }                       Normal SuccessfulCreate Created pod: soaktestrc-ro2bl
As you can see, we’ve got the Replication Controller, and there are 3 replicas of the 3 that we wanted.  All 3 of them are currently running.  You can also see the individual pods listed underneath, along with their names.  If you ask Kubernetes to show you the pods, you can see those same names show up:
# kubectl get pods
NAME               READY     STATUS    RESTARTS   AGE
soaktestrc-cws05   1/1       Running   0          3m
soaktestrc-g5snq   1/1       Running   0          3m
soaktestrc-ro2bl   1/1       Running   0          3m
Next we’ll look at Replica Sets, but first let’s clean up:
# kubectl delete rc soaktestrc
replicationcontroller “soaktestrc” deleted

# kubectl get pods
As you can see, when you delete the Replication Controller, you also delete all of the pods that it created.
Replica Sets
Replica Sets are a sort of hybrid, in that they are in some ways more powerful than Replication Controllers, and in others they are less powerful.

Replica Sets are declared in essentially the same way as Replication Controllers, except that they have more options for the selector.  For example, we could create a Replica Set like this:
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
 name: soaktestrs
spec:
 replicas: 3
 selector:
   matchLabels:
     app: soaktestrs
 template:
   metadata:
     labels:
       app: soaktestrs
       environment: dev
   spec:
     containers:
     - name: soaktestrs
       image: nickchase/soaktest
       ports:
       - containerPort: 80
In this case, it’s more or less the same as when we were creating the Replication Controller, except we’re using matchLabels instead of a bare label selector.  But we could just as easily have said:

spec:
 replicas: 3
 selector:
    matchExpressions:
      - {key: app, operator: In, values: [soaktestrs, soaktestrs, soaktest]}
      - {key: teir, operator: NotIn, values: [production]}
 template:
   metadata:

In this case, we’re looking at two different conditions:

The app label must be soaktestrs or soaktest
The tier label (spelled teir in the example above), if it exists, must not be production

Let’s go ahead and create the Replica Set and get a look at it:
# kubectl create -f replicaset.yaml
replicaset “soaktestrs” created

# kubectl describe rs soaktestrs
Name:           soaktestrs
Namespace:      default
Image(s):       nickchase/soaktest
Selector:       app in (soaktest,soaktestrs),teir notin (production)
Labels:         app=soaktestrs
Replicas:       3 current / 3 desired
Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type    Reason                   Message
 ---------     --------        -----   ----                            -------------   --------------                   -------
 1m            1m              1       {replicaset-controller }                        Normal  SuccessfulCreate Created pod: soaktestrs-it2hf
 1m            1m              1       {replicaset-controller }                       Normal  SuccessfulCreate Created pod: soaktestrs-kimmm
 1m            1m              1       {replicaset-controller }                        Normal  SuccessfulCreate Created pod: soaktestrs-8i4ra

# kubectl get pods
NAME               READY     STATUS    RESTARTS   AGE
soaktestrs-8i4ra   1/1       Running   0          1m
soaktestrs-it2hf   1/1       Running   0          1m
soaktestrs-kimmm   1/1       Running   0          1m
As you can see, the output is pretty much the same as for a Replication Controller (except for the selector), and for most intents and purposes, they are similar.  The major difference is that the rolling-update command works with Replication Controllers, but won’t work with a Replica Set.  This is because Replica Sets are meant to be used as the backend for Deployments.
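
For reference, here is a hedged sketch of what a rolling update of a Replication Controller looks like (not run in this article; the v2 image tag is hypothetical):
# kubectl rolling-update soaktestrc --image=nickchase/soaktest:v2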

Let’s clean up before we move on.
# kubectl delete rs soaktestrs
replicaset “soaktestrs” deleted

# kubectl get pods
Again, the pods that were created are deleted when we delete the Replica Set.
Deployments
Deployments are intended to replace Replication Controllers.  They provide the same replication functions (through Replica Sets) and also the ability to roll out changes and roll them back if necessary.
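
Rollouts and rollbacks are driven from the command line. As a hedged sketch (these commands are not run in this article, and they assume the soaktest Deployment created below):
# kubectl rollout history deployment/soaktest
# kubectl rollout undo deployment/soaktest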

Let’s create a simple Deployment using the same image we’ve been using.  First create a new file, deployment.yaml, and add the following:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
 name: soaktest
spec:
 replicas: 5
 template:
   metadata:
     labels:
       app: soaktest
   spec:
     containers:
      - name: soaktest
        image: nickchase/soaktest
        ports:
        - containerPort: 80
Now go ahead and create the Deployment:
# kubectl create -f deployment.yaml
deployment “soaktest” created
Now let’s go ahead and describe the Deployment:
# kubectl describe deployment soaktest
Name:                   soaktest
Namespace:              default
CreationTimestamp:      Sun, 05 Mar 2017 16:21:19 +0000
Labels:                 app=soaktest
Selector:               app=soaktest
Replicas:               5 updated | 5 total | 5 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:         <none>
NewReplicaSet:          soaktest-3914185155 (5/5 replicas created)
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type    Reason                   Message
 ---------     --------        -----   ----                            -------------   --------------                   -------
 38s           38s             1       {deployment-controller }                        Normal  ScalingReplicaSet        Scaled up replica set soaktest-3914185155 to 3
 36s           36s             1       {deployment-controller }                        Normal  ScalingReplicaSet        Scaled up replica set soaktest-3914185155 to 5
As you can see, rather than listing the individual pods, Kubernetes shows us the Replica Set.  Notice that the name of the Replica Set is the Deployment name and a hash value.
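
If you want to inspect that Replica Set directly, here is a hedged sketch (these commands are not run in this article):
# kubectl get rs
# kubectl describe rs soaktest-3914185155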

A complete discussion of updates is out of scope for this article – we’ll cover it in the future – but here are a couple of interesting things to note:

The StrategyType is RollingUpdate. This value can also be set to Recreate.
By default we have a minReadySeconds value of 0; we can change that value if we want pods to be up and running for a certain amount of time – say, to load resources – before they’re truly considered “ready”.
The RollingUpdateStrategy shows that we have a limit of 1 maxUnavailable – meaning that when we’re updating the Deployment, we can have up to 1 missing pod before it’s replaced, and 1 maxSurge, meaning we can have one extra pod as we scale the new pods back up.

As you can see, the Deployment is backed, in this case, by Replica Set soaktest-3914185155. If we go ahead and look at the list of actual pods…
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3914185155-7gyja   1/1       Running   0          2m
soaktest-3914185155-lrm20   1/1       Running   0          2m
soaktest-3914185155-o28px   1/1       Running   0          2m
soaktest-3914185155-ojzn8   1/1       Running   0          2m
soaktest-3914185155-r2pt7   1/1       Running   0          2m
… you can see that their names consist of the Replica Set name and an additional identifier.
Passing environment information: identifying a specific pod
Before we look at the different ways that we can affect replicas, let’s set up our deployment so that we can see what pod we’re actually hitting with a particular request.  To do that, the image we’ve been using displays the pod name when it outputs:
<?php
// Burn some CPU, then report which pod served the request
$limit = $_GET['limit'];
if (!isset($limit)) $limit = 250;
for ($i = 0; $i < $limit; $i++){
    $d = tan(atan(tan(atan(tan(atan(tan(atan(tan(atan(123456789.123456789))))))))));
}
echo "Pod ".$_SERVER['POD_NAME']." has finished!\n";
?>
As you can see, we’re displaying an environment variable, POD_NAME.  Since each container is essentially its own server, this will display the name of the pod when we execute the PHP.

Now we just have to pass that information to the pod.

We do that through the use of the Kubernetes Downward API, which lets us pass environment variables into the containers:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
 name: soaktest
spec:
 replicas: 3
 template:
   metadata:
     labels:
       app: soaktest
   spec:
     containers:
      - name: soaktest
        image: nickchase/soaktest
        ports:
        - containerPort: 80
        env:
        - name: POD_NAME
         valueFrom:
           fieldRef:
             fieldPath: metadata.name
As you can see, we’re passing an environment variable and assigning it a value from the pod’s metadata – specifically, the pod’s name.  (You can find more information on metadata here.)

So let’s go ahead and clean up the Deployment we created earlier…
# kubectl delete deployment soaktest
deployment “soaktest” deleted

# kubectl get pods
… and recreate it with the new definition:
# kubectl create -f deployment.yaml
deployment “soaktest” created
Next let’s go ahead and expose the pods to outside network requests so we can call the nginx server that is inside the containers:
# kubectl expose deployment soaktest --port=80 --target-port=80 --type=NodePort
service “soaktest” exposed
Now let’s describe the services we just created so we can find out what port the Deployment is listening on:
# kubectl describe services soaktest
Name:                   soaktest
Namespace:              default
Labels:                 app=soaktest
Selector:               app=soaktest
Type:                   NodePort
IP:                     11.1.32.105
Port:                   <unset> 80/TCP
NodePort:               <unset> 30800/TCP
Endpoints:              10.200.18.2:80,10.200.18.3:80,10.200.18.4:80 + 2 more…
Session Affinity:       None
No events.
As you can see, the NodePort is 30800 in this case; in your case it will be different, so make sure to check.  That means that each of the servers involved is listening on port 30800, and requests are being forwarded to port 80 of the containers.  That means we can call the PHP script with:
http://[HOST_NAME OR HOST_IP]:[PROVIDED PORT]
In my case, I’ve mapped the IPs of my Kubernetes hosts to hostnames to make my life easier, and the PHP file is the default for nginx, so I can simply call:
# curl http://kube-2:30800
Pod soaktest-3869910569-xnfme has finished!
So as you can see, this time the request was served by pod soaktest-3869910569-xnfme.
Recovering from crashes: Creating a fixed number of replicas
Now that we know everything is running, let’s take a look at some replication use cases.

The first thing we think of when it comes to replication is recovering from crashes. If there are 5 (or 50, or 500) copies of an application running, and one or more crashes, it’s not a catastrophe.  Kubernetes improves the situation further by ensuring that if a pod goes down, it’s replaced.

Let’s see this in action.  Start by refreshing our memory about the pods we’ve got running:
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-qqwqc   1/1       Running   0          11m
soaktest-3869910569-qu8k7   1/1       Running   0          11m
soaktest-3869910569-uzjxu   1/1       Running   0          11m
soaktest-3869910569-x6vmp   1/1       Running   0          11m
soaktest-3869910569-xnfme   1/1       Running   0          11m
If we repeatedly call the Deployment, we can see that we get different pods on a random basis:
# curl http://kube-2:30800
Pod soaktest-3869910569-xnfme has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-x6vmp has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-uzjxu has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-x6vmp has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-uzjxu has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-qu8k7 has finished!
To simulate a pod crashing, let’s go ahead and delete one:
# kubectl delete pod soaktest-3869910569-x6vmp
pod “soaktest-3869910569-x6vmp” deleted

# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-516kx   1/1       Running   0          18s
soaktest-3869910569-qqwqc   1/1       Running   0          27m
soaktest-3869910569-qu8k7   1/1       Running   0          27m
soaktest-3869910569-uzjxu   1/1       Running   0          27m
soaktest-3869910569-xnfme   1/1       Running   0          27m
As you can see, pod *x6vmp is gone, and it’s been replaced by *516kx.  (You can easily find the new pod by looking at the AGE column.)

If we once again call the Deployment, we can (eventually) see the new pod:
# curl http://kube-2:30800
Pod soaktest-3869910569-516kx has finished!
Now let’s look at changing the number of pods.
Scaling up or down: Manually changing the number of replicas
One common task is to scale up a Deployment in response to additional load. Kubernetes has autoscaling, but we’ll talk about that in another article.  For now, let’s look at how to do this task manually.
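
As a hedged aside, the autoscaler mentioned above can be attached with a single command; this is not run here, and it assumes the cluster exposes CPU metrics:
# kubectl autoscale deployment soaktest --min=3 --max=10 --cpu-percent=80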

The most straightforward way is to simply use the scale command:
# kubectl scale --replicas=7 deployment/soaktest
deployment “soaktest” scaled

# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-2w8i6   1/1       Running   0          6s
soaktest-3869910569-516kx   1/1       Running   0          11m
soaktest-3869910569-qqwqc   1/1       Running   0          39m
soaktest-3869910569-qu8k7   1/1       Running   0          39m
soaktest-3869910569-uzjxu   1/1       Running   0          39m
soaktest-3869910569-xnfme   1/1       Running   0          39m
soaktest-3869910569-z4rx9   1/1       Running   0          6s
In this case, we specify a new number of replicas, and Kubernetes adds enough to bring it to the desired level, as you can see.

One thing to keep in mind is that Kubernetes isn’t going to scale the Deployment down to be below the level at which you first started it up.  For example, if we try to scale back down to 4…
# kubectl scale --replicas=4 -f deployment.yaml
deployment “soaktest” scaled

# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-l5wx8   1/1       Running   0          11s
soaktest-3869910569-qqwqc   1/1       Running   0          40m
soaktest-3869910569-qu8k7   1/1       Running   0          40m
soaktest-3869910569-uzjxu   1/1       Running   0          40m
soaktest-3869910569-xnfme   1/1       Running   0          40m
… Kubernetes only brings us back down to 5, because that’s what was specified by the original deployment.
Deploying a new version: Replacing replicas by changing their label
Another way you can use deployments is to make use of the selector.  In other words, if a Deployment controls all the pods with a tier value of dev, changing a pod’s tier label to prod will remove it from the Deployment’s sphere of influence.

This mechanism enables you to selectively replace individual pods. For example, you might move pods from a dev environment to a production environment, or you might do a manual rolling update, updating the image, then removing some fraction of pods from the Deployment; when they’re replaced, it will be with the new image. If you’re happy with the changes, you can then replace the rest of the pods.

Let’s see this in action.  As you recall, this is our Deployment:
# kubectl describe deployment soaktest
Name:                   soaktest
Namespace:              default
CreationTimestamp:      Sun, 05 Mar 2017 19:31:04 +0000
Labels:                 app=soaktest
Selector:               app=soaktest
Replicas:               3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:         <none>
NewReplicaSet:          soaktest-3869910569 (3/3 replicas created)
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type              Reason                  Message
  ---------     --------        -----   ----                            -------------   --------  ------                  -------
 50s           50s             1       {deployment-controller }                        Normal            ScalingReplicaSet       Scaled up replica set soaktest-3869910569 to 3
And this is our Replica Set, with the pods it created:
# kubectl describe replicaset soaktest-3869910569
Name:           soaktest-3869910569
Namespace:      default
Image(s):       nickchase/soaktest
Selector:       app=soaktest,pod-template-hash=3869910569
Labels:         app=soaktest
               pod-template-hash=3869910569
Replicas:       5 current / 5 desired
Pods Status:    5 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type              Reason                  Message
  ---------     --------        -----   ----                            -------------   --------  ------                  -------
 2m            2m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-0577c
 2m            2m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-wje85
 2m            2m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-xuhwl
 1m            1m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-8cbo2
 1m            1m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-pwlm4
We can also get a list of pods by label:
# kubectl get pods -l app=soaktest
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-0577c   1/1       Running   0          7m
soaktest-3869910569-8cbo2   1/1       Running   0          6m
soaktest-3869910569-pwlm4   1/1       Running   0          6m
soaktest-3869910569-wje85   1/1       Running   0          7m
soaktest-3869910569-xuhwl   1/1       Running   0          7m
So those are our original soaktest pods; what if we wanted to add a new label?  We can do that on the command line:
# kubectl label pods soaktest-3869910569-xuhwl experimental=true
pod “soaktest-3869910569-xuhwl” labeled

# kubectl get pods -l experimental=true
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-xuhwl   1/1       Running   0          14m
So now we have one experimental pod.  But since the experimental label has nothing to do with the selector for the Deployment, it doesn’t affect anything.

So what if we change the value of the app label, which the Deployment is looking at?
# kubectl label pods soaktest-3869910569-wje85 app=notsoaktest --overwrite
pod “soaktest-3869910569-wje85″ labeled
In this case, we need to use the --overwrite flag because the app label already exists. Now let’s look at the existing pods.
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-0577c   1/1       Running   0          17m
soaktest-3869910569-4cedq   1/1       Running   0          4s
soaktest-3869910569-8cbo2   1/1       Running   0          16m
soaktest-3869910569-pwlm4   1/1       Running   0          16m
soaktest-3869910569-wje85   1/1       Running   0          17m
soaktest-3869910569-xuhwl   1/1       Running   0          17m
As you can see, we now have six pods instead of five, with a new pod having been created to replace *wje85, which was removed from the deployment. We can see the changes by requesting pods by label:
# kubectl get pods -l app=soaktest
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-0577c   1/1       Running   0          17m
soaktest-3869910569-4cedq   1/1       Running   0          20s
soaktest-3869910569-8cbo2   1/1       Running   0          16m
soaktest-3869910569-pwlm4   1/1       Running   0          16m
soaktest-3869910569-xuhwl   1/1       Running   0          17m
Now, there is one wrinkle that you have to take into account; because we’ve removed this pod from the Deployment, the Deployment no longer manages it.  So if we were to delete the Deployment…
# kubectl delete deployment soaktest
deployment “soaktest” deleted
The pod remains:
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-wje85   1/1       Running   0          19m
You can also easily replace all of the pods in a Deployment using the --all flag, as in:
# kubectl label pods --all app=notsoaktesteither --overwrite
But remember that you’ll have to delete them all manually!
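
A hedged sketch of that cleanup, reusing the label we just applied (not run in this article):
# kubectl delete pods -l app=notsoaktesteither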
Conclusion
Replication is a large part of Kubernetes’ purpose in life, so it’s no surprise that we’ve just scratched the surface of what it can do, and how to use it. It is useful for reliability purposes, for scalability, and even as a basis for your architecture.

What do you anticipate using replication for, and what would you like to know more about? Let us know in the comments!
The post Kubernetes Replication Controller, Replica Set and Deployments: Understanding replication options appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis