DockerCon 2017: Registration And CFP Now Open!

2017 tickets are now available! Take advantage of our lowest pricing today; tickets are limited and Early Bird pricing will sell out fast! We have extended DockerCon to a three-day conference with repeat sessions, hands-on labs and summits taking place on Thursday.
 
Register for DockerCon
 
The DockerCon 2017 Call for Proposals is open! Before you submit your cool hack or session proposals, take a look at our tips for getting selected below. We have narrowed the scope of sessions we’re looking for this year down to Cool Hacks and Use Cases. The deadline for submissions is January 14, 2017 at 11:59pm PST.
Submit a talk

Proposal Dos:

Submitting a Cool Hack:
Be novel
Show us your cool hacks and wow us with the interesting ways you can push the boundaries of the Docker stack. Check out past audience favorites like Serverless Docker, in-the-air updates of a drone with Docker and Resin.io, and building a UI for container management with Minecraft for inspiration.
Be clear
You do not have to have your hack ready by the submission deadline; rather, plan to clearly explain your hack, what makes it cool and the technologies you will use.
 
All Sessions:
To illustrate the tips below, check out the sample proposals with comments on why they stand out.
Clarify your message
The best talks leave the audience transformed: they come into the session thinking or doing things one way, and they leave armed to think about or solve a problem differently. This means that your session must have solid take-aways that the audience can apply to their use case. We ask for your three key take-aways in the CFP. Be specific about your audience transformation: instead of writing “the talk covers orchestration,” write “the talk will go through a step-by-step process for setting up swarm mode, providing the audience with a live example of how easy it is to use.” This is also a great place to highlight what you will leave them with, e.g. “Attendees will have full unrestricted access to all the code I’m going to write and open-source for the talk.”
Keep in line with the theme of the conference
Conferences are organized around a narrative and DockerCon is a user conference. That means we’re looking for proposals that will inform and delight attendees on the following topics:
Using Docker
Has Docker technology made you better at what you do? Is Docker an integral part of your company’s tech stack? Do you use Docker to do big things? Infuse your proposal with concrete, first-hand examples about your Docker usage, challenges and what you learned along the way, and inspire us on how to use Docker to accomplish real tasks.
Deep Dives
Propose code and demo heavy deep-dive sessions on what you have been able to transform with your use of the Docker stack. Entice your audience by going deeply technical and teach them how to do something they haven’t done.
Get specific
While you should submit a topic that is broad enough to cover a range of interests, sessions are a maximum of 40 minutes, so don’t try to boil the ocean. Stay focused on content that supports your take-aways so you can deliver a clear and compelling story.
Inspire us
Expand the conversation beyond technical details and inspire attendees to explore new uses. Past examples include Dockerizing CS50: From Cluster to Cloud to Appliance to Container, Shipping Manifests, Bill of Lading and Docker – Metadata for Containers and Stop Being Lazy and Test Your Software.
Be open
Has your company built tools used in production and/or testing? Remember the buzz when Netflix’s Chaos Monkey was released? If you have such a tool, revealing the recipe for your secret sauce is a great way to get your talk on the radar of DockerCon 2017 attendees.
Show that you are engaging
Having a great topic and talk is important, but equally important is execution and delivery. In the CFP, you have the opportunity to provide as much information as you can about presentations you have given. Videos, reviews, and slide decks will add to your credibility as an entertaining speaker.
 
Proposal Don’ts
These items are surefire ways of not getting past the initial review.
Sales pitches
No, just don’t. It’s acceptable to mention your company’s product during a presentation, but it should never be the focus of your talk.
Bulk submissions
If your proposal reads like a generic talk that has been submitted to a number of conferences, it will not pass the initial review. A talk can certainly be a polished version of an earlier talk, but the proposal should be tailored for DockerCon 2017.
Jargon
If the proposal contains jargon, it’s very likely that the presentation will too. Although DockerCon 2017 is a technology conference, we value the ability to explain and make your points in clear, easy-to-follow language.
So, what happens next?
After a proposal is submitted, it will be reviewed initially for content and format. Once past the initial review, a committee of reviewers from Docker and the industry will read the proposals and select the best ones. There are a limited number of speaking slots and we work to achieve a balance of presentations that will interest the Docker community.
The deadline for proposal submission is January 14, 2017 at 11:59pm PST.
We’re looking forward to reading your proposals!
Submit a talk


Source: https://blog.docker.com/feed/

Linux and Windows, living together, total chaos! (OK, Kubernetes 1.5)

There’s Linux, and there’s Windows. Windows apps don’t run on Linux. Linux apps don’t run on Windows. We’re told that. A lot. In fact, when Docker brought containers into prominence as a way to pack up your application’s dependencies and ship it “anywhere”, the definition of “anywhere” was quick to include “Linux”. Sure, there were Windows containers, but getting everything to work together was not particularly practical.
With today’s release of Kubernetes 1.5, that all changes.
Kubernetes 1.5 includes alpha support for both Windows Server Containers, a shared kernel model similar to Docker, and Hyper-V Containers, a single-kernel model that provides better isolation for multi-tenant environments (at the cost of greater latency). The end result is the ability to create a single Kubernetes cluster that includes not just Linux nodes running Linux containers or Windows nodes running Windows containers, but both side by side, for a truly hybrid experience. For example, a single service can have Pods using Windows Server Containers and other Pods using Linux containers.
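In practice, scheduling across the two operating systems works through ordinary node labels and nodeSelectors. Here is a minimal sketch, assuming the beta OS node label reported by kubelets of this era (beta.kubernetes.io/os) and an illustrative IIS image name:

# List the Windows nodes in the cluster
kubectl get nodes -l beta.kubernetes.io/os=windows

# Pin a pod to a Windows node with a nodeSelector
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: iis-example
spec:
  nodeSelector:
    beta.kubernetes.io/os: windows
  containers:
  - name: iis
    image: microsoft/iis
EOF

Linux pods are scheduled the same way with a Linux selector, so a single deployment pipeline can target both.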
Though it appears fully functional, this early release does have some limitations, including:

The Kubernetes master must still run on Linux due to dependencies in how it’s written. It’s possible to port it to Windows, but for the moment the team feels it’s better to focus their efforts on the client components.
There is no native support for network overlays for containers in Windows, so networking is limited to L3. (There are other solutions, but they’re not natively available.) The Kubernetes Windows SIG is working with Microsoft to solve these problems, however, and they hope to have made progress by Kubernetes 1.6’s release early next year.
Networking between Windows containers is more complicated because each container gets its own network namespace, so it&8217;s recommended that you use single-container pods for now.
Applications running in Windows Server Containers can run in any language supported by Windows. You CAN run .NET applications in Linux containers, but only if they’re written in .NET Core. .NET Core is also supported by the Nano Server operating system, which can be deployed on Windows Server Containers.

This release also includes support for IIS (which still runs 11.4% of the internet) and ASP.NET.
The development effort, which was led by Apprenda, was aimed at providing enterprises the means for making use of their existing Windows investments while still getting the advantages of Kubernetes. “Our strategy is to give our customers an enterprise hardened, broad Kubernetes solution. That isn’t possible without Windows support. We promised that we would drive support for Kubernetes on Windows Server 2016 in March and now we have reached the first milestone with the 1.5 release.” said Sinclair Schuller, CEO of Apprenda. “We will deliver full parity to Linux in orchestrating Windows Server Containers and Hyper-v containers so that organizations get a single control plane for their distributed apps.”
You can see a demo of Apprenda’s Senior Director of Products, Michael Michael, explaining the functionality here: https://www.youtube.com/embed/Tbrckccvxwg
Other features in Kubernetes 1.5
Kubernetes 1.5 also includes beta support for StatefulSets (formerly known as PetSets). Most of the objects that Kubernetes manages, such as ReplicaSets and Pods, are meant to be stateless, and thus “disposable” if they go down or become otherwise unreachable. In some situations, however, such as databases, cluster software (such as RabbitMQ clusters), or other traditionally stateful objects, this might not be feasible. StatefulSets provide a means for more concretely identifying resources so that connections can be maintained.
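As a rough sketch of how that looks (assuming the apps/v1beta1 API group, where StatefulSets live in the 1.5 release; the names here are illustrative), a StatefulSet pairs with a headless service so each replica gets a stable, ordered identity such as web-0 and web-1:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  clusterIP: None        # headless: gives pods stable DNS names like web-0.web
  selector:
    app: web
  ports:
  - port: 80
---
apiVersion: apps/v1beta1  # StatefulSet is beta in Kubernetes 1.5
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web        # the headless service that owns the pod identities
  replicas: 2             # pods are created in order as web-0, then web-1
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
EOF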
Kubernetes 1.5 also includes early work on making it possible for Kubernetes to deploy OCI-compliant containers.
Source: Mirantis

With DevOps, NBCUniversal massively reduces app release times

When you think about companies that employ DevOps practices, what comes to mind? Many people think of born-on-the-cloud startups built for market disruption and innovation, such as Uber.
Increasingly, long-standing enterprises are eyeing DevOps, too. Take, for example, NBCUniversal, which uses DevOps to streamline time to market for new applications, increase application code quality, and reduce development, testing and deployment costs.
The media conglomerate is known for leading the entertainment industry with network, mobile and web content, but the journey to consistent application delivery on a variety of devices hasn’t always been easy, given the many different tooling and process practices in use.
This is why NBCUniversal sought to engage a DevOps approach to application development across its complex enterprise of 17 business units. The goals were to improve code quality, streamline development and lower costs.
Scaling DevOps across a large, multi-speed enterprise
Embracing DevOps and supporting it with automation using the IBM UrbanCode tool suite has enabled NBCUniversal to shift from a previously chaotic culture toward a standardized, unified way of working.
Today, NBCUniversal uses IBM UrbanCode Build, UrbanCode Deploy, and IBM Development and Test Environment Services as the engine of DevOps, combining continuous integration, delivery, testing, feedback and monitoring into one automated workflow. By doing so, the company bridges process, culture and technology across the organization.
NBCUniversal Platform DevOps Manager John Comas explains in this recent webinar how introducing the concept of DevOps and supporting it with automation through the IBM UrbanCode tool suite has helped his company achieve benefits such as a 75 percent reduction in time required for new application releases. Being able to satisfy business requirements also helps the company more rapidly improve code quality.
“Because we’ve shown that the capabilities are flexible and expandable and that they allow us to deliver on a very tight timeline we’ve seen a six-fold increase in project volume, from 10 to more than 60 applications,” Comas said. “We’re providing a path to application production that engenders a level of confidence we’ve never had before.”
Want more details about NBCUniversal’s dramatic DevOps transformation? Download this case study to learn more.
Source: Thoughts on Cloud

“Dear Boss, I want to attend the OpenStack Summit”

Want to attend the OpenStack Summit Boston but need help with the right words for getting your trip approved? While we won’t write the whole thing for you, here’s a template to get you going. It’s up to you to decide how the Summit will help your team, but with free workshops and trainings, technical sessions, strategy talks and the opportunity to meet thousands of like-minded Stackers, we don’t think you’ll have a hard time finding an answer.
 
Dear [Boss],
All I want for the holidays is to attend the OpenStack Summit in Boston, May 8-11, 2017. The OpenStack Summit is the largest open source conference in North America, and the only one where I can get free OpenStack training, learn how to contribute code upstream to the project, and meet with other users to learn how they’ve been using OpenStack in production. The Summit is an opportunity for me to bring back knowledge about [Why you want to attend! What are you hoping to learn? What would benefit your team?] and share it with our team, while helping us get to know similar OpenStack-minded teams around the world (think 60+ countries and nearly 1,200 companies represented).
If I register before mid-March, I get early bird pricing: $600 USD for 4 days (plus an optional day of training). Early registration also allows me to RSVP for trainings and workshops as soon as they open (they always sell out!), or sign up to take the Certified OpenStack Administrator exam onsite.
At the OpenStack Summit Austin last year, over 7,800 attendees heard case studies from Superusers like AT&T and China Mobile, learned how teams are using containers and container orchestration like Kubernetes with OpenStack, and gave feedback to Project Teams about user needs for the upcoming software release. You can browse past Summit content at openstack.org/videos to see a sample of the conference talks.
The OpenStack Summit is the opportunity for me to expand my OpenStack knowledge, network and skills. Thanks for considering my request.
[Your Name]
Source: openstack.org

Learn Docker with More Hands-On Labs

Docker Labs is a rich resource for technical folks from any background to learn Docker. Since the last update on the Docker Blog, three new labs have been published covering Ruby, SQL Server and running a Registry on Windows. The self-paced, hands-on labs are a popular way for people to learn how to use Docker for specific scenarios, and it’s a resource which is growing with the help of the community.

New Labs

Ruby FAQ. You can Dockerize Ruby and Ruby on Rails apps, but there are considerations around versioning, dependency management and the server runtimes. The Ruby FAQ walks through some of the challenges in moving Ruby apps to Docker and proposes solutions. This lab is just beginning, and we would love to have your contributions.
SQL Server Lab. Microsoft maintains a SQL Server Express image on Docker Hub that runs in a Windows container. That image lets you attach an existing database to the container, but this lab walks you through a full development and deployment process, building a Docker image that packages up your own database schema.
Registry Windows Lab. Docker Registry is an open-source registry server for storing Docker images, which you can run in your own network. There’s already an official registry image for Linux, and this lab shows how to build and run a registry server in a Docker container on Windows.

Highlights
Some of the existing labs are worth calling out for the amount of information they provide. There are hours of learning here:

Docker Networking. Walks through a reference architecture for container networks, covering all the major networking concepts in detail, together with tutorials that demonstrate the concepts in action.
Swarm Mode. A beginner tutorial for native clustering which came in Docker 1.12. Explains how to run services, how Docker load-balances with the Routing Mesh, how to scale up and down, and how to safely remove nodes from the swarm.
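If you want a taste of what that lab covers before diving in, here is a minimal sketch using the Docker 1.12 CLI (the service name and image are just examples):

# Turn the current node into a swarm manager
docker swarm init

# Run a service with three replicas, published on port 80 via the Routing Mesh
docker service create --name web --replicas 3 -p 80:80 nginx

# Scale the service up, then drain a node before removing it from the swarm
docker service scale web=5
docker node update --availability drain <node-name>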

Fun Facts
In November, the labs repo on GitHub was viewed over 35,000 times. The most popular lab right now is Windows Containers.
The repo contains 244 commits, has been forked 296 times and starred by 1,388 GitHub users. The labs are the work of 35 contributors so far, including members of the community, Docker Captains and folks at Docker, Inc.
Among the labs there are 14 Dockerfiles and 102 pages of documentation, totalling over 77,000 words of Docker learning. It would take around 10 hours to read aloud all the labs!
How to Contribute
If you want to join the contributors, we’d love to add your work to the hands-on labs. Contributing is super easy. The documentation is written in GitHub-flavored markdown and there’s no mandated structure; just make your lab easy to follow and learn from.
Whether you want to add a new lab or update an existing one, the process is the same:

fork the docker/labs repo on GitHub;
clone your forked repo onto your machine;
add your awesome lab, or change an existing lab to make it even more awesome;
commit your changes (and make sure to sign your work);
submit a pull request – the labs maintainers will review, give feedback and publish!
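In command-line terms, that workflow looks something like this sketch (replace <you> with your GitHub username; committing with -s signs your work):

# Clone your fork of the docker/labs repo
git clone https://github.com/<you>/labs.git
cd labs

# Add your lab (or improve an existing one) on a topic branch
git checkout -b my-awesome-lab

# Sign your work when committing, then push and open a pull request
git commit -s -m "Add my awesome lab"
git push origin my-awesome-lab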


Source: https://blog.docker.com/feed/

Bailian: From Brick & Mortar to Brick & Click using OpenStack, DevOps

Being an established player in a market can definitely have its advantages. If you’re big enough, there are advantages of scale and barriers to entry that can make it possible to get comfortable in your market.
But what happens when the market flips on its ear?
This was the situation in which Shanghai-based Bailian Group found itself several years ago. China’s largest retailer, Bailian had a chain of more than 6,000 grocery and department stores spread all over the country.
Many of the brick-and-mortar company’s online competitors, such as JD.com, Suning, and Taobao, were introducing new sites and campaigns, and other traditional enterprises were moving to a multi-channel strategy. In 2014, Bailian decided to join them.
Chinese consumers bought close to $600 billion in online goods during 2015, a 33 percent increase from the prior year. The company knew that if it were going to survive, it had to solve several major problems:

Lack of agility: Some applications were not cloud native and took months to update, and waiting for a new server could take weeks, slowing development of new applications to a crawl.
Server underutilization: As much hardware as Bailian was using, there was still a huge amount of unused capacity that represented wasted money. It had to be streamlined and simplified.

The company set out to create the largest offline-to-online commerce platform in the industry, and to do that, it had to replace its existing IT infrastructure.
Choosing a platform
“Our transition from traditional brick and mortar to omni-channel business presented a great opportunity but an equally large challenge,” says Lu Qichuan, Director of IaaS and Cloud Integration Architecture, Bailian Group. “We needed a large scale IT platform that would enable our innovation and growth.” Thinking big, Lu and his team outlined four guiding principles for their new platform — fast development, dynamic scaling, uncompromised availability, and low cost of operations. These guidelines would support aggressive online growth targets through 2020.
And it wasn’t as though Bailian was a stranger to online commerce. The company was already running a Shanghai grocery delivery service on its existing IT platforms. But it knew that its existing applications, which were not yet cloud-ready, weren’t just complex to support; they also required long development cycles. Add to this the desire to not just port legacy applications such as supply chain logistics and data management to the new, more flexible infrastructure, but also to reclaim applications running on public cloud, and the way forward was clear: private cloud was what Bailian needed.
But which? The company had already zeroed in on many of the advantages of OpenStack. In particular, Bailian Group was impressed by the platform’s continuous innovation, with rich new feature sets every six months. The IT team also valued OpenStack’s lower licensing and maintenance cost, flexible architecture, and its complete elimination of vendor lock-in.
Finally, Bailian Group is a state-owned enterprise, so when China’s Ministry of Industry and Information Technology (MIIT) officially declared its support for the OpenStack ecosystem, the decision was straightforward.
Bailian Group then selected the OpenStack managed services of UMCloud, the Shanghai-based joint venture between Mirantis and UCloud, China’s largest independent public cloud provider. UMCloud’s charter to accelerate OpenStack adoption and embrace China’s “Internet Plus” national policy closely matched Bailian Group’s platform strategy. “We found OpenStack to be the most open and flexible cloud technology, and Mirantis and UMCloud to be the best partners to help us launch our new omni-channel commerce platform,” says Lu.
Start small, think big, scale fast
Bailian Group’s IT leaders worked with Mirantis and UMCloud to quickly build a 20-node MVP (minimum viable product) using the latest OpenStack distribution and Fuel software to deploy and manage all cloud components. The architecture included Ceph distributed storage, Neutron and OVS software defined networking, KVM virtualization, F5 load balancers, and the StackLight logging, monitoring and alerting (LMA) toolchain.

With this early success, the team quickly added capacity and will soon reach 300 nodes and 5000 VMs in this first phase of a three phase, five-year plan. Already a handful of applications are in production on the new platform, including one that manages offline-to-online store advertisement images using distributed Ceph storage. The team has also added new cloud application development tools and processes that foster a CI/CD and DevOps culture and increase innovation and time-to-market. This development environment includes a PaaS platform powered by the Murano application catalog and Sahara for data analysis.  
For phase two, the IT team anticipates expanding the OpenStack platform to 500 nodes across two data centers and more than 10,000 applications by the end of 2018. Phase two will also add a Services Oriented Architecture (SOA), microservices, and dynamic energy savings.
Embracing the strategy of starting small, thinking big, and scaling fast, phase three will extend to 3000 nodes and over 10 million virtual machines and applications by the end of 2020. Phase three will also add an industry cloud and SaaS services that drive prosperity of the retail business and show other retailers the processes and benefits of cloud platform innovation and offline to online digital transformation.
Interested in more information about how Bailian Group is making the most of OpenStack to solve its agility problems? Get the full case study.
Source: Mirantis

How do I create a new Docker image for my application?

In our previous series, we looked at how to deploy Kubernetes and create a cluster. We also looked at how to deploy an application on the cluster and configure OpenStack instances so you can access it. Now we’re going to get deeper into Kubernetes development by looking at creating new Docker images so you can deploy your own applications and make them available to other people.
How Docker images work
The first thing that we need to understand is how Docker images themselves work.
The key to a Docker image is that it’s a layered file system. In other words, if you start out with an image that’s just the operating system (say Ubuntu) and then add an application (say Nginx), you’ll wind up with something like this:

As you can see, the difference between IMAGE1 and IMAGE2 is just the application itself, and then IMAGE4 has the changes made on layers 3 and 4. So in order to create an image, you are basically starting with a base image and defining the changes to it.
Now, I hear you asking, “But what if I want to start from scratch?” Well, let’s define “from scratch” for a minute. Chances are you mean you want to start with a clean operating system and go from there. Well, in most cases there’s a base image for that, so you’re still starting with a base image. (If not, you can check out the instructions for creating a Docker base image.)
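If you want to see this layering for yourself, the docker history command lists the layers that make up a local image, most recent first (assuming you have already pulled the image in question):

# Show the layer stack of an image you have pulled, e.g. ubuntu
docker history ubuntu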
In general, there are two ways to create a new Docker image:

Create an image from an existing container: In this case, you start with an existing image, customize it with the changes you want, then build a new image from it.
Use a Dockerfile: In this case, you use a file of instructions, the Dockerfile, to specify the base image and the changes you want to make to it.

In this article, we’re going to look at both of those methods. Let’s start with creating a new image from an existing container.
Create from an existing container
In this example, we’re going to start with an image that includes the nginx web application server and PHP. To that, we’re going to add support for reading RSS files using an open source package called SimplePie. We’ll then make a new image out of the altered container.
Create the original container
The first thing we need to do is instantiate the original base image.

The very first step is to make sure that your system has Docker installed. If you followed our earlier series on running Kubernetes on OpenStack, you’ve already got this handled. If not, you can follow the instructions here to deploy Docker.
Next you’ll need to get the base image. In the case of this tutorial, that’s webdevops/php-nginx, which is part of the Docker Hub, so in order to “pull” it you’ll need to have a Docker Hub ID. If you don’t have one already, go to https://hub.docker.com and create a free account.
Go to the command line where you have Docker installed and log in to the Docker hub:
# docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don’t have a Docker ID, head over to https://hub.docker.com to create one.
Username: nickchase
Password:
Login Succeeded

We’re going to start with the base image. Instantiate webdevops/php-nginx:
# docker run -dP webdevops/php-nginx
The -dP flag makes sure that the container runs in the background, and that the ports on which it listens are made available.
Make sure the container is running:
# docker ps
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                                                                    NAMES
1311034ca7dc        webdevops/php-nginx   "/opt/docker/bin/entr"   35 seconds ago      Up 34 seconds       0.0.0.0:32822->80/tcp, 0.0.0.0:32821->443/tcp, 0.0.0.0:32820->9000/tcp   small_bassi

A couple of notes here. First off, because we didn’t specify a particular name for the container, Docker assigned one. In this example, it’s small_bassi. Second, notice that there are 3 ports that are open: 80, 443, and 9000, and that they’ve been mapped to other ports (in this case 32822, 32821 and 32820, respectively; on your machine these ports will be different). This makes it possible for multiple containers to be “listening” on the same port on the same machine. So if we were to try and access a web page being hosted by this container, we’d do it by accessing:

http://localhost:32822

So far, though, there aren’t any pages to access; let’s fix that.
Create a file on the container
In order for us to test this container, we need to create a sample PHP file. We’ll do that by logging into the container and creating a file.

Login to the container
# docker exec -it small_bassi /bin/bash
root@1311034ca7dc:/#
Using exec with the -it switch creates an interactive session for you to execute commands directly within the container. In this case, we’re executing /bin/bash, so we can do whatever else we need.
The document root for the nginx server in this container is at /app, so go ahead and create the /app/index.php file:
vi /app/index.php

Add a simple PHP routine to the file and save it:
<?php
for ($i = 0; $i < 10; $i++){
    echo "Item number ".$i."\n";
}
?>

Now exit the container to go back to the main command line:
root@1311034ca7dc:/# exit

Now let’s test the page. To do that, execute a simple curl command:
# curl http://localhost:32822/index.php
Item number 0
Item number 1
Item number 2
Item number 3
Item number 4
Item number 5
Item number 6
Item number 7
Item number 8
Item number 9

Now that we know PHP is working, it’s time to go ahead and add RSS.
Make changes to the container
Now we can go ahead and add RSS support using the SimplePie package. To do that, we’ll simply download it to the container and install it.

The first step is to log back into the container:
# docker exec -it small_bassi /bin/bash
root@1311034ca7dc:/#

Next go ahead and use curl to download the package, saving it as a zip file:
root@1311034ca7dc:/# curl https://codeload.github.com/simplepie/simplepie/zip/1.4.3 > simplepie1.4.3.zip

Now you need to install it.  To do that, unzip the package, create the appropriate directories, and copy the necessary files into them:
root@1311034ca7dc:/# unzip simplepie1.4.3.zip
root@1311034ca7dc:/# mkdir /app/php
root@1311034ca7dc:/# mkdir /app/cache
root@1311034ca7dc:/# mkdir /app/php/library
root@1311034ca7dc:/# cp -r s*/library/* /app/php/library/.
root@1311034ca7dc:/# cp s*/autoloader.php /app/php/.
root@1311034ca7dc:/# chmod 777 /app/cache

Now we just need a test page to make sure that it’s working. Create a new file in the /app directory:
root@1311034ca7dc:/# vi /app/rss.php

Now add the sample file. (This file is excerpted from the SimplePie website, but I’ve cut it down for brevity’s sake, since it’s not really the focus of what we’re doing. Please see the original version for comments, etc.)
<?php
require_once('php/autoloader.php');
$feed = new SimplePie();
$feed->set_feed_url("http://rss.cnn.com/rss/edition.rss");
$feed->init();
$feed->handle_content_type();
?>
<html>
<head><title>Sample SimplePie Page</title></head>
<body>
<div class="header">
<h1><a href="<?php echo $feed->get_permalink(); ?>"><?php echo $feed->get_title(); ?></a></h1>
<p><?php echo $feed->get_description(); ?></p>
</div>
<?php foreach ($feed->get_items() as $item): ?>
<div class="item">
<h2><a href="<?php echo $item->get_permalink(); ?>"><?php echo $item->get_title(); ?></a></h2>
<p><?php echo $item->get_description(); ?></p>
<p><small>Posted on <?php echo $item->get_date('j F Y | g:i a'); ?></small></p>
</div>
<?php endforeach; ?>
</body>
</html>

Exit the container:
root@1311034ca7dc:/# exit

Now let’s make sure it’s working. Remember, we need to access the container on the alternate port (check docker ps to see what ports you need to use):
# curl http://localhost:32822/rss.php
<html>
<head><title>Sample SimplePie Page</title></head>
<body>
       <div class="header">
               <h1><a href="http://www.cnn.com/intl_index.html">CNN.com – RSS Channel – Intl Homepage – News</a></h1>
               <p>CNN.com delivers up-to-the-minute news and information on the latest top stories, weather, entertainment, politics and more.</p>
       </div>

Now that we have a working container, we can turn it into a new image.
Create the new image
Now that we have a working container, we want to turn it into an image and push it to the Docker Hub so we can use it. The name you’ll use for your container typically will have three parts:
[username]/[imagename]:[tags]
For example, my Docker Hub username is nickchase, so I am going to name version 1 of my new RSS-ified container
nickchase/rss-php-nginx:v1

Now, if, when we first started talking about differences between layers, you started to think about version control systems, you’re right. The first step in creating a new image is to commit the changes that we’ve already made, adding a message about the changes and specifying the author, as in:
docker commit -m "Message" -a "Author Name" [containername] [imagename]
So in my case, that will be:
# docker commit -m "Added RSS" -a "Nick Chase" small_bassi nickchase/rss-php-nginx:v1
sha256:148f1dbceb292b38b40ae6cb7f12f096acf95d85bb3ead40e07d6b1621ad529e

Next we want to go ahead and push the new image to the Docker Hub so we can use it:
# docker push nickchase/rss-php-nginx:v1
The push refers to a repository [docker.io/nickchase/rss-php-nginx]
69671563c949: Pushed
3e78222b8621: Pushed
5b33e5939134: Pushed
54798bfbf935: Pushed
b8c21f8faea9: Pushed

v1: digest: sha256:48da56a77fe4ecff4917121365d8e0ce615ebbdfe31f48a996255f5592894e2b size: 3667

Now if you list the images that are available, you should see it in the list:
# docker images
REPOSITORY                TAG                 IMAGE ID            CREATED             SIZE
nickchase/rss-php-nginx   v1                  148f1dbceb29        11 minutes ago      677 MB
nginx                     latest              abf312888d13        3 days ago          181.5 MB
webdevops/php-nginx       latest              93037e4c8998        3 days ago          675.4 MB
ubuntu                    latest              e4415b714b62        2 weeks ago         128.1 MB
hello-world               latest              c54a2cc56cbb        5 months ago        1.848 kB

Now let’s go ahead and test it. We’ll start by stopping and removing the original container, so we can remove the local copy of the image:
# docker stop small_bassi
# docker rm small_bassi

Now we can remove the image itself:
# docker rmi nickchase/rss-php-nginx:v1
Untagged: nickchase/rss-php-nginx:v1
Untagged: nickchase/rss-php-nginx@sha256:0a33c7a25a6d2db4b82517b039e9e21a77e5e2262206fdcac8b96f5afa64d96c
Deleted: sha256:208c4fc237bb6b2d3ef8fa16a78e105d80d00d75fe0792e1dcc77aa0835455e3
Deleted: sha256:d7de4d9c00136e2852c65e228944a3dea3712a4e7bcb477eb7393cd309be179b

If you run docker images again, you’ll see that it’s gone:
# docker images
REPOSITORY                TAG                 IMAGE ID            CREATED             SIZE
nginx                     latest              abf312888d13        3 days ago          181.5 MB
webdevops/php-nginx       latest              93037e4c8998        3 days ago          675.4 MB
ubuntu                    latest              e4415b714b62        2 weeks ago         128.1 MB
hello-world               latest              c54a2cc56cbb        5 months ago        1.848 kB

Now if you create a new container based on this image, you will see it get downloaded from the Docker Hub:
# docker run -dP nickchase/rss-php-nginx:v1

Finally, test the new container by getting the new port…
# docker ps
CONTAINER ID        IMAGE                        COMMAND                  CREATED             STATUS              PORTS                                                                    NAMES
13a423324d80        nickchase/rss-php-nginx:v1   "/opt/docker/bin/entr"   6 seconds ago       Up 5 seconds        0.0.0.0:32825->80/tcp, 0.0.0.0:32824->443/tcp, 0.0.0.0:32823->9000/tcp   goofy_brahmagupta

… and accessing the rss.php file.
curl http://localhost:32825/rss.php

You should see the same output as before.
Use a Dockerfile
Manually creating a new image from an existing container gives you a lot of control, but it does have one downside: if the base image gets updated, you’re not necessarily going to have the benefits of those changes.
For example, what if I wanted a container that always takes the latest version of the Ubuntu operating system and builds on that? The previous method doesn’t give us that advantage.
Instead, we can use a method called the Dockerfile, which enables us to specify a particular version of a base image, or specify that we want to always use the latest version.
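For instance, a Dockerfile that starts with the following lines will be rebuilt on top of whatever the newest Ubuntu image is at build time (a quick sketch; the package installed here is just an example):

FROM ubuntu:latest    # re-resolved to the newest Ubuntu image on every build
RUN apt-get update && apt-get install -y nginx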
For example, let’s say we want to create a version of the rss-php-nginx container that starts with v1 but serves on port 88 (rather than the traditional 80). To do that, we basically want to perform three steps:

Start with the desired version of the base image.
Tell Nginx to listen on port 88 rather than 80.
Let Docker know that the container listens on port 88.

We’ll do that by creating a local context, downloading a local copy of the configuration file, updating it, and creating a Dockerfile that includes instructions for building the new container.
Let’s get that set up.

Create a working directory in which to build your new container.  What you call it is completely up to you. I called mine k8stutorial.
From the command line, in your local context, start by instantiating the image so we have something to work from:
# docker run -dP nickchase/rss-php-nginx:v1

Now get a copy of the existing vhost.conf file. In this particular container, you can find it at /opt/docker/etc/nginx/vhost.conf.  
# docker cp amazing_minsky:/opt/docker/etc/nginx/vhost.conf .
Note that I’ve started a new container named amazing_minsky to replace small_bassi. At this point you should have a copy of vhost.conf in your local directory; in my case, that’s ~/k8stutorial/vhost.conf.
You now have a local copy of the vhost.conf file.  Using a text editor, open the file and specify that nginx should be listening on port 88 rather than port 80:
server {
   listen   88 default_server;
   listen 8000 default_server;
   server_name  _ *.vm docker;

Next we want to go ahead and create the Dockerfile.  You can do this in any text editor.  The file, which should be called Dockerfile, should start by specifying the base image:
FROM nickchase/rss-php-nginx:v1

Any container that is instantiated from this image is going to be listening on port 80, so we want to go ahead and overwrite that Nginx config file with the one we’ve edited:
FROM nickchase/rss-php-nginx:v1
COPY vhost.conf /opt/docker/etc/nginx/vhost.conf

Finally, we need to tell Docker that the container listens on port 88:
FROM nickchase/rss-php-nginx:v1
COPY vhost.conf /opt/docker/etc/nginx/vhost.conf
EXPOSE 88

Now we need to build the actual image. To do that, we’ll use the docker build command:
# docker build -t nickchase/rss-php-nginx:v2 .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM nickchase/rss-php-nginx:v1
 ---> 208c4fc237bb
Step 2 : EXPOSE 88
 ---> Running in 23408def6214
 ---> 93a43c3df834
Removing intermediate container 23408def6214
Successfully built 93a43c3df834
Notice that we’ve specified the image name, along with a new tag (you can also create a completely new image), and the directory in which to find the Dockerfile and any supporting files.
Finally, push the new image to the hub:
# docker push nickchase/rss-php-nginx:v2

Test out your new image by instantiating it and pulling up the test page.
# docker run -dP nickchase/rss-php-nginx:v2
root@kubeclient:/home/ubuntu/tutorial# docker ps
CONTAINER ID        IMAGE                        COMMAND                  CREATED             STATUS              PORTS                                                                                           NAMES
04f4b384e8e2        nickchase/rss-php-nginx:v2   "/opt/docker/bin/entr"   8 seconds ago       Up 7 seconds        0.0.0.0:32829->80/tcp, 0.0.0.0:32828->88/tcp, 0.0.0.0:32827->443/tcp, 0.0.0.0:32826->9000/tcp   goofy_brahmagupta
13a423324d80        nickchase/rss-php-nginx:v1   "/opt/docker/bin/entr"   12 minutes ago      Up 12 minutes       0.0.0.0:32825->80/tcp, 0.0.0.0:32824->443/tcp, 0.0.0.0:32823->9000/tcp                          amazing_minsky

Notice that you now have a mapped port for port 88, which you can call:
curl http://localhost:32828/rss.php
Other things you can do with Dockerfile
Docker defines a whole list of things you can do with a Dockerfile, such as:

.dockerignore
FROM
MAINTAINER
RUN
CMD
EXPOSE
ENV
COPY
ENTRYPOINT
VOLUME
USER
WORKDIR
ARG
ONBUILD
STOPSIGNAL
LABEL

As you can see, there’s quite a bit of flexibility here. You can see the documentation for more information, and wsargent has published a good Dockerfile cheat sheet.
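To give a feel for how several of these fit together, here is a short, illustrative Dockerfile (the file names, paths and packages are hypothetical):

FROM ubuntu:16.04                                # base image to build on
LABEL maintainer="you@example.com"               # image metadata
ENV APP_HOME=/app                                # variable available at build and run time
RUN apt-get update && apt-get install -y nginx   # run a command, committed as a new layer
COPY index.html $APP_HOME/                       # copy a file from the build context
WORKDIR $APP_HOME                                # working directory for later instructions
EXPOSE 80                                        # document the listening port
CMD ["nginx", "-g", "daemon off;"]               # default command at container start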
Moving forward
As you can see, creating new Docker images that can be used by you or by other developers is pretty straightforward.  You have the option to manually create and commit changes, or to script them using a Dockerfile.
In our next tutorial, we’ll look at using YAML to manage these containers with Kubernetes.
Source: Mirantis

Docker acquires Infinit: a new data layer for distributed applications

The short version: Docker has acquired a fantastic company called Infinit. Using their technology, we will provide secure distributed storage out of the box, making it much easier to deploy stateful services and legacy enterprise applications on Docker. This will be delivered in a very open and modular design, so operators can easily integrate their existing storage systems, tune advanced settings, or simply disable the feature altogether. Oh, and we’re going to open-source the whole thing.
The slightly longer version:
At Docker we believe that tools should adapt to the people using them, not the other way around. So we spend a lot of time searching for the most exciting and powerful software technology out there, then integrating it into simple and powerful tools. That is how we discovered a small team of distributed systems engineers based out of Paris, who were working on a next-generation distributed filesystem called Infinit. From the very first demo two things were immediately clear. First, Infinit is an incredible piece of technology with the potential to change how applications consume and produce data; second, the Infinit and Docker teams were almost comically similar: same obsession with decentralized systems; same empathy for the needs of both developers and operators; same taste for simple and modular designs.
Today we are pleased to announce that Infinit is joining the Docker family. We will use the Infinit technology to address one of the most frequent Docker feature requests: distributed storage that “just works” out of the box, and can integrate existing storage systems.
Docker users have been driving us in this direction for two reasons. The first is that application portability across any infrastructure has been a central driver for Docker usage. As developers rapidly evolve from single container applications to multi-container applications deployed on a distributed system, they want to make sure their entire application is portable across any type of infrastructure, whether on cloud or on premise, including for the stateful services it may include. Infinit will address that by providing a portable distributed storage engine, in the same way that our SocketPlane acquisition provided a portable distributed overlay networking implementation for Docker.
The second driver has been the rapid adoption of Docker to containerize stateful enterprise applications, as opposed to next-generation stateless apps. Enterprises expect their container platform to have a point of view about persistent storage, but at the same time they want the flexibility of working with their existing vendors like HPE, EMC, Nutanix etc. Infinit addresses this need as well.
With all of our acquisitions, whether it was Conductant, which enabled us to scale powerful large-scale web operations stacks, or SocketPlane, we’ve focused on extending our core capabilities and providing users with modular building blocks to work with and expand. Docker is committed to open sourcing Infinit’s solution in 2017 and adding it to the ever-expanding list of infrastructure plumbing projects that Docker has made available to the community, such as InfraKit, SwarmKit and Notary.
For those who are interested in learning more about the technology, you can watch Infinit CTO Quentin Hocquet’s presentation at Docker Distributed Systems Summit last month, and we have scheduled an online meetup where the Infinit founders will walk through the architecture and do a demo of their solution. A key aspect of the Infinit architecture is that it is completely decentralized. At Docker we believe that decentralization is the only path to creating software systems capable of scaling at Internet scale. With the help of the Infinit team, you should expect more and more decentralized designs coming out of Docker engineering.
A few words from Infinit CEO and founder Julien Quintard:
“We are thrilled to join forces with Docker. Docker has changed the way developers work in order to gain in agility. Stateful applications is the natural next step in this evolution. This is where Infinit comes into play, providing the Docker community with a default storage platform for applications to reliably store their state, be it for a database, logs, a website’s media files and more.”
A few details about the Infinit architecture:

Infinit’s next-generation storage platform has been designed to be scalable and resilient while being highly customizable for container environments. The Infinit storage platform has the following characteristics:
– Software-based: can be deployed on any hardware, from legacy appliances to commodity bare metal, virtual machines or even containers.
– Programmatic: developers can easily automate the creation and deployment of multiple storage infrastructures, each tailored to the overlying application’s needs through policy-based capabilities.
– Scalable: by relying on a decentralized architecture (i.e. peer-to-peer), Infinit does away with the leader/follower model, and hence does not suffer from bottlenecks and single points of failure.
– Self-healing: Infinit’s rebalancing mechanism allows the system to adapt to various types of failures, including Byzantine.
– Multi-purpose: the Infinit platform provides interfaces for block, object and file storage: NFS, SMB, AWS S3, OpenStack Swift, iSCSI, FUSE, etc.
 
Learn More

Sign up for the next Docker Online meetup on Docker and Infinit: Modern Storage Platform for Container Environments
Read about Docker and Infinit


Source: https://blog.docker.com/feed/

Global Mentor Week: Thank you Docker Community!

Danke, рақмет сізге, tak, धन्यवाद, cảm ơn bạn, شكرا, mulțumesc, Gracias, merci, asante, ευχαριστώ, thank you community for an incredible Docker Global Mentor Week! From Tokyo to Sao Paulo, Kisumu to Copenhagen and Ottawa to Manila, it was so awesome to see the energy from the community coming together to celebrate and learn about Docker!

Over 7,500 people registered to attend one of the 110 mentor week events across 5 continents! A huge thank you to all the Docker meetup organizers who worked hard to make these special events happen and offer Docker beginners and intermediate users an opportunity to participate in Docker courses.
None of this would have been possible without the support (and expertise!) of the 500+ advanced Docker users who signed up as mentors to help newcomers.
Whether it was mentors helping attendees, newcomers pushing their first image to Docker Hub or attendees mingling and having a good time, everyone came together to make mentor week a success as you can see on social media and the Facebook photo album.
Here are some of our favorite tweets from the meetups:
 

@Docker #LearnDocker at Grenoble France 17Nov2016 @HPE_FR pic.twitter.com/8RSxXUWa4k
— Stephane Bureau (@SBUCloud) November 18, 2016

Awesome turnout at tonight’s @DockerNYC #learndocker event! We will be hosting more of these – Keep tabs on meetup: https://t.co/dT99EOs4C9 pic.twitter.com/9lZocCjMPb
— Luisa M. Morales (@luisamariethm) November 18, 2016

And finally… “Tada” Docker Mentor Week #learndocker pic.twitter.com/6kzedIoGyB
— Károly Kass (@karolykassjr) November 17, 2016

 
Learn Docker
In case you weren’t able to attend a local event, the five courses are now available to everyone online here: https://training.docker.com/instructor-led-training
Docker for Developers Courses
Developer – Beginner Linux Containers
This tutorial will guide you through the steps involved in setting up your computer, running your first containers, deploying a web application with Docker and running a multi-container voting app with Docker Compose.
Developer – Beginner Windows Containers
This tutorial will walk you through setting up your environment, running basic containers and creating a Docker Compose multi-container application using Windows containers.
Developer – Intermediate (both Linux and Windows)
This tutorial teaches you how to network your containers, how you can manage data inside and between your containers and how to use Docker Cloud to build your image from source and use developer tools and programming languages with Docker.
Docker for Operations Courses
These courses are step-by-step guides where you will build your own Docker cluster and use it to deploy a sample application. We have two options for you to create your own cluster.

Using play-with-docker

Play With Docker is a Docker playground that was built by two amazing Docker captains: Marcos Nils and Jonathan Leibiusky during the Docker Distributed Systems Summit in Berlin last October.
Play with Docker (aka PWD) gives you the experience of having a free Alpine Linux Virtual Machine in the cloud where you can build and run Docker containers and even create clusters with Docker features like Swarm Mode.
Under the hood DIND or Docker-in-Docker is used to give the effect of multiple VMs/PCs.
To get started, go to http://play-with-docker.com/ and click on ADD NEW INSTANCE five times. You will get five “docker-in-docker” containers, all on a private network. These are your five nodes for the workshop!
When the instructions in the slides tell you to “SSH on node X”, just go to the tab corresponding to that node.
The nodes are not directly reachable from outside, so when the slides tell you to “connect to the IP address of your node on port XYZ” you will have to use a different method.
We suggest using “supergrok”, a container offering an NGINX+ngrok combo to expose your services. To use it, just start (on any of your nodes) the jpetazzo/supergrok image. The image will output further instructions:
docker run --name supergrok -d jpetazzo/supergrok
docker logs --follow supergrok
The logs of the container will give you a tunnel address and explain how to connect to the exposed services. That’s all you need to do!
You can also view this excellent video by Docker Brussels Meetup organizer Nils de Moor, who walks you through the steps to build a Docker Swarm cluster in a matter of seconds through the new play-with-docker tool.

 
Note that the instances provided by Play-With-Docker have a short lifespan (a few hours only), so if you want to do the workshop over multiple sessions, you will have to start over each time… or create your own cluster with the option below.

Using Docker Machine to create your own cluster

This method requires a bit more work to get started, but you get a permanent cluster, with less limitations.
You will need Docker Machine (if you have Docker for Mac, Docker for Windows, or the Docker Toolbox, you’re all set already). You will also need:

credentials for a cloud provider (e.g. API keys or tokens),
or a local install of VirtualBox or VMware (or anything supported by Docker Machine).

Full instructions are in the prepare-machine subdirectory.
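As a rough sketch of what that looks like with the VirtualBox driver (the node names here are illustrative):

# Create two VMs with the Docker Engine pre-installed
docker-machine create -d virtualbox node1
docker-machine create -d virtualbox node2

# Point your client at node1 and make it a swarm manager
eval $(docker-machine env node1)
docker swarm init --advertise-addr $(docker-machine ip node1)

# Then run the "docker swarm join ..." command that init prints on node2
eval $(docker-machine env node2)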
Once you have decided which option to use to create your swarm cluster, you are ready to get started with one of the operations courses below:
Operations – Beginner
The beginner part of the Ops tutorial will teach you how to set up a swarm, how to use it to host your own registry, how to build your app container images and how to deploy and scale a distributed application called Dockercoins.
Operations – Intermediate
From global container scheduling and overlay network troubleshooting to dealing with stateful services and node management, this tutorial will show you how to operate your swarm cluster at scale and take you on a swarm mode deep dive.


Source: https://blog.docker.com/feed/