IDC stacks up top object storage vendors

If you've been thinking about object storage for just backup and archive, you've missed a turn. In the digital transformation journeys I've seen in enterprises, managing unstructured content is key.
The latest "MarketScape: Worldwide Object-Based Storage 2016 Vendor Assessment" from IDC reminds us that:
Digital assets are the new IP, and many businesses are actively trying to create new sources of revenue through them. For example, media streaming, the Internet of Things (IoT) and Web 2.0 are some of the ways businesses are generating revenue in today's digitized world. IT buyers are looking for newer storage technologies that are built not just for unprecedented scale while reducing complexities and costs, but also to support traditional (current-generation) and next-generation workloads.

Businesses need to not just be able to store and access data, but also to do something with that data to create value. The type and volume of stored data is rapidly changing, and businesses must look at storage approaches that support today’s storage needs and offer the flexibility needed for future requirements.
In its assessment, IDC placed IBM and IBM Cloud Object Storage (featuring technology from the acquisition of Cleversafe in 2015) in the “leader” category.
As a vendor, I personally could not be happier or prouder.
Object storage solutions provide the scale and resiliency necessary to efficiently support unstructured content (audio, video, images, scans, documents and so forth) that is ever-growing in size and volume. Yet not all object storage solutions are the same. One key consideration is the platform the vendor employs and the flexibility the vendor offers when it comes to deployment options.
Business processes are increasingly hybrid. There will be processes and applications that must run inside your data center, managed by your staff and on your servers. Others can run in the public cloud and even be optimized for pure public cloud deployment, while still other elements might be a mix of the two.
If you look at the vendors in the leader category, IBM Cloud Object Storage is the solution that provides proven deployment dexterity: on premises, in the public cloud and in any mix of the two. The public cloud we run on is designed from the ground up with the enterprise in mind. With over 50 IBM Cloud data centers around the world, support for open and industry standards, and the innovation that IBM Watson and IBM Bluemix enable, IBM Cloud Object Storage stands out from the pack. That's not to say the other leaders aren't worth considering; IDC makes that clear.
With data slated to hit 44 zettabytes by 2020, and 80 percent of that unstructured, according to IDC’s object storage forecast for 2016 to 2020, getting ahead of this dynamic is imperative. Doing it with a leader in object storage just makes business sense.
Try it for yourself. Provision your free tier of object storage on IBM Bluemix, learn more about the overall IBM Cloud Object Storage family and read the full IDC report on IBM.
Read the press release.
Learn more about IBM Cloud Object Storage.
Source: Thoughts on Cloud

Convert ASP.NET Web Servers to Docker with Image2Docker

A major update to Image2Docker was released last week, which adds ASP.NET support to the tool. Now you can take a virtualized web server in Hyper-V and extract a Docker image for each website in the VM, including ASP.NET WebForms, MVC and WebApi apps.

Image2Docker is a PowerShell module which extracts applications from a Windows Virtual Machine image into a Dockerfile. You can use it as a first pass to take workloads from existing servers and move them to Docker containers on Windows.
The tool was first released in September 2016, and we've had some great work on it from PowerShell gurus like Docker Captain Trevor Sullivan and Microsoft MVP Ryan Yates. The latest version has enhanced functionality for inspecting IIS – you can now extract ASP.NET websites straight into Dockerfiles.
In Brief
If you have a Virtual Machine disk image (VHD, VHDX or WIM), you can extract all the IIS websites from it by installing Image2Docker and running ConvertTo-Dockerfile like this:
Install-Module Image2Docker
Import-Module Image2Docker
ConvertTo-Dockerfile -ImagePath C:\win-2016-iis.vhd -Artifact IIS -OutputPath C:\i2d2\iis
That will produce a Dockerfile which you can build into a Windows container image, using docker build.
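A minimal sketch of that build step (the image tag here is my own choice, not something the tool generates):

# run from the folder containing the generated Dockerfile
cd C:\i2d2\iis
docker build -t i2d2/iis-sites .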
How It Works
The Image2Docker tool (also called "I2D2") works offline; you don't need a running VM to connect to. It inspects a Virtual Machine disk image in Hyper-V VHD or VHDX format, or Windows Imaging (WIM) format. It looks at the disk for known artifacts, compiles a list of all the artifacts installed on the VM and generates a Dockerfile to package the artifacts.
The Dockerfile uses the microsoft/windowsservercore base image and installs all the artifacts the tool found on the VM disk. The artifacts which Image2Docker scans for are:

IIS & ASP.NET apps
MSMQ
DNS
DHCP
Apache
SQL Server

Some artifacts are more feature-complete than others. Right now (as of version 1.7.1) the IIS artifact is the most complete, so you can use Image2Docker to extract Docker images from your Hyper-V web servers.
Installation
I2D2 is on the PowerShell Gallery, so to use the latest stable version just install and import the module:
Install-Module Image2Docker
Import-Module Image2Docker
If you don't have the prerequisites to install from the gallery, PowerShell will prompt you to install them.
Alternatively, if you want to use the latest source code (and hopefully contribute to the project), then you need to install the dependencies:
Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201
Install-Module -Name Pester,PSScriptAnalyzer,PowerShellGet
Then you can clone the repo and import the module from local source:
mkdir docker
cd docker
git clone https://github.com/sixeyed/communitytools-image2docker-win.git
cd communitytools-image2docker-win
Import-Module .\Image2Docker.psm1
Running Image2Docker
The module contains one cmdlet that does the extraction: ConvertTo-Dockerfile. The help text gives you all the details about the parameters, but here are the main ones:

ImagePath – path to the VHD | VHDX | WIM file to use as the source
Artifact – specify one artifact to inspect, otherwise all known artifacts are used
ArtifactParam – supply a parameter to the artifact inspector, e.g. for IIS you can specify a single website
OutputPath – location to store the generated Dockerfile and associated artifacts
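For example, omitting the Artifact parameter scans a disk for every known artifact (a sketch; the paths here are hypothetical):

ConvertTo-Dockerfile -ImagePath C:\vms\legacy-app.vhdx -OutputPath C:\i2d2\legacy-app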

You can also run in Verbose mode to have Image2Docker tell you what it finds, and how it's building the Dockerfile.
Walkthrough – Extracting All IIS Websites
This is a Windows Server 2016 VM with five websites configured in IIS, all using different ports:

Image2Docker also supports Windows Server 2012, with support for 2008 and 2003 on its way. The websites on this VM are a mixture of technologies – ASP.NET WebForms, ASP.NET MVC and ASP.NET WebApi – together with a static HTML website.
I took a copy of the VHD, and ran Image2Docker to generate a Dockerfile for all the IIS websites:
ConvertTo-Dockerfile -ImagePath C:\i2d2\win-2016-iis.vhd -Artifact IIS -Verbose -OutputPath C:\i2d2\iis
In verbose mode there's a whole lot of output, but here are some of the key lines – where Image2Docker has found IIS and ASP.NET, and is extracting website details:
VERBOSE: IIS service is present on the system
VERBOSE: ASP.NET is present on the system
VERBOSE: Finished discovering IIS artifact
VERBOSE: Generating Dockerfile based on discovered artifacts in
C:\Users\elton\AppData\Local\Temp\865115-6dbb-40e8-b88a-c0142922d954-mount
VERBOSE: Generating result for IIS component
VERBOSE: Copying IIS configuration files
VERBOSE: Writing instruction to install IIS
VERBOSE: Writing instruction to install ASP.NET
VERBOSE: Copying website files from
C:\Users\elton\AppData\Local\Temp\865115-6dbb-40e8-b88a-c0142922d954-mount\websites\aspnet-mvc to
C:\i2d2\iis
VERBOSE: Writing instruction to copy files for aspnet-mvc site
VERBOSE: Writing instruction to create site aspnet-mvc
VERBOSE: Writing instruction to expose port for site aspnet-mvc
When it completes, the cmdlet generates a Dockerfile which turns that web server into a Docker image. The Dockerfile has instructions to install IIS and ASP.NET, copy in the website content, and create the sites in IIS.
Here's a snippet of the Dockerfile – if you're not familiar with Dockerfile syntax but you know some PowerShell, then it should be pretty clear what's happening:
# Install Windows features for IIS
RUN Add-WindowsFeature Web-server, NET-Framework-45-ASPNET, Web-Asp-Net45
RUN Enable-WindowsOptionalFeature -Online -FeatureName IIS-ApplicationDevelopment,IIS-ASPNET45,IIS-BasicAuthentication…

# Set up website: aspnet-mvc
COPY aspnet-mvc /websites/aspnet-mvc
RUN New-Website -Name 'aspnet-mvc' -PhysicalPath "C:\websites\aspnet-mvc" -Port 8081 -Force
EXPOSE 8081
# Set up website: aspnet-webapi
COPY aspnet-webapi /websites/aspnet-webapi
RUN New-Website -Name 'aspnet-webapi' -PhysicalPath "C:\websites\aspnet-webapi" -Port 8082 -Force
EXPOSE 8082
You can build that Dockerfile into a Docker image, run a container from the image and you'll have all five websites running in a Docker container on Windows. But that's not the best use of Docker.
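If you do want to try the all-in-one image first, the steps are the standard Docker ones (the tag is my own):

# build from the generated Dockerfile, then publish all exposed ports
cd C:\i2d2\iis
docker build -t i2d2/all-sites .
docker run -d -P i2d2/all-sites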
When you run applications in containers, each container should have a single responsibility – that makes it easier to deploy, manage, scale and upgrade your applications independently. Image2Docker supports that approach too.
Walkthrough – Extracting a Single IIS Website
The IIS artifact in Image2Docker uses the ArtifactParam flag to specify a single IIS website to extract into a Dockerfile. That gives us a much better way to extract a workload from a VM into a Docker image:
ConvertTo-Dockerfile -ImagePath C:\i2d2\win-2016-iis.vhd -Artifact IIS -ArtifactParam aspnet-webforms -Verbose -OutputPath C:\i2d2\aspnet-webforms
That produces a much neater Dockerfile, with instructions to set up a single website:
# escape=`
FROM microsoft/windowsservercore
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';"]

# Wait-Service is a tool from Microsoft for monitoring a Windows Service
ADD https://raw.githubusercontent.com/Microsoft/Virtualization-Documentation/live/windows-server-container-tools/Wait-Service/Wait-Service.ps1 /

# Install Windows features for IIS
RUN Add-WindowsFeature Web-server, NET-Framework-45-ASPNET, Web-Asp-Net45
RUN Enable-WindowsOptionalFeature -Online -FeatureName IIS-ApplicationDevelopment,IIS-ASPNET45,IIS-BasicAuthentication,IIS-CommonHttpFeatures,IIS-DefaultDocument,IIS-DirectoryBrowsing

# Set up website: aspnet-webforms
COPY aspnet-webforms /websites/aspnet-webforms
RUN New-Website -Name 'aspnet-webforms' -PhysicalPath "C:\websites\aspnet-webforms" -Port 8083 -Force
EXPOSE 8083

CMD /Wait-Service.ps1 -ServiceName W3SVC -AllowServiceRestart
Note – I2D2 checks which optional IIS features are installed on the VM and includes them all in the generated Dockerfile. You can use the Dockerfile as-is to build an image, or you can review it and remove any features that were installed in the VM but aren't actually used by your site.
To build that Dockerfile into an image, run:
docker build -t i2d2/aspnet-webforms .
When the build completes, I can run a container to start my ASP.NET WebForms site. I know the site uses a non-standard port, but I don't need to hunt through the app documentation to find out which one; it's right there in the Dockerfile: EXPOSE 8083.
This command runs a container in the background, exposes the app port, and stores the ID of the container:
$id = docker run -d -p 8083:8083 i2d2/aspnet-webforms
When the site starts, you'll see in the container logs that the IIS Service (W3SVC) is running:
> docker logs $id
The Service 'W3SVC' is in the 'Running' state.
Now you can browse to the site running in IIS in the container. But because published ports on Windows containers don't do loopback yet, if you're on the machine running the Docker container, you need to use the container's IP address:
$ip = docker inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' $id
start "http://$($ip):8083"
That will launch your browser and you'll see your ASP.NET Web Forms application running in IIS, in Windows Server Core, in a Docker container:

Converting Each Website to Docker
You can extract all the websites from a VM into their own Dockerfiles and build images for them all, by following the same process – or scripting it – using the website name as the ArtifactParam:
$websites = @("aspnet-mvc", "aspnet-webapi", "aspnet-webforms", "static")
foreach ($website in $websites) {
    ConvertTo-Dockerfile -ImagePath C:\i2d2\win-2016-iis.vhd -Artifact IIS -ArtifactParam $website -Verbose -OutputPath "C:\i2d2\$website" -Force
    cd "C:\i2d2\$website"
    docker build -t "i2d2/$website" .
}
Note: the Force parameter tells Image2Docker to overwrite the contents of the output path if the directory already exists.
If you run that script, you'll see from the second image onwards the docker build commands run much more quickly. That's because of how Docker images are built from layers. Each Dockerfile starts with the same instructions to install IIS and ASP.NET, so once those instructions are built into image layers, the layers get cached and reused.
When the builds finish, I have four i2d2 Docker images:
> docker images
REPOSITORY                                    TAG                 IMAGE ID            CREATED              SIZE
i2d2/static                                   latest              cd014b51da19        7 seconds ago        9.93 GB
i2d2/aspnet-webapi                            latest              1215366cc47d        About a minute ago   9.94 GB
i2d2/aspnet-mvc                               latest              0f886c27c93d        3 minutes ago        9.94 GB
i2d2/aspnet-webforms                          latest              bd691e57a537        47 minutes ago       9.94 GB
microsoft/windowsservercore                   latest              f49a4ea104f1        5 weeks ago          9.2 GB
Each of my images has a size of about 10GB, but that's the virtual image size, which doesn't account for cached layers. The microsoft/windowsservercore image is 9.2GB, and the i2d2 images all share the layers which install IIS and ASP.NET (which you can see by checking the image with docker history).
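You can check that sharing yourself; the early layers of each image should show identical layer IDs (a quick check using two of the images built above):

docker history i2d2/aspnet-mvc
docker history i2d2/aspnet-webapi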
The physical storage for all five images (four websites and the Windows base image) is actually around 10.5GB. The original VM was 14GB. If you split each website into its own VM, you&8217;d be looking at over 50GB of storage, with disk files which take a long time to ship.
The Benefits of Dockerized IIS Applications
With our Dockerized websites we get increased isolation with a much lower storage cost. But that's not the main attraction – what we have here are a set of deployable packages that each encapsulate a single workload.
You can run a container on a Docker host from one of those images, and the website will start up and be ready to serve requests in seconds. You could have a Docker Swarm with several Windows hosts, and create a service from a website image which you can scale up or down across many nodes in seconds.
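For example, on a swarm manager something like the following would run the WebForms site as a replicated service (a sketch assuming Docker 1.12+ swarm mode and the image built earlier; the service name is my own):

# create the service with three replicas, then scale it out
docker service create --name webforms --replicas 3 --publish 8083:8083 i2d2/aspnet-webforms
docker service scale webforms=5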
And you have different web applications which all have the same shape, so you can manage them in the same way. You can build new versions of the apps into images which you can store in a Docker registry, so you can run an instance of any version of any app. And when Docker Datacenter comes to Windows, you'll be able to secure the management of those web applications and any other Dockerized apps with role-based access control and content trust.
Next Steps
Image2Docker is a new tool with a lot of potential. So far the work has been focused on IIS and ASP.NET, and the current version does a good job of extracting websites from VM disks to Docker images. For many deployments, I2D2 will give you a working Dockerfile that you can use to build an image and start working with Docker on Windows straight away.
We'd love to get your feedback on the tool – submit an issue on GitHub if you find a problem, or if you have ideas for enhancements. And of course it's open source, so you can contribute too.
Additional Resources

Image2Docker: A New Tool For Prototyping Windows VM Conversions
Containerize Windows Workloads With Image2Docker
Run IIS + ASP.NET on Windows 10 with Docker
Awesome Docker – Where to Start on Windows


Source: https://blog.docker.com/feed/

Linux and Windows, living together, total chaos! (OK, Kubernetes 1.5)


There's Linux, and there's Windows. Windows apps don't run on Linux. Linux apps don't run on Windows. We're told that. A lot. In fact, when Docker brought containers into prominence as a way to pack up your application's dependencies and ship it "anywhere", the definition of "anywhere" was quick to include "Linux". Sure, there were Windows containers, but getting everything to work together was not particularly practical.
With today's release of Kubernetes 1.5, that all changes.
Kubernetes 1.5 includes alpha support for both Windows Server Containers, a shared-kernel model similar to Linux containers, and Hyper-V Containers, which give each container its own kernel for better isolation in multi-tenant environments (at the cost of greater latency). The end result is the ability to create a single Kubernetes cluster that includes not just Linux nodes running Linux containers or Windows nodes running Windows containers, but both side by side, for a truly hybrid experience. For example, a single service can have pods using Windows Server Containers and other pods using Linux containers.
Though it appears fully functional, there are some limitations in this early release, including:

The Kubernetes master must still run on Linux due to dependencies in how it's written. It's possible to port it to Windows, but for the moment the team feels it's better to focus their efforts on the client components.
There is no native support for network overlays for containers in Windows, so networking is limited to L3. (There are other solutions, but they're not natively available.) The Kubernetes Windows SIG is working with Microsoft to solve these problems, however, and they hope to have made progress by Kubernetes 1.6's release early next year.
Networking between Windows containers is more complicated because each container gets its own network namespace, so it&8217;s recommended that you use single-container pods for now.
Applications running in Windows Server Containers can run in any language supported by Windows. You can run .NET applications in Linux containers, but only if they're written in .NET Core (see the sketch below). .NET Core is also supported by the Nano Server operating system, which can be deployed on Windows Server Containers.
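To illustrate that last point, the quickest way to see .NET Core running on Linux is Microsoft's official image (a sketch; microsoft/dotnet was the image name published on Docker Hub at the time):

# run a throwaway Linux container and print the .NET Core version
docker run --rm microsoft/dotnet dotnet --version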

This release also includes support for IIS (which still runs 11.4% of the internet) and ASP.NET.
The development effort, which was led by Apprenda, was aimed at providing enterprises the means for making use of their existing Windows investments while still getting the advantages of Kubernetes. “Our strategy is to give our customers an enterprise hardened, broad Kubernetes solution. That isn’t possible without Windows support. We promised that we would drive support for Kubernetes on Windows Server 2016 in March and now we have reached the first milestone with the 1.5 release.” said Sinclair Schuller, CEO of Apprenda. “We will deliver full parity to Linux in orchestrating Windows Server Containers and Hyper-v containers so that organizations get a single control plane for their distributed apps.”
You can see a demo of Apprenda's Senior Director of Products, Michael Michael, explaining the functionality here: <iframe width="560" height="315" src="https://www.youtube.com/embed/Tbrckccvxwg" frameborder="0" allowfullscreen></iframe>
Other features in Kubernetes 1.5
Kubernetes 1.5 also includes beta support for StatefulSets (formerly known as PetSets). Most of the objects that Kubernetes manages, such as ReplicaSets and Pods, are meant to be stateless, and thus "disposable" if they go down or become otherwise unreachable. In some situations, however, such as databases, cluster software (such as RabbitMQ clusters), or other traditionally stateful objects, this might not be feasible. StatefulSets provide a means for more concretely identifying resources so that connections can be maintained.
Kubernetes 1.5 also includes early work on making it possible for Kubernetes to deploy OCI-compliant containers.
Source: Mirantis

How do I create a new Docker image for my application?

In our previous series, we looked at how to deploy Kubernetes and create a cluster. We also looked at how to deploy an application on the cluster and configure OpenStack instances so you can access it. Now we're going to get deeper into Kubernetes development by looking at creating new Docker images so you can deploy your own applications and make them available to other people.
How Docker images work
The first thing that we need to understand is how Docker images themselves work.
The key to a Docker image is that it's a layered file system. In other words, if you start out with an image that's just the operating system (say Ubuntu) and then add an application (say Nginx), you'll wind up with something like this:

As you can see, the difference between IMAGE1 and IMAGE2 is just the application itself, and then IMAGE4 has the changes made on layers 3 and 4. So in order to create an image, you are basically starting with a base image and defining the changes to it.
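You can see this layering on any image you have locally; each line of docker history output is one layer (shown here against the base image we'll use below, once you've pulled it):

# docker history webdevops/php-nginx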
Now, I hear you asking, "But what if I want to start from scratch?" Well, let's define "from scratch" for a minute. Chances are you mean you want to start with a clean operating system and go from there. Well, in most cases there's a base image for that, so you're still starting with a base image. (If not, you can check out the instructions for creating a Docker base image.)
In general, there are two ways to create a new Docker image:

Create an image from an existing container: In this case, you start with an existing image, customize it with the changes you want, then build a new image from it.
Use a Dockerfile: In this case, you use a file of instructions – the Dockerfile – to specify the base image and the changes you want to make to it.

In this article, we're going to look at both of those methods. Let's start with creating a new image from an existing container.
Create from an existing container
In this example, we're going to start with an image that includes the nginx web application server and PHP. To that, we're going to add support for reading RSS files using an open source package called SimplePie. We'll then make a new image out of the altered container.
Create the original container
The first thing we need to do is instantiate the original base image.

The very first step is to make sure that your system has Docker installed. If you followed our earlier series on running Kubernetes on OpenStack, you've already got this handled. If not, you can follow the instructions here to deploy Docker.
Next you'll need to get the base image. In the case of this tutorial, that's webdevops/php-nginx, which is part of the Docker Hub, so in order to "pull" it you'll need to have a Docker Hub ID. If you don't have one already, go to https://hub.docker.com and create a free account.
Go to the command line where you have Docker installed and log in to the Docker hub:
# docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: nickchase
Password:
Login Succeeded

We're going to start with the base image. Instantiate webdevops/php-nginx:
# docker run -dP webdevops/php-nginx
The -dP flag makes sure that the container runs in the background, and that the ports on which it listens are made available.
Make sure the container is running:
# docker ps
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                                                                    NAMES
1311034ca7dc        webdevops/php-nginx   "/opt/docker/bin/entr"   35 seconds ago      Up 34 seconds       0.0.0.0:32822->80/tcp, 0.0.0.0:32821->443/tcp, 0.0.0.0:32820->9000/tcp   small_bassi

A couple of notes here. First off, because we didn't specify a particular name for the container, Docker assigned one. In this example, it's small_bassi. Second, notice that there are three ports open: 80, 443 and 9000, and that they've been mapped to other ports (in this case 32822, 32821 and 32820, respectively; on your machine these ports will be different). This makes it possible for multiple containers to be "listening" on the same port on the same machine. So if we were to try to access a web page hosted by this container, we'd do it by accessing:

http://localhost:32822
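Rather than reading the mapping out of the docker ps output, you can also ask Docker directly. docker port is a standard CLI command; small_bassi is the container name Docker assigned above:

# docker port small_bassi 80
0.0.0.0:32822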

So far, though, there aren't any pages to access; let's fix that.
Create a file on the container
In order for us to test this container, we need to create a sample PHP file. We'll do that by logging into the container and creating a file.

Login to the container
# docker exec -it small_bassi /bin/bash
root@1311034ca7dc:/#
Using exec with the -it switch creates an interactive session for you to execute commands directly within the container. In this case, we're executing /bin/bash, so we can do whatever else we need.
The document root for the nginx server in this container is at /app, so go ahead and create the /app/index.php file:
vi /app/index.php

Add a simple PHP routine to the file and save it:
<?php
for ($i = 0; $i < 10; $i++){
    echo "Item number ".$i."\n";
}
?>

Now exit the container to go back to the main command line:
root@1311034ca7dc:/# exit

Now let&8217;s test the page.  To do that, execute a simple curl command:
# curl http://localhost:32822/index.php
Item number 0
Item number 1
Item number 2
Item number 3
Item number 4
Item number 5
Item number 6
Item number 7
Item number 8
Item number 9

Now that we know PHP is working, it's time to go ahead and add RSS.
Make changes to the container
Now that we know PHP is working, we can go ahead and add RSS support using the SimplePie package. To do that, we'll simply download it to the container and install it.

The first step is to log back into the container:
# docker exec -it small_bassi /bin/bash
root@1311034ca7dc:/#

Next go ahead and use curl to download the package, saving it as a zip file:
root@1311034ca7dc:/# curl https://codeload.github.com/simplepie/simplepie/zip/1.4.3 > simplepie1.4.3.zip

Now you need to install it.  To do that, unzip the package, create the appropriate directories, and copy the necessary files into them:
root@1311034ca7dc:/# unzip simplepie1.4.3.zip
root@1311034ca7dc:/# mkdir /app/php
root@1311034ca7dc:/# mkdir /app/cache
root@1311034ca7dc:/# mkdir /app/php/library
root@1311034ca7dc:/# cp -r s*/library/* /app/php/library/.
root@1311034ca7dc:/# cp s*/autoloader.php /app/php/.
root@1311034ca7dc:/# chmod 777 /app/cache

Now we just need a test page to make sure that it's working. Create a new file in the /app directory:
root@1311034ca7dc:/# vi /app/rss.php

Now add the sample file. (This file is excerpted from the SimplePie website, but I've cut it down for brevity's sake, since it's not really the focus of what we're doing. Please see the original version for comments, etc.)
<?php
require_once('php/autoloader.php');
$feed = new SimplePie();
$feed->set_feed_url("http://rss.cnn.com/rss/edition.rss");
$feed->init();
$feed->handle_content_type();
?>
<html>
<head><title>Sample SimplePie Page</title></head>
<body>
<div class="header">
<h1><a href="<?php echo $feed->get_permalink(); ?>"><?php echo $feed->get_title(); ?></a></h1>
<p><?php echo $feed->get_description(); ?></p>
</div>
<?php foreach ($feed->get_items() as $item): ?>
<div class="item">
<h2><a href="<?php echo $item->get_permalink(); ?>"><?php echo $item->get_title(); ?></a></h2>
<p><?php echo $item->get_description(); ?></p>
<p><small>Posted on <?php echo $item->get_date('j F Y | g:i a'); ?></small></p>
</div>
<?php endforeach; ?>
</body>
</html>

Exit the container:
root@1311034ca7dc:/# exit

Now let's make sure it's working. Remember, we need to access the container on the alternate port (check docker ps to see what ports you need to use):
# curl http://localhost:32822/rss.php
<html>
<head><title>Sample SimplePie Page</title></head>
<body>
       <div class="header">
               <h1><a href="http://www.cnn.com/intl_index.html">CNN.com – RSS Channel – Intl Homepage – News</a></h1>
               <p>CNN.com delivers up-to-the-minute news and information on the latest top stories, weather, entertainment, politics and more.</p>
       </div>

Now that we have a working container, we can turn it into a new image.
Create the new image
Now that we have a working container, we want to turn it into an image and push it to the Docker Hub so we can use it. The name you'll use for your container typically will have three parts:
[username]/[imagename]:[tags]
For example, my Docker Hub username is nickchase, so I am going to name version 1 of my new RSS-ified container
nickchase/rss-php-nginx:v1

Now, if when we first started talking about differences between layers you started to think about version control systems, you're right. The first step in creating a new image is to commit the changes that we've already made, adding a message about the changes and specifying the author, as in:
docker commit -m "Message" -a "Author Name" [containername] [imagename]
So in my case, that will be:
# docker commit -m "Added RSS" -a "Nick Chase" small_bassi nickchase/rss-php-nginx:v1
sha256:148f1dbceb292b38b40ae6cb7f12f096acf95d85bb3ead40e07d6b1621ad529e

Next we want to go ahead and push the new image to the Docker Hub so we can use it:
# docker push nickchase/rss-php-nginx:v1
The push refers to a repository [docker.io/nickchase/rss-php-nginx]
69671563c949: Pushed
3e78222b8621: Pushed
5b33e5939134: Pushed
54798bfbf935: Pushed
b8c21f8faea9: Pushed

v1: digest: sha256:48da56a77fe4ecff4917121365d8e0ce615ebbdfe31f48a996255f5592894e2b size: 3667

Now if you list the images that are available, you should see it in the list:
# docker images
REPOSITORY                TAG                 IMAGE ID            CREATED             SIZE
nickchase/rss-php-nginx   v1                  148f1dbceb29        11 minutes ago      677 MB
nginx                     latest              abf312888d13        3 days ago          181.5 MB
webdevops/php-nginx       latest              93037e4c8998        3 days ago          675.4 MB
ubuntu                    latest              e4415b714b62        2 weeks ago         128.1 MB
hello-world               latest              c54a2cc56cbb        5 months ago        1.848 kB

Now let's go ahead and test it. We'll start by stopping and removing the original container, so we can remove the local copy of the image:
# docker stop small_bassi
# docker rm small_bassi

Now we can remove the image itself:
# docker rmi nickchase/rss-php-nginx:v1
Untagged: nickchase/rss-php-nginx:v1
Untagged: nickchase/rss-php-nginx@sha256:0a33c7a25a6d2db4b82517b039e9e21a77e5e2262206fdcac8b96f5afa64d96c
Deleted: sha256:208c4fc237bb6b2d3ef8fa16a78e105d80d00d75fe0792e1dcc77aa0835455e3
Deleted: sha256:d7de4d9c00136e2852c65e228944a3dea3712a4e7bcb477eb7393cd309be179b

If you run docker images again, you'll see that it's gone:
# docker images
REPOSITORY                TAG                 IMAGE ID            CREATED             SIZE
nginx                     latest              abf312888d13        3 days ago          181.5 MB
webdevops/php-nginx       latest              93037e4c8998        3 days ago          675.4 MB
ubuntu                    latest              e4415b714b62        2 weeks ago         128.1 MB
hello-world               latest              c54a2cc56cbb        5 months ago        1.848 kB

Now if you create a new container based on this image, you will see it get downloaded from the Docker Hub:
# docker run -dP nickchase/rss-php-nginx:v1

Finally, test the new container by getting the new port…
# docker ps
CONTAINER ID        IMAGE                        COMMAND                  CREATED             STATUS              PORTS                                                                    NAMES
13a423324d80        nickchase/rss-php-nginx:v1   "/opt/docker/bin/entr"   6 seconds ago       Up 5 seconds        0.0.0.0:32825->80/tcp, 0.0.0.0:32824->443/tcp, 0.0.0.0:32823->9000/tcp   goofy_brahmagupta

… and accessing the rss.php file.
curl http://localhost:32825/rss.php

You should see the same output as before.
Use a Dockerfile
Manually creating a new image from an existing container gives you a lot of control, but it does have one downside. If the base container gets updated, you're not necessarily going to have the benefits of those changes.
For example, suppose I wanted a container that always takes the latest version of the Ubuntu operating system and builds on that? The previous method doesn&8217;t give us that advantage.
Instead, we can use a method called the Dockerfile, which enables us to specify a particular version of a base image, or specify that we want to always use the latest version.  
For example, let's say we want to create a version of the rss-php-nginx container that starts with v1 but serves on port 88 (rather than the traditional 80). To do that, we basically want to perform three steps:

Start with the desired version of the base container.
Tell Nginx to listen on port 88 rather than 80.
Let Docker know that the container listens on port 88.

We'll do that by creating a local context, downloading a local copy of the configuration file, updating it, and creating a Dockerfile that includes instructions for building the new container.
Let's get that set up.

Create a working directory in which to build your new container.  What you call it is completely up to you. I called mine k8stutorial.
From the command line, in the local context, start by instantiating the image so we have something to work from:
# docker run -dP nickchase/rss-php-nginx:v1

Now get a copy of the existing vhost.conf file. In this particular container, you can find it at /opt/docker/etc/nginx/vhost.conf.  
# docker cp amazing_minsky:/opt/docker/etc/nginx/vhost.conf .
Note that I've started a new container named amazing_minsky to replace small_bassi. At this point you should have a copy of vhost.conf in your local directory; in my case, it would be ~/k8stutorial/vhost.conf.
You now have a local copy of the vhost.conf file.  Using a text editor, open the file and specify that nginx should be listening on port 88 rather than port 80:
server {
   listen   88 default_server;
   listen 8000 default_server;
   server_name  _ *.vm docker;

Next we want to go ahead and create the Dockerfile.  You can do this in any text editor.  The file, which should be called Dockerfile, should start by specifying the base image:
FROM nickchase/rss-php-nginx:v1

Any container that is instantiated from this image is going to be listening on port 80, so we want to go ahead and overwrite that Nginx config file with the one we've edited:
FROM nickchase/rss-php-nginx:v1
COPY vhost.conf /opt/docker/etc/nginx/vhost.conf

Finally, we need to tell Docker that the container listens on port 88:
FROM nickchase/rss-php-nginx:v1
COPY vhost.conf /opt/docker/etc/nginx/vhost.conf
EXPOSE 88

Now we need to build the actual image. To do that, we'll use the docker build command:
# docker build -t nickchase/rss-php-nginx:v2 .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM nickchase/rss-php-nginx:v1
---> 208c4fc237bb
Step 2 : EXPOSE 88
---> Running in 23408def6214
---> 93a43c3df834
Removing intermediate container 23408def6214
Successfully built 93a43c3df834
Notice that we've specified the image name, along with a new tag (you can also create a completely new image) and the directory in which to find the Dockerfile and any supporting files.
Finally, push the new image to the hub:
# docker push nickchase/rss-php-nginx:v2

Test out your new image by instantiating it and pulling up the test page.
# docker run -dP nickchase/rss-php-nginx:v2
root@kubeclient:/home/ubuntu/tutorial# docker ps
CONTAINER ID        IMAGE                        COMMAND                  CREATED             STATUS              PORTS                                                                                           NAMES
04f4b384e8e2        nickchase/rss-php-nginx:v2   "/opt/docker/bin/entr"   8 seconds ago       Up 7 seconds        0.0.0.0:32829->80/tcp, 0.0.0.0:32828->88/tcp, 0.0.0.0:32827->443/tcp, 0.0.0.0:32826->9000/tcp   goofy_brahmagupta
13a423324d80        nickchase/rss-php-nginx:v1   "/opt/docker/bin/entr"   12 minutes ago      Up 12 minutes       0.0.0.0:32825->80/tcp, 0.0.0.0:32824->443/tcp, 0.0.0.0:32823->9000/tcp                          amazing_minsky

Notice that you now have a mapped port for port 88 that you can call:
curl http://localhost:32828/rss.php
Other things you can do with Dockerfile
Docker defines a whole list of things you can do with a Dockerfile, such as:

.dockerignore
FROM
MAINTAINER
RUN
CMD
EXPOSE
ENV
COPY
ENTRYPOINT
VOLUME
USER
WORKDIR
ARG
ONBUILD
STOPSIGNAL
LABEL

As you can see, there's quite a bit of flexibility here. You can see the documentation for more information, and wsargent has published a good Dockerfile cheat sheet.
Moving forward
As you can see, creating new Docker images that can be used by you or by other developers is pretty straightforward.  You have the option to manually create and commit changes, or to script them using a Dockerfile.
In our next tutorial, we'll look at using YAML to manage these containers with Kubernetes.
Source: Mirantis

Docker acquires Infinit: a new data layer for distributed applications

The short version: Docker has acquired a fantastic company called Infinit. Using their technology, we will provide secure distributed storage out of the box, making it much easier to deploy stateful services and legacy enterprise applications on Docker. This will be delivered in a very open and modular design, so operators can easily integrate their existing storage systems, tune advanced settings, or simply disable the feature altogether. Oh, and we're going to open-source the whole thing.
The slightly longer version:
At Docker we believe that tools should adapt to the people using them, not the other way around. So we spend a lot of time searching for the most exciting and powerful software technology out there, then integrating it into simple and powerful tools. That is how we discovered a small team of distributed systems engineers based out of Paris, who were working on a next-generation distributed filesystem called Infinit. From the very first demo two things were immediately clear. First, Infinit is an incredible piece of technology with the potential to change how applications consume and produce data; Second, the Infinit and Docker teams were almost comically similar: same obsession with decentralized systems; same empathy for the needs of both developers and operators; same taste for simple and modular designs.
Today we are pleased to announce that Infinit is joining the Docker family. We will use the Infinit technology to address one of the most frequent Docker feature requests: distributed storage that “just works” out of the box, and can integrate existing storage system.
Docker users have been driving us in this direction for two reasons. The first is that application portability across any infrastructure has been a central driver for Docker usage. As developers rapidly evolve from single container applications to multi-container applications deployed on a distributed system, they want to make sure their entire application is portable across any type of infrastructure, whether on cloud or on premise, including for the stateful services it may include. Infinit will address that by providing a portable distributed storage engine, in the same way that our SocketPlane acquisition provided a portable distributed overlay networking implementation for Docker.
The second driver has been the rapid adoption of Docker to containerize stateful enterprise applications, as opposed to next-generation stateless apps. Enterprises expect their container platform to have a point of view about persistent storage, but at the same time they want the flexibility of working with their existing vendors like HPE, EMC, Nutanix etc. Infinit addresses this need as well.
With all of our acquisitions, whether it was Conductant, which enabled us to scale powerful large-scale web operations stacks or SocketPlane, we’ve focused on extending our core capabilities and providing users with modular building blocks to work with and expand. Docker is committed to open sourcing Infinit’s solution in 2017 and add it to the ever-expanding list of infrastructure plumbing projects that Docker has made available to the community, such as  InfraKit, SwarmKit and Notary.  
For those who are interested in learning more about the technology, you can watch Infinit CTO Quentin Hocquet’s presentation at Docker Distributed Systems Summit last month, and we have scheduled an online meetup where the Infinit founders will walk through the architecture and do a demo of their solution. A key aspect of the Infinit architecture is that it is completely decentralized. At Docker we believe that decentralization is the only path to creating software systems capable of scaling at Internet scale. With the help of the Infinit team, you should expect more and more decentralized designs coming out of Docker engineering.
A few words from Infinit CEO and founder Julien Quintard:
"We are thrilled to join forces with Docker. Docker has changed the way developers work in order to gain in agility. Stateful applications are the natural next step in this evolution. This is where Infinit comes into play, providing the Docker community with a default storage platform for applications to reliably store their state, be it for a database, logs, a website's media files and more."
A few details about the Infinit architecture:

Infinit's next-generation storage platform has been designed to be scalable and resilient while being highly customizable for container environments. The Infinit storage platform has the following characteristics:
- Software-based: can be deployed on any hardware, from legacy appliances to commodity bare metal, virtual machines or even containers.
- Programmatic: developers can easily automate the creation and deployment of multiple storage infrastructures, each tailored to the overlying application's needs through policy-based capabilities.
- Scalable: by relying on a decentralized architecture (i.e. peer-to-peer), Infinit does away with the leader/follower model, hence does not suffer from bottlenecks and single points of failure.
- Self-healing: Infinit's rebalancing mechanism allows the system to adapt to various types of failures, including Byzantine.
- Multi-purpose: the Infinit platform provides interfaces for block, object and file storage: NFS, SMB, AWS S3, OpenStack Swift, iSCSI, FUSE etc.
 
Learn More

Sign up for the next Docker Online meetup on Docker and Infinit: Modern Storage Platform for Container Environments
Read about Docker and Infinit


Source: https://blog.docker.com/feed/

Global Mentor Week: Thank you Docker Community!

Danke, рақмет сізге, tak, धन्यवाद, cảm ơn bạn, شكرا, mulțumesc, Gracias, merci, asante, ευχαριστώ, thank you community for an incredible Docker Global Mentor Week! From Tokyo to Sao Paulo, Kisumu to Copenhagen and Ottawa to Manila, it was so awesome to see the energy from the community coming together to celebrate and learn about Docker!

Over 7,500 people registered to attend one of the 110 mentor week events across 5 continents! A huge thank you to all the Docker meetup organizers who worked hard to make these special events happen and offer Docker beginners and intermediate users an opportunity to participate in Docker courses.
None of this would have been possible without the support (and expertise!) of the 500+ advanced Docker users who signed up as mentors to help newcomers.
Whether it was mentors helping attendees, newcomers pushing their first image to Docker Hub or attendees mingling and having a good time, everyone came together to make mentor week a success as you can see on social media and the Facebook photo album.
Here are some of our favorite tweets from the meetups:
 

@Docker #LearnDocker at Grenoble France 17Nov2016 @HPE_FR pic.twitter.com/8RSxXUWa4k
— Stephane Bureau (@SBUCloud) November 18, 2016

Awesome turnout at tonight's @DockerNYC #learndocker event! We will be hosting more of these – keep tabs on meetup: https://t.co/dT99EOs4C9 pic.twitter.com/9lZocCjMPb
— Luisa M. Morales (@luisamariethm) November 18, 2016

And finally… "Tada" Docker Mentor Week #learndocker pic.twitter.com/6kzedIoGyB
— Károly Kass (@karolykassjr) November 17, 2016

 
Learn Docker
In case you weren’t able to attend a local event, the five courses are now available to everyone online here: https://training.docker.com/instructor-led-training
Docker for Developers Courses
Developer – Beginner Linux Containers
This tutorial will guide you through the steps involved in setting up your computer, running your first containers, deploying a web application with Docker and running a multi-container voting app with Docker Compose.
Developer – Beginner Windows Containers
This tutorial will walk you through setting up your environment, running basic containers and creating a Docker Compose multi-container application using Windows containers.
Developer – Intermediate (both Linux and Windows)
This tutorial teaches you how to network your containers, how you can manage data inside and between your containers and how to use Docker Cloud to build your image from source and use developer tools and programming languages with Docker.
Docker for Operations courses
These courses are step-by-step guides where you will build your own Docker cluster, and use it to deploy a sample application. We have two solutions for you to create your own cluster.

Using play-with-docker

Play With Docker is a Docker playground that was built by two amazing Docker captains: Marcos Nils and Jonathan Leibiusky during the Docker Distributed Systems Summit in Berlin last October.
Play with Docker (aka PWD) gives you the experience of having a free Alpine Linux Virtual Machine in the cloud where you can build and run Docker containers and even create clusters with Docker features like Swarm Mode.
Under the hood DIND or Docker-in-Docker is used to give the effect of multiple VMs/PCs.
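If you're curious, you can reproduce the same effect locally with Docker's official dind image (a sketch; the container name is my own, and --privileged is required for nested Docker):

docker run --privileged -d --name my-dind docker:dind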
To get started, go to http://play-with-docker.com/ and click on ADD NEW INSTANCE five times. You will get five "docker-in-docker" containers, all on a private network. These are your five nodes for the workshop!
When the instructions in the slides tell you to "SSH on node X", just go to the tab corresponding to that node.
The nodes are not directly reachable from outside; so when the slides tell you to "connect to the IP address of your node on port XYZ" you will have to use a different method.
We suggest using "supergrok", a container offering an NGINX+ngrok combo to expose your services. To use it, just start (on any of your nodes) the jpetazzo/supergrok image. The image will output further instructions:
docker run --name supergrok -d jpetazzo/supergrok
docker logs --follow supergrok
The logs of the container will give you a tunnel address and explain how to connect to the exposed services. That's all you need to do!
You can also view this excellent video by Docker Brussels Meetup organizer Nils de Moor who walks you through the steps to build a Docker Swarm cluster in a matter of seconds through the new play-with-docker tool.

 
Note that the instances provided by Play-With-Docker have a short lifespan (a few hours only), so if you want to do the workshop over multiple sessions, you will have to start over each time, or create your own cluster with the option below.

Using Docker Machine to create your own cluster

This method requires a bit more work to get started, but you get a permanent cluster, with less limitations.
You will need Docker Machine (if you have Docker Mac, Docker Windows, or the Docker Toolbox, you're all set already). You will also need:

credentials for a cloud provider (e.g. API keys or tokens),
or a local install of VirtualBox or VMware (or anything supported by Docker Machine).

Full instructions are in the prepare-machine subdirectory.
Once you have decided which option to use to create your swarm cluster, you're ready to get started with one of the operations courses below:
Operations – Beginner
The beginner part of the Ops tutorial will teach you how to set up a swarm, how to use it to host your own registry, how to build your app container images and how to deploy and scale a distributed application called Dockercoins.
Operations – Intermediate
From global container scheduling, overlay networks troubleshooting, dealing with stateful services and node management, this tutorial will show you how to operate your swarm cluster at scale and take you on a swarm mode deep dive.


Source: https://blog.docker.com/feed/

Why can’t all cloud providers deliver adequate security?

IT security is a top priority for most CIOs. After all, gaps in infrastructure could leave their companies and customers vulnerable to attacks.
So when evaluating a cloud managed services provider, asking the right security questions can be critical in determining if the solution is a good fit. Choosing a cloud solution that meets a company’s unique requirements can help reduce operational costs and drive innovation while enhancing security.
With this in mind, our IBM cloud security experts highlight six questions in this short webcast that, when asked, can help you decide whether a cloud service provider can meet your security requirements.
A focus on security
Source: Redefining Connections: Insights from the Global C-Suite Study – The CIO Perspective, IBM Institute of Business Value, 2016
A recent study conducted by IBM found that 76 percent of CIOs consider IT security their biggest risk. It was far and away the top response.
To avoid potential problems, a cloud managed services provider should incorporate built-in security layers at every level from the data center to the operating system, delivering a fully-configured solution with industry-leading physical security and regular vulnerability scans performed by highly-skilled specialists.
Questions to ask
When deciding whether a cloud managed services provider can meet your security requirements, start with these questions:
1. Who is responsible for security?
The answer may not be as obvious as you think.
Some cloud managed services providers might not take the full responsibility of maintaining a security-rich environment for your data. After they provide the hardware, the security and compliance responsibilities could rest with you. Also, some providers may require an agreement stipulating that your company is responsible for anything you do on your system that might affect your “neighbors” on that same cloud infrastructure.
Choose a cloud managed services provider capable of taking full responsibility for the security of the infrastructure rather than placing the onus on  your company or a third party.
Be certain that your data is managed with the same tools, standards and processes that the provider uses for its own systems.  To avoid confusion that can lead to serious issues later on, make sure this division of responsibility is clearly defined in your agreement with the provider.
2. How do I know security is adequate?
Your cloud solution should be able to help you manage regulatory compliance standards. While some providers may use certifications as a way of demonstrating security, it’s important to know what you’re looking at. Some certifications may cover only certain services or locations.
Choose a cloud managed services provider that covers the security of the entire infrastructure as well as policies and procedures. The security section of the IBM Cloud Managed Services Comparison Guide includes a list of certifications you may want to look for when evaluating cloud providers.
3. What if something goes wrong?
Quick recovery after a disaster is crucial to your business operations. Failure to properly handle outages can lead to lost revenue, productivity challenges and a damaged reputation with your customers.
Choose a managed cloud hosting solution that includes offsite disaster recovery options to help you get back online quickly.  Make sure your agreement includes production-level service level agreements (SLAs) and regular testing of emergency backup options.
To learn more about what to ask and listen for when deciding whether a cloud service provider can meet your security requirements, register for the webcast, "Six questions every CIO should ask about cloud security."
Source: Thoughts on Cloud

OpenStack Developer Mailing List Digest November 26 – December 2

Updates

Nova Resource Providers update [2]
Nova blueprints update [16]
OpenStack-Ansible deploy guide live! [6]

The Future of OpenStack Needs You [1]

Need more mentors to help run Upstream Trainings at the summits
Interested in doing an abridged version at smaller more local events
Contact ildikov or diablo_rojo on IRC if interested

New project: Nimble [3]

Interesting chat about bare metal management
The project name is likely to change
(Will this lead to some discussions about whether or not to allow some parallel experiments in the OpenStack Big Tent?)

Community goals for Pike [4]

As Ocata is a short cycle it’s time to think about goals for Pike [7]
Or give feedback on what’s already started [8]

Exposing project team's metadata in README files (Cont.) [9]

Amrith agrees with the value of Flavio’s proposal that a short summary would be good for new contributors
Will need a small API that will generate the list of badges

Done – as a part of governance
Just a graphical representation of what’s in the governance repo
Do what you want with the badges in README files

Patches have been pushed to the projects initiating this change

Allowing Teams Based on Vendor-specific Drivers [10]

Option 1: https://review.openstack.org/403834 – Proprietary driver dev is unlevel
Option 2: https://review.openstack.org/403836 – Driver development can be level
Option 3: https://review.openstack.org/403839 – Level playing fields, except drivers
Option 4: https://review.openstack.org/403829 – establish a new "driver team" concept
Option 5: https://review.openstack.org/403830 – add resolution requiring teams to accept driver contributions

Thierry prefers this option
One of Flavio’s preferred options

Option 6: https://review.openstack.org/403826 – Add a resolution allowing teams based on vendor-specific drivers

Flavio’s other preferred option

Cirros Images to Change Default Password [11]

New password: gocubsgo
Not ‘cubswin:)’ anymore

Destructive/HA/Fail-over scenarios

Discussion started about adding end-user-focused test suites to test OpenStack clusters beyond what’s already available in Tempest [12]
Feedback is needed from users and operators on what preferred scenarios they would like to see in the test suite [5]
You can read more in the spec for High Availability testing [13] and the user story describing destructive testing [14], both of which are under review

Events discussion [15]

Efforts to remove duplicated functionality from OpenStack around providing event information to end users (Zaqar, Aodh)
It is also pointed out that the information in events can be sensitive, which means it needs to be handled carefully

 
[1] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108084.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2016-November/107982.html
[3] http://lists.openstack.org/pipermail/openstack-dev/2016-November/107961.html
[4] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108167.html
[5] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108062.html
[6] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108200.html
[7] https://etherpad.openstack.org/p/community-goals
[8] https://etherpad.openstack.org/p/community-goals-ocata-feedback
[9] http://lists.openstack.org/pipermail/openstack-dev/2016-November/107966.html
[10] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108074.html
[11] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108118.html
[12] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108062.html
[13] https://review.openstack.org/#/c/399618/
[14] https://review.openstack.org/#/c/396142
[15] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108070.html
[16] http://lists.openstack.org/pipermail/openstack-dev/2016-November/108089.html
Quelle: openstack.org

Enterprise cloud strategy: Applications and data in a multi-cloud environment

Say you’ve decided to hedge your IT bets with a multi-cloud environment. That’s excellent, except: what’s your applications and data strategy?
That’s not an idle question. The hard reality is that if you don’t coordinate your cloud environments, innovative applications will struggle to integrate with traditional systems. Cost management, security and compliance — like organizational swords of Damocles — will hover over your entire operation.
In working with clients who effectively manage multiple clouds, I see five key elements of applications and data strategy:
Data residency and locality
Data residency (sometimes called data sovereignty) defines where a company’s data physically resides, with rules for how it’s handled and transferred, including backup and disaster recovery scenarios. It’s often governed by countries or regions such as the European Union.
Data locality, on the other hand, determines how and where data should be stored for processing.
Taken together, data residency and locality affect your applications and your efforts to globally digitize more than anything else. Different cloud providers allow various levels of control over data placement. They also provide the tools to verify and ensure compliance with residency laws. In this regard, it’s crucial to have a common set of tools and processes.
Data backup and restoration across clouds are necessities. Your cloud services provider (CSP) must be able to handle this, telling you exactly where it places the data in its cloud. Likewise, you should know where the CSP stores copies of the data so you can replicate them to another location in case of a disaster or audit.
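To make this concrete, here is a minimal sketch of the kind of residency check such common tooling might run. It assumes an S3-compatible object store reachable through the AWS SDK for Python (boto3); the bucket names and the allowed-region policy are hypothetical.

```python
import boto3

# Hypothetical policy: EU-resident data may only live in these regions.
ALLOWED_REGIONS = {"eu-west-1", "eu-central-1"}

def residency_violations(bucket_names):
    """Return (bucket, region) pairs that break the residency policy."""
    s3 = boto3.client("s3")
    violations = []
    for name in bucket_names:
        # get_bucket_location reports None for the us-east-1 default.
        region = s3.get_bucket_location(Bucket=name)["LocationConstraint"] or "us-east-1"
        if region not in ALLOWED_REGIONS:
            violations.append((name, region))
    return violations

for bucket, region in residency_violations(["customer-records-eu", "analytics-archive"]):
    print(f"Residency violation: {bucket} resides in {region}")
```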
Security and compliance
You need a common set of security policies and implementations across your multi-cloud environment. This includes rules for identity management, authentication, vulnerability assessment, intrusion detection and other security areas.
In an environment with high compliance requirements, customer-managed encryption keys are also essential. You should pay attention to how and where they’re stored, as well as who has access to decrypted data, particularly CSP personnel.
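The principle behind customer-managed keys can be shown with client-side encryption: if data is encrypted before it ever reaches the provider, CSP personnel only see ciphertext. Below is a minimal sketch using the Python cryptography package; key storage, rotation and access control are deliberately left out.

```python
from cryptography.fernet import Fernet

# In practice this key lives in your own KMS or HSM, never with the CSP.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"sensitive record the provider must never read in the clear"
ciphertext = cipher.encrypt(record)  # this is all the CSP ever stores

# Only holders of the customer-managed key can recover the plaintext.
assert cipher.decrypt(ciphertext) == record
```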
Additionally, your CSP’s platform capabilities must securely manage infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), software-as-a-service (SaaS), business-process-as-a-service (BPaaS) and database-as-a-service (DBaaS) deployment models.
Also, the CSP’s cloud will invariably house multiple tenants. Your data should be segregated from other tenants with top-level access policies, segmentation and isolation.
Integration: APIs and API management
APIs are the connective tissue of your applications. They need effective lifecycle management across traditional, private cloud and public cloud applications.
Your CSP should provide an API lifecycle solution that includes “create, run, secure, manage” actions in a single offering. That solution should also have flexible deployment options — multi-cloud and on premises — managed from a single control plane. That gives you the ability to manage APIs, products, policies and users through one view across your cloud environments.
In assessing a CSP, it’s also worth knowing whether it can integrate PaaS services through a common gateway. Its API platform should be a distributed model to implement security and traffic policies, as well as proactively monitor APIs.
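As an illustration of the traffic policies such a distributed API platform enforces, here is a sketch of a token-bucket rate limiter, a common mechanism gateways use to cap request rates per client. The rate and burst values are made-up examples.

```python
import time

class TokenBucket:
    """Token-bucket limiter: permits short bursts, enforces an average rate."""

    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # a gateway would answer HTTP 429 Too Many Requests

# Hypothetical policy: 100 requests per second, bursts of up to 20.
limiter = TokenBucket(rate_per_sec=100, burst=20)
if not limiter.allow():
    print("429: rate limit exceeded")
```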
Portability and migration
When taking applications to a multi-cloud environment, you must choose among three migration models. You can “lift and shift” with a direct port of the application to the cloud, perform a full refactor that completely customizes the application, or choose a partial refactor in which only parts of the application are customized.
A lot rides on your CSP’s ability to support these models. Since legacy applications depend on infrastructure resiliency to satisfy uptime, you may not be able to fit them to the CSP’s deployment standards. In fact, such modifications may delay cloud benefits. To address this problem, consider containers for new applications deployed across your different clouds.
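The portability case for containers is that the same image runs unchanged wherever a container engine runs, on premises or in any provider’s cloud. A sketch using the Docker SDK for Python; the image and port mapping are arbitrary examples.

```python
import docker

# Connects to whichever Docker engine the environment points at,
# whether on premises or on a cloud provider's container hosts.
client = docker.from_env()

# The same image, unmodified, runs wherever the engine does.
client.images.pull("nginx:alpine")
container = client.containers.run(
    "nginx:alpine",
    detach=True,
    ports={"80/tcp": 8080},  # host port 8080 -> container port 80
)
print(f"Started container {container.short_id}")
```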
Some enterprises tackle migration by installing a physical appliance on the CSP’s premises or in co-located facilities, then integrating it with their applications. If you go this route, understand what the options are, particularly the technical limits with respect to data volumes, scale and latency.
New applications and tooling
To ensure efficiency, operations such as building, testing, and deploying applications should be linked together to create a continuous integration/continuous deployment (CI/CD) pipeline. These tool chains often require customization when they are part of a multi-cloud environment. One common error: new applications are designed for scalability in public cloud IaaS or PaaS scenarios, but their performance service-level agreements (SLAs) are not addressed early enough in the cycle. Understanding your CSP’s SLAs, along with designing and testing for performance, is crucial for successful global deployment.
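One way to address performance SLAs early is a latency gate in the pipeline itself: the build fails when a deployed test instance misses its target. A rough sketch with the requests library; the staging endpoint and the 300 ms p95 target are invented for illustration.

```python
import statistics
import time

import requests

# Hypothetical staging endpoint and SLA target.
ENDPOINT = "https://staging.example.com/api/health"
P95_TARGET_MS = 300.0

def measure_p95_ms(samples=50):
    latencies = []
    for _ in range(samples):
        start = time.perf_counter()
        requests.get(ENDPOINT, timeout=5)
        latencies.append((time.perf_counter() - start) * 1000)
    # quantiles(n=100) yields 99 cut points; index 94 is the 95th percentile.
    return statistics.quantiles(latencies, n=100)[94]

p95 = measure_p95_ms()
print(f"p95 latency: {p95:.1f} ms (target {P95_TARGET_MS} ms)")
# A non-zero exit code fails this stage of the pipeline.
raise SystemExit(0 if p95 <= P95_TARGET_MS else 1)
```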
For more information, read IBM Optimizes Multicloud Strategies for Enterprise Digital Transformation.
The post Enterprise cloud strategy: Applications and data in a multi-cloud environment appeared first on news.
Quelle: Thoughts on Cloud

Your Docker Agenda for December 2016

Thank you, community, for your amazing Global Mentor Week events last month! In November, the community organized over 110 Docker Global Mentor Week events, and more than 8,000 people enrolled in at least one of the courses, for 1,000+ course completions and counting! The five self-paced courses are now available free online for everyone. Check them out here!
As you gear up for the holidays, make sure to check out all the great events scheduled this month in Docker communities all over the world! From webinars to workshops to conference talks, here is our list of events coming up in December.
Official Docker Training Courses
View the full schedule of instructor-led training courses here!
 
Introduction to Docker:
This is a two-day, on-site or classroom-based training course which introduces you to the Docker platform and takes you through installing, integrating, and running it in your working environment.
Dec 7-8: Introduction to Docker with AKRA – Hamburg, Germany
 
Docker Administration and Operations:
The Docker Administration and Operations course consists of the Introduction to Docker course followed by the Advanced Docker Topics course, held over four consecutive days.
Dec 5-8: Docker Administration and Operations with Amazic – London, United Kingdom
Dec 6-9: Docker Administration and Operations with Vizuri – Atlanta, GA
Dec 12-15: Docker Administration and Operations with Docker Captain Luis Herrera – Madrid, Spain
Dec 12-15: Docker Administration and Operations with Kiratech – Milan, Italy
Dec 13-16: Docker Administration and Operations with TREEPTIK – Aix-en-Provence, France
Dec 19-22: Docker Administration and Operations with TREEPTIK – Paris, France
 
Advanced Docker Operations:
This two-day course is designed to help new and experienced systems administrators learn to use Docker, covering the Docker daemon, security, Docker Machine, Swarm Mode, and Compose.
Dec 7-8: Advanced Docker Operations with Amazic – London, United Kingdom
Dec 15-16: Advanced Docker Operations with Docker Captain Benjamin Wootton – London, United Kingdom
North America 
Dec 3rd: DOCKER MEETUP AT VISA – Reston, VA
Visa is hosting this month’s meetup! A talk entitled “Docker UCP 2.0 and DTR 2.1 GA” by Ben Grissinger (from Docker), followed by “Docker security” by Paul Novarese (from Docker).
Dec 3rd: DOCKER MEETUP IN HAVANA – Havana, Cuba
Join Docker Havana for their first-ever meetup! Work through the training materials from Docker’s Global Mentor Week series!
Dec 4th: GDG DEVFEST 2016 – Los Angeles, CA
Docker’s Mano Marks will be keynoting DevFest LA.
Dec 7th: DOCKER MEETUP AT MELTMEDIA – Phoenix, AZ
Join Docker Phoenix for a “Year in Review and Usage Roundtable”. 2016 was a big year for Docker; let’s talk about it!
Dec 13th: DOCKER MEETUP AT TORCHED HOP BREWING – Atlanta, GA
This month we’re having a social event, without a presentation, in combination with the Go and Kubernetes meetups at Torched Hop Brewing. Come hang out and have a drink or some food with us!
Dec 13th: DOCKER MEETUP AT GOOGLE – Seattle, WA
Tiffany Jernigan will give a talk on Docker orchestration (Docker Swarm Mode) and metrics collection, and Tsvi Korren will follow with a talk on securing your container environment.
Dec 14th: DOCKER MEETUP AT PUPPET LABS – Portland, OR
A talk by Nan Liu from Intel entitled “Trust but verify. Testing Docker containers.”
Dec 14th: DOCKER MEETUP AT DOCKER HQ – San Francisco, CA
Docker is joining forces with the Prometheus meetup group for a holiday mega-meetup with talks on using Docker with Prometheus and OpenTracing. As a special holiday gift, we will be giving away a free DockerCon 2017 ticket to one lucky attendee! Don’t miss out – RSVP now!
 
Dec 15th: DOCKER MEETUP AT GOGO – Chicago, IL
We will be welcoming Loris Degioanni of sysdig as he takes us through monitoring containers: the good, the bad, and best practices!
 
Europe
Dec 5th: DEVOPSCON MUNICH – Munich, Germany
Docker Captains Philipp Garbe, Gianluca Arbezzano, Viktor Farcic and Dieter Reuter will all be speaking at DevOpsCon.
Dec 6th: DOCKER MEETUP AT FOO CAFE STOCKHOLM – Stockholm, Sweden
In this session, you’ll learn about the container technology built natively into Windows Server 2016 and how you can reuse your knowledge, skills and tools from Docker on Linux. This session will be a mix of presentations, giving you an overview of the technology, and hands-on experiences, so make sure to bring your laptop.
Dec 6th: D cubed: Decision Trees, Docker and Data Science in the Cloud – London, United Kingdom
Steve Poole, a DevOps practitioner leading a team of engineers on cutting-edge DevOps exploration and a long-time IBM Java developer, leader and evangelist, will explain what Docker is and how it works.
Dec 8th: Docker Meetup at Pentalog Romania – Brasov, Romania
Come for a full overview of DockerCon 2016!
Dec 8th: DOCKER FOR .NET DEVELOPERS AND AZURE MACHINE LEARNING – Copenhagen, Denmark
For this meetup we get a visit from Ben Hall, who will talk about Docker for .NET applications, and Barbara Fusińska, who will talk about Azure Machine Learning.
Dec 8th: Introduction to Docker for Java Developers – Brussels, Belgium
Join us for the last session of 2016 and discover what Docker has to offer you!
Dec 14th: DOCKER MEETUP AT LA CANTINE NUMERIQUE – Tours, France
What’s new in the Docker ecosystem, plus a few more talks on Docker Compose and Swarm Mode.
Dec 15th: Docker Meetup at Stylight HQ – Munich, Germany
Join us for our end of the year holiday meetup! Check event page for more details.
Dec 15th: Docker Meetup at ENSEIRB – Bordeaux, France
Jeremiah Monsinjob and Florian Garcia will talk about Docker on dynamic platforms and microservices.
Dec 16th: Thessaloniki .NET Meetup about Docker – Thessaloniki, Greece
Byron Papadopoulos will cover what Docker technology is and when to use it; security, scaling and monitoring; the tools used with Docker (Docker Engine and Docker Compose); container orchestration engines; Docker in Azure (with a Docker Swarm Mode demo); and Docker for DevOps and for developers.
Dec 19th: Modern Microservices Architecture using Docker – Herzliyya, Israel
Microservices are all the rage these days, and Docker is a tool that makes managing microservices a whole lot easier. But what do microservices really mean? What are the best practices for composing your application from microservices? How can you leverage Docker and the public cloud to build a more agile DevOps process? How does the Azure Container Service fit in? Join us to find out.
Dec 21st: Docker Meetup at Campus Madrid – Madrid, Spain
Two talks. First, Diego Martínez Gil on dockerized apps running on Windows: Diego will present the new features available in Windows 10 and Windows Server 2016 for running dockerized applications. Second, Pablo Chico de Guzmán on Docker 1.13: Pablo will demo some of the features available in Docker 1.13.
 
Asia
Dec 10th: DOCKER MEETUP AT MANGALORE INFOTECH – Mangaluru, India
We are hosting the Mangalore edition of “The Docker Global Mentor Week.” Our goal is to provide easy-paced self-learning courses that will take you through the basics of Docker and make you well acquainted with most aspects of application delivery using Docker.
Dec 10th: BIMONTHLY MEETUP 2016 – DOCKER FOR PHP DEVELOPERS – Pune, India
If you are aching to get started with Docker but not sure how, this meetup is the right platform. We will start by explaining basic Docker concepts (what Docker is, its benefits, images, registries, containers, Dockerfiles and so on), followed by an optional hands-on workshop.
Dec 12th: DOCKER MEETUP AT MICROSOFT – Singapore, Singapore
Join us for our next meetup event!
Dec 20th: DOCKER MEETUP AT MICROSOFT – Riyadh, Saudi Arabia
Join us for a deep dive into Docker technology and how Microsoft and Docker work together. Learn about Azure IaaS and how to run Docker on Microsoft Azure.
Oceania
Dec 5th: DOCKER MEETUP AT CATALYST IT – Wellington, New Zealand
Join us for our next meetup!
Dec 5th: DOCKER MEETUP AT VERSENT PTY LTD – Melbourne, Australia
Yoav Landman, the CTO of JFrog, will talk to us about how new tools often introduce new paradigms. Yoav will examine the patterns and the anti-patterns for Docker image management, and what impact the new tools have on the battle-proven paradigms of the software development lifecycle.
Dec 13th: Action Cable & Docker – Wellington, New Zealand
Come check out a live demo of adding Docker to a rails app.
Africa
Dec 16th: Docker Meetup at Skylabase Inc. – Buea, Cameroon
Join us for a Docker Study Jam!


The post Your Docker Agenda for December 2016 appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/