Announcing DockerCon 2017

The Docker Team is excited to announce that the next DockerCon will be held in Austin, Texas, April 17-20. For anyone not in an event planning role, finding a venue is always an adventure, and finding one for a unique event such as DockerCon adds an extra layer of complexity. After inquiring about more than 15 venues and visiting 3 cities, we are confident that we have chosen a great venue for DockerCon 2017 and the Docker community.
DockerCon US 2017: Austin
April 17-20, 2017
With its lively tech community, amazing restaurants and culture, Austin is a natural fit for DockerCon. A diverse range of companies such as Dell, Whole Foods Market, Rackspace, HomeAway and many more of the hottest IT startups call Austin home. We can’t wait to welcome back many returning DockerCon alumni as well as open the DockerCon doors to so many new attendees and companies in the Austin area.
One of the most exciting additions to the DockerCon program is an extra day of content! We reviewed every attendee survey from Seattle in June, debriefed with Docker Captains and others in the community, and came to the overwhelming conclusion that two days was not enough time to get the most value out of the jam-packed DockerCon agenda. In 2017, we will introduce a third day of content that will repeat the top-voted sessions, give more time to complete Hands-on Labs, and allow more time for other learning opportunities that are in the works.
Let’s get this party started!
Save the dates:

Monday April 17: Paid training, afternoon workshops and evening welcome reception
Tuesday April 18: DockerCon Day 1, After Party
Wednesday April 19: DockerCon Day 2
Thursday April 20: DockerCon Day 3, a half day of repeat top sessions, Hands-on Labs and workshops

Pre-register now for early bird pricing and we’ll send you an additional $50 discount code once DockerCon registration launches.
 
Pre-register for DockerCon
 
Calling all speakers!
We’re excited to hear about all of the interesting ways you’re using Docker. We’re looking for a variety of talks: cool and unique use cases, Docker hack projects, advanced technical talks, or even a great talk on tech culture. Check out our sample CFP proposals for DockerCon for more information on what the program committee looks for when reviewing a proposal, our tips for getting a proposal accepted, and previous talks from DockerCon 2016. Our Call for Proposals will be open from November 17, 2016, through January 7, 2017.
Are you interested in learning more about sponsorship opportunities at DockerCon? Please sign up here to be among the first to receive the sponsorship prospectus.
 
Sponsor DockerCon
 
So, by now you’ve read this entire blog post and are shouting, “What about DockerCon Europe?!” The truth is that we spent many months searching for an available venue and were unable to secure a site for this year. The reality is that the conference industry is incredibly competitive, and we need to lock in venues further in advance. For this reason, we are now working on bringing DockerCon back to Europe in 2017. We will update the community as soon as we have concrete details.
 
About DockerCon
DockerCon 2017 is a three-day, Docker-centric conference organized by Docker. This year’s US edition will take place in Austin, TX and continue to build on the success of previous events as it grows to reflect Docker’s established ecosystem and ever-growing community. DockerCon will feature topics and content covering all aspects of Docker and will be suitable for Developers, DevOps, Ops, System Administrators and C-level executives. You will have ample opportunities to connect and learn about how others are using Docker. We’re confident that no matter your level of expertise with Docker or your company size, you’ll meet and learn from other attendees who share the same use cases and have overcome the same challenges using Docker.


The post Announcing DockerCon 2017 appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Develop Cloud Applications for OpenStack on Murano, Day 5: Uploading and troubleshooting the app

We're in the home stretch! So far, we’ve explained what Murano is, created an OpenStack cluster with Murano, built the main script that will install our application, and packaged it as a Murano app. We're finally ready to deploy the app to Murano.
Now let’s upload the PloneServerApp package to Murano.
Add the Murano app to the OpenStack Application Catalog
To upload an application to the cloud:

Log into the OpenStack Horizon dashboard.
Navigate to Applications > Manage > Packages.
Click the Import Package button.

Select the zip package that we created yesterday and click Next.
In the pop-up window you can see the information that we added to the manifest.yaml file earlier. You’ll also get a notification message that Glance has started retrieving the Ubuntu image mentioned in image.lst. (This only happens if the image doesn't already exist in Glance.)

Now we just have to wait for the image to finish saving so we can move on to try out the app.  To check on that, go to Projects > Images. Wait for the status to be listed as Active rather than Saving.
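If you prefer the command line, you can watch the image status there instead. This is just a sketch; it assumes the python-openstackclient package is installed and your credentials are sourced, and the image name matches the one used later in this series:
$ # List images and their status; wait for the Ubuntu image to become "active":
$ openstack image list
$ # Or query that one image directly:
$ openstack image show 'ubuntu-14.04-m-agent.qcow2' -c status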

Deploy the new app
Now that we've created the app, it's time to test it out in all its glory.

Navigate to Applications > Catalog > Browse.

You will find that the Plone Server has appeared with the icon from our logo.png file. Click Quick deploy and you’ll see the configuration wizard appear, with all of the information we added to the ui.yaml file in the appConfiguration form:

Click on Assign Floating IP and click Next.
You’ll then see the instanceConfiguration form we mentioned in the ui.yaml file:

Choose an appropriate instance flavor. In my case I used an “m1.small” flavor and edited it to have 1 CPU, 1GB RAM, and 20GB of disk space. I also shut down the Compute node VM and gave it more RAM in VirtualBox: 2GB instead of 1GB. You can edit flavors by navigating to Admin > System > Flavors.
Be aware that if you select a flavor that requires more hardware than your Compute node actually has, you will get an error while spawning the instance.
Choose the instance image that we mentioned in image.lst. If no images appear in the drop-down menu, check that your image has finished uploading.
Choose a Key Pair, or create one on the spot by clicking the “+” button:

Click Next.
Set the Application Name and click Create:

The Plone Server application has now been successfully added to the newly created quick-env-1 environment. Click the Deploy This Environment button to start the deployment:

It may take some time for the environment to deploy:

Wait until the status has changed from Deploying to Ready:

Once it does, go to the Plone home page at http://172.16.0.134:8080 from your host OS browser, that is, from outside your OpenStack cloud:

You should see the Plone home page. If you don't, you'll need to do some troubleshooting.
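A quick first check, assuming curl is available on your host (substitute your own floating IP and port):
$ curl -I http://172.16.0.134:8080
A healthy instance typically returns an HTTP 200 response (or a redirect to the Plone site); a connection error points at networking or the security group, while an HTTP error points at the Plone installation itself.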
Debugging and Troubleshooting Your Murano App
While deploying your Murano app you may have encountered a number of errors. Some of them could be related to spawning a VM; others may have occurred during runPloneDeploy.sh execution.
For information on errors related to spawning the VM, check the Horizon UI. Navigate to Applications > Catalog > Environments, then click the environment and open the Deployment History page. Click the Show Details button in the corresponding deployment row of the table, then go to the Logs tab. There you can see the steps of the deployment; failed steps are shown in red.
Several of the most frequently occurring errors, as well as their suggested solutions, are described in the Murano documentation.
The other type of error relates to the application install script, runPloneDeploy.sh. As you may remember, we collect all output from this script in a special log file, /var/log/runPloneDeploy.log, to help you track down any possible issues. Knowing the floating IP address of the newly created Plone Server VM, we can access the log file via an SSH connection.
It's important to note, though, that because we used a special Ubuntu image from the repository during the environment deployment, the login process has a security limitation: by default, the password authentication mechanism is turned off, and the only way to connect to your VM is to use an access key pair. You can find out more about how to create and set this up here.
First log in to the VM as the default user, ubuntu:
$ ssh -i <private_key> ubuntu@<floating IP address>
You can then read the log:
$ less /var/log/runPloneDeploy.log
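If the log is long, you can jump straight to likely problem lines instead of paging through the whole file; the pattern here is only a starting point:
$ grep -iE 'error|fail' /var/log/runPloneDeploy.log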
Now it’s possible to fix the errors that have appeared and polish the installation process.
Remember, when encountering issues with your Murano app, you can always contact the Murano team, or any other OpenStack-related team, through IRC. You can find the list of IRC channels here: IRC. Feel free to ask any questions.
Summary
In this series, we outlined the creation process of a Murano app for the ultimate enterprise CMS, Plone. We also saw how easy it is to build a Murano app from the ground up, and showed that it doesn’t require you to be an OpenStack or Linux guru.
Murano is a great OpenStack service that provides application lifecycle management and dramatically simplifies the introduction of new software to the OpenStack community.
Moreover, it provides other great features not mentioned in this tutorial, such as High Availability mode, autoscaling, and application dependency management.
Try it out for yourself and see how easy it is. Next time, we'll look at the steps needed to publish your Murano app in the OpenStack application catalog at http://apps.openstack.org.
Thanks for joining us!
The post Develop Cloud Applications for OpenStack on Murano, Day 5: Uploading and troubleshooting the app appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis

Top 5 Docker Questions from Microsoft Ignite

Last week was a busy one for the team at Microsoft Ignite in Atlanta. Between the exciting announcement about the next evolution of the Docker and Microsoft relationship and the availability of Docker for Windows Server 2016 workloads, the show floor, general session, keynotes, and breakout sessions were all abuzz about Docker for Windows. Whether you attended or not, we want to make sure you didn’t miss a thing. Here are the key announcements at this year’s Microsoft Ignite:

Docker Doubles Container Market with Support for Windows Workloads
Availability of Docker For Windows Server 2016
Commercially supported Docker Engine available in Windows Server 2016

Cool @VisualStudio and @docker integration being demoed by @shanselman at auto creation of Dockerfiles & debug inside containers. pic.twitter.com/HVDHKmwRrL
— Marcus Robinson (@techdiction) September 26, 2016

Wow @Docker engine included with all Server 2016 deployments. MSIgnite
— Joe Kelly (@_JoeKelly_) September 26, 2016

 
Here are the top 5 questions we heard in the Docker booth:

What are containers?

Container technology has been around for more than a decade, but as the leader in the containerization market, Docker has made the technology usable and accessible to all developers and sysadmins. Containers allow developers and IT pros to package an application into a standardized unit for software development, making it highly portable and able to run across any operating system. Each container contains a complete filesystem with everything needed to run: code, runtime, system tools, system libraries; essentially, anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment, without having to make any changes to the underlying code. Docker containers were previously only available to the Linux community; with the announcement of Docker for Windows Server 2016, Docker containers are now available for Windows workloads, addressing 98% of enterprise workloads.
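As a minimal illustration of that portability (the image and tag names here are just examples):
# The same image runs unchanged on any host with a Docker Engine:
$ docker pull ubuntu:14.04
$ docker run --rm ubuntu:14.04 echo "same code, same libraries, any host"
# Package your own app and its dependencies into an image of its own:
$ docker build -t myorg/myapp .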
 

How is this different than App-V Application Virtualization?

Those in the Windows OS world are familiar with Microsoft App-V or ThinApp, and naturally there were questions comparing them to Docker containers. Application virtualization is used to package a full application with the relevant OS libraries into a single executable. Docker, by contrast, is a set of tooling used to build server-based applications; a single application could be comprised of one or hundreds of containers connected together. App-V is used for desktop applications and is not designed for server-based applications. The most common example is packaging browsers with extensions so they can access custom web apps; each App-V package can reside on a laptop with different extensions, plugins, etc. To learn more about application virtualization and Docker, read our blog: There’s Application Virtualization and There’s Docker
 

How do I get started with Docker for Windows Server?

Integrating Visual Studio Tools for Docker and Docker for Windows provides desktop development environments for building Dockerized Windows apps. Getting started takes just a few easy steps:

Pick your tool:

The latest Anniversary update for Windows 10 offers containerization support for the Windows 10 kernel.
To run Windows containers in production at scale, download a free evaluation version of Windows Server 2016 and install it on bare metal or in a VM running on Hyper-V, VirtualBox or similar.

Install a Windows Docker Engine on your system with the Docker for Windows public beta.
Run your first Windows container in just a few steps with the instructions listed on the “Getting Started with Docker for Windows” webpage (see the short example after this list).
Create your own Dockerfile with our Image2Docker tool, a PowerShell module that points at a virtual hard disk image, scans for common Windows components, and suggests a Dockerfile. Read the blog to learn more and get started.
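As a taste of step 3, here is roughly what running a first Windows container looks like, assuming the Windows Server Core base image name current at the time of writing:
# From PowerShell or cmd on Windows Server 2016 (or Windows 10 Anniversary Update):
docker run -it microsoft/windowsservercore cmd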

For a complete list of instructions, read our blog post Build And Run Your First Docker Windows Server Container, and view Windows Server container base images and applications from Microsoft on Docker Hub.
 

How do I manage containers?

Docker Datacenter is the integrated container orchestration and management platform for IT pros. Today Docker Datacenter is available on Azure to manage Linux application environments. With the availability of Windows Server 2016 and Docker Engine, we are planning a beta of Docker Datacenter management for Windows Server-based applications in Q4 2016. Sign up here to be notified of the beta.
 

Where can I learn more?

There are lots of great resources and sessions to help you learn more. Whether you attended the conference or watched online, here’s a wrap-up of the top five sessions from Microsoft Ignite:

General Session with Scott Guthrie, EVP Cloud and Enterprise at Microsoft, and Daryll Fogal, CTO at Tyco

 

Keynote: “Reinvent IT infrastructure for business agility” with Jason Zander, CVP Microsoft Azure and Ben Golub, CEO of Docker

 

Breakout sessions:

Walk the path to containerization – transforming workloads into containers
Accelerate application delivery with Docker Containers and Windows Server 2016
Dive into the new world of Windows Server and Hyper-V Containers


Resources

Learn more about Docker on Windows Server
Sign up to be notified of GA and the Docker Datacenter for Windows Beta
Register for a webinar: Docker for Windows Server
Learn more about the Docker and Microsoft partnership

The post Top 5 Docker Questions from Microsoft Ignite appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Develop Cloud Applications for OpenStack on Murano, Day 4: The application, part 2: Creating the Murano App

So far in this series, we've explained what Murano is, created an OpenStack cluster with Murano, and built the main script that will install our application. Now it's time to actually package PloneServerApp up for Murano.
In this series, we're looking at a very basic example, and we'll tell you all you need to make it work, but there are some great tutorials and references that describe this process (and more) in detail.  You can find them in the official Murano documentation:

Murano package structure
Create Murano application step-by-step
Murano Programming Language Reference

So before we move on, let's just distill that down to the basics.
What we're ultimately trying to do
When we're all finished, what we want is basically a *.zip file structured in the way Murano expects, with files that provide all of the information it needs. There's nothing really magical about this process; it's just a matter of creating the various resources. In general, the structure of a Murano application looks something like this:
..
|_  Classes
|   |_  PloneServer.yaml
|
|_  Resources
|   |_  scripts
|       |_ runPloneDeploy.sh
|   |_  DeployPloneServer.template
|
|_  UI
|   |_  ui.yaml
|
|_  logo.png
|
|_  manifest.yaml
Obviously the filenames (and content!) will depend on your specific application, but you get the idea. (If you'd like to see the finished version of this application, you can get it from GitHub.)
When we've assembled all of these pieces, we'll zip them up and they'll be ready to import into Murano.
Let's take a look at the individual pieces.
The individual files in a Murano package
Each of the individual files we're working with is basically just a text file.
The Manifest file
The manifest.yaml file contains the main application’s information. For our PloneServerApp, that means the following:
1. #  Plone uses GPL version 2 as its license. As of summer 2009, there are
2. #  no active plans to upgrade to GPL version 3.
3. #  You may obtain a copy of the License at
4. #
5. #       http://www.gnu.org
6. #
7.
8. Format: 1.3
9. Type: Application
10. FullName: org.openstack.apps.plone.PloneServer
11. Name: Plone CMS
12. Description: |
13.  The Ultimate Open Source Enterprise CMS.
14.  The Plone CMS is one of the most secure
15.  website systems available. This installer
16.  lets you deploy Plone in standalone mode.
17.  Requires Ubuntu 14.04 image with
18.  preinstalled murano-agent.
19. Author: 'Evgeniy Mashkin'
20. Tags: [CMS, WCM]
21. Classes:
22.  org.openstack.apps.plone.PloneServer: PloneServer.yaml
Let’s start at Line 8:
Format: 1.3
The versioning of the manifest format is directly connected with YAQL and the version of Murano itself. See the short description of format versions and choose the format version according to the OpenStack release you're going to develop your application for. In our case, we're using Mirantis OpenStack 9.0, which is built on the Mitaka OpenStack release, so I chose the 1.3 version that corresponds to Mitaka.
Now let’s move to Line 10:
FullName: org.openstack.apps.plone.PloneServer
Here you're adding a fully qualified name for your application, including the namespace of your choice.
IMPORTANT: Don't use the io.murano namespace for your apps; it's reserved for the Murano Core Library.
Lines 11 through 20 show the Name, Description, Author and Tags, which will be shown in the UI:
Name: Plone CMS

Description: |
  The Ultimate Open Source Enterprise CMS.
  The Plone CMS is one of the most secure
  website systems available. This installer
  lets you deploy Plone in standalone mode.
  Requires Ubuntu 14.04 image with
  preinstalled murano-agent.
Author: 'Evgeniy Mashkin'
Tags: [CMS, WCM]
Finally, on lines 21 and 22, you'll point to your application class file (which we'll build later). This file should be in the Classes directory of the package.
Classes:
  org.openstack.apps.plone.PloneServer: PloneServer.yaml
Make sure to double-check all of your references, filenames, and whitespace, as mistakes here can cause errors when you upload your application package to Murano.
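A cheap way to catch YAML mistakes before uploading is to parse each file locally. This is just a sketch; it assumes Python with the PyYAML library is available:
$ python -c "import sys, yaml; yaml.safe_load(open(sys.argv[1]))" manifest.yaml && echo OK
Run the same one-liner against Classes/PloneServer.yaml and UI/ui.yaml; a traceback will point at the offending line.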
Execution Plan Template
The execution plan template, DeployPloneServer.template, describes the installation process of the Plone Server on a virtual machine and contains instructions to the murano-agent on what should be executed to deploy the application. Essentially, it tells Murano how to handle the runPloneDeploy.sh script we created yesterday.
Here's the DeployPloneServer.template listing for our PloneServerApp:
1. #  Plone uses GPL version 2 as its license. As of summer 2009, there are
2. #  no active plans to upgrade to GPL version 3.
3. #  You may obtain a copy of the License at
4. #
5. #       http://www.gnu.org
6. #
7. FormatVersion: 2.0.0
8. Version: 1.0.0
9. Name: Deploy Plone
10. Parameters:
11.  pathname: $pathname
12.  password: $password
13.  port: $port
14. Body: |
15.  return ploneDeploy('{0} {1} {2}'.format(args.pathname, args.password, args.port)).stdout
16. Scripts:
17.  ploneDeploy:
18.    Type: Application
19.    Version: 1.0.0
20.    EntryPoint: runPloneDeploy.sh
21.    Files: []
22.    Options:
23.      captureStdout: true
24.      captureStderr: true
In lines 10 through 13, you can see that we're defining our parameters: the installation path, administrative password, and TCP port. Just as we supplied them on the command line yesterday, we need to tell Murano to ask the user for them.
Parameters:
 pathname: $pathname
 password: $password
 port: $port
In the Body section we have a string that describes the Python statement to execute, and how it will be executed by the Murano agent on the virtual machine:
Body: |
 return ploneDeploy('{0} {1} {2}'.format(args.pathname, args.password, args.port)).stdout
Scripts defined in the Scripts section are invoked from here, so we need to keep the order of arguments consistent with the runPloneDeploy.sh script that we developed yesterday.
Also, double-check all filenames, whitespace, and brackets; mistakes here can cause the Murano agent to fail when it tries to run our installation script. If an error does occur, connect to the spawned VM via SSH and check the runPloneDeploy.log file we added for just this purpose.
Dynamic UI form definition
In order for the user to be able to set parameters such as the administrative password, we need to make sure that the user interface is set up correctly. We do this with the ui.yaml file, which describes the forms shown to users and the installation options they can set. The ui.yaml file for our PloneServerApp reads as follows:
1. #  Plone uses GPL version 2 as its license. As of summer 2009, there are
2. #  no active plans to upgrade to GPL version 3.
3. #  You may obtain a copy of the License at
4. #
5. #       http://www.gnu.org
6. #
7. Version: 2.3
8. Application:
9.  ?:
10.    type: org.openstack.apps.plone.PloneServer
11.  pathname: $.appConfiguration.pathname
12.  password: $.appConfiguration.password
13.  port: $.appConfiguration.port
14.  instance:
15.    ?:
16.      type: io.murano.resources.LinuxMuranoInstance
17.    name: generateHostname($.instanceConfiguration.unitNamingPattern, 1)
18.    flavor: $.instanceConfiguration.flavor
19.    image: $.instanceConfiguration.osImage
20.    keyname: $.instanceConfiguration.keyPair
21.    availabilityZone: $.instanceConfiguration.availabilityZone
22.    assignFloatingIp: $.appConfiguration.assignFloatingIP
23. Forms:
24.  - appConfiguration:
25.      fields:
26.        - name: license
27.          type: string
28.          description: GPL License, Version 2
29.          hidden: true
30.          required: false
31.        - name: pathname
32.          type: string
33.          label: Installation pathname
34.          required: false
35.          initial: '/opt/plone/'
36.          description: >-
37.            Use to specify the top-level path for installation.
38.        - name: password
39.          type: string
40.          label: Admin password
41.          required: false
42.          initial: 'admin'
43.          description: >-
44.            Enter administrative password for Plone.
45.        - name: port
46.          type: string
47.          label: Port
48.          required: false
49.          initial: '8080'
50.          description: >-
51.            Specify the port that Plone will listen to
52.            on available network interfaces.
53.        - name: assignFloatingIP
54.          type: boolean
55.          label: Assign Floating IP
56.          description: >-
57.             Set to true to assign a floating IP automatically.
58.          initial: false
59.          required: false
60.        - name: dcInstances
61.          type: integer
62.          hidden: true
63.          initial: 1
64.  - instanceConfiguration:
65.      fields:
66.        - name: title
67.          type: string
68.          required: false
69.          hidden: true
70.          description: Specify some instance parameters on which the application would be created
71.        - name: flavor
72.          type: flavor
73.          label: Instance flavor
74.          description: >-
75.            Select a flavor registered in OpenStack. Consider that
76.            application performance depends on this parameter
77.          requirements:
78.            min_vcpus: 1
79.            min_memory_mb: 256
80.          required: false
81.        - name: minrequirements
82.          type: string
83.          label: Minimum requirements
84.          description: |
85.            - Minimum 256 MB RAM and 512 MB of swap space per Plone site
86.            - Minimum 512 MB hard disk space
87.          hidden: true
88.          required: false
89.        - name: recrequirements
90.          type: string
91.          label: Recommended
92.          description: |
93.            - 2 GB or more RAM per Plone site
94.            - 40 GB or more hard disk space
95.          hidden: true
96.          required: false
97.        - name: osImage
98.          type: image
99.          imageType: linux
100.          label: Instance image
101.          description: >-
102.            Select a valid image for the application. The image
103.            should already be prepared and registered in Glance
104.        - name: keyPair
105.          type: keypair
106.          label: Key Pair
107.          description: >-
108.            Select the Key Pair to control access to instances. You can login to
109.            instances using this KeyPair after the deployment of application.
110.          required: false
111.        - name: availabilityZone
112.          type: azone
113.          label: Availability zone
114.          description: Select the availability zone where the application will be installed.
115.          required: false
116.        - name: unitNamingPattern
117.          type: string
118.          label: Instance Naming Pattern
119.          required: false
120.          maxLength: 64
121.          regexpValidator: '^[a-zA-Z][-_\w]*$'
122.          errorMessages:
123.            invalid: Just letters, numbers, underscores and hyphens are allowed.
124.          helpText: Just letters, numbers, underscores and hyphens are allowed.
125.          description: >-
126.            Specify a string that will be used in the instance hostname.
127.            Just A-Z, a-z, 0-9, dash and underline are allowed.
This is a pretty long file, but it's not as complicated as it looks.
Starting at line 7:
Version: 2.3
The format version for the UI definition is optional and its default value is the latest supported version. If you want to use your application with one of the previous versions you may need to set the version field explicitly.
Moving down the file, we basically have two UI forms: appConfiguration and instanceConfiguration.
Each form contains a list of the parameters that will appear on it. We place all of the parameters related to our Plone Server application on the appConfiguration form, including the path, password and TCP port. These will then be sent to the Murano agent to invoke the runPloneDeploy.sh script:
       - name: pathname
         type: string
         label: Installation pathname
         required: false
         initial: '/opt/plone/'
         description: >-
           Use to specify the top-level path for installation.
       - name: password
         type: string
         label: Admin password
         required: false
         initial: 'admin'
         description: >-
           Enter administrative password for Plone.
       - name: port
         type: string
         label: Port
         required: false
         initial: '8080'
         description: >-
           Specify the port that Plone will listen to
           on available network interfaces.
For each parameter we also set initial values that will be used as defaults.
On the instanceConfiguration form, we’ll place all of the parameters related to instances that will be spawned during deployment. We need to set hardware limitations, such as minimum hardware requirements, in the requirements section:
       - name: flavor
         type: flavor
         label: Instance flavor
         description: >-
           Select a flavor registered in OpenStack. Consider that
           application performance depends on this parameter
         requirements:
           min_vcpus: 1
           min_memory_mb: 256
         required: false
Also, we need to add notices for users about minimum and recommended Plone hardware requirements on the UI form:
       - name: minrequirements
         type: string
         label: Minimum requirements
         description: |
           - Minimum 256 MB RAM and 512 MB of swap space per Plone site
           - Minimum 512 MB hard disk space
         hidden: true
         required: false
       - name: recrequirements
         type: string
         label: Recommended
         description: |
           - 2 GB or more RAM per Plone site
           - 40 GB or more hard disk space
Murano PL Class Definition
Perhaps the most complicated part of the application is the class definition.  Contained in PloneServer.yaml, it describes the methods the Murano agent must be able to execute in order to manage the application. In this case, the application class looks like this:
1. #  Plone uses GPL version 2 as its license. As of summer 2009, there are
2. #  no active plans to upgrade to GPL version 3.
3. #  You may obtain a copy of the License at
4. #
5. #       http://www.gnu.org
6. #
7. Namespaces:
8.  =: org.openstack.apps.plone
9.  std: io.murano
10.  res: io.murano.resources
11.  sys: io.murano.system
12. Name: PloneServer
13. Extends: std:Application
14. Properties:
15.  instance:
16.    Contract: $.class(res:Instance).notNull()
17.  pathname:
18.    Contract: $.string()
19.  password:
20.    Contract: $.string()
21.  port:
22.    Contract: $.string()
23. Methods:
24.  .init:
25.    Body:
26.      - $._environment: $.find(std:Environment).require()
27.  deploy:
28.    Body:
29.      - If: not $.getAttr(deployed, false)
30.        Then:
31.          - $._environment.reporter.report($this, 'Creating VM for Plone Server.')
32.          - $securityGroupIngress:
33.            - ToPort: 80
34.              FromPort: 80
35.              IpProtocol: tcp
36.              External: true
37.            - ToPort: 443
38.              FromPort: 443
39.              IpProtocol: tcp
40.              External: true
41.            - ToPort: $.port
42.              FromPort: $.port
43.              IpProtocol: tcp
44.              External: true
45.          - $._environment.securityGroupManager.addGroupIngress($securityGroupIngress)
46.          - $.instance.deploy()
47.          - $resources: new(sys:Resources)
48.          - $template: $resources.yaml('DeployPloneServer.template').bind(dict(
49.                pathname => $.pathname,
50.                password => $.password,
51.                port => $.port
52.              ))
53.          - $._environment.reporter.report($this, 'Instance is created. Deploying Plone')
54.          - $.instance.agent.call($template, $resources)
55.          - $._environment.reporter.report($this, 'Plone Server is installed.')
56.          - If: $.instance.assignFloatingIp
57.            Then:
58.              - $host: $.instance.floatingIpAddress
59.            Else:
60.              - $host: $.instance.ipAddresses.first()
61.          - $._environment.reporter.report($this, format('Plone Server is available at http://{0}:{1}', $host, $.port))
62.          - $.setAttr(deployed, true)
First we set the namespaces and class name, then define the properties we'll be using later. We can then move on to the methods.
Besides the standard init method, our PloneServer class has one main method, deploy, which handles instance spawning and configuration. The deploy method performs the following tasks:

It configures a security group, opening TCP ports 80 and 443 as well as our custom TCP port (as chosen by the user):
         - $securityGroupIngress:
           - ToPort: 80
             FromPort: 80
             IpProtocol: tcp
             External: true
           - ToPort: 443
             FromPort: 443
             IpProtocol: tcp
             External: true
           - ToPort: $.port
             FromPort: $.port
             IpProtocol: tcp
             External: true
         - $._environment.securityGroupManager.addGroupIngress($securityGroupIngress)

It initiates the spawning of a new virtual machine:
        - $.instance.deploy()

It creates a Resources object, then loads the execution plan template (in the Resources directory) into it, updating the plan with parameters taken from the user:
         - $resources: new(sys:Resources)
         - $template: $resources.yaml('DeployPloneServer.template').bind(dict(
               pathname => $.pathname,
               password => $.password,
               port => $.port
             ))

It sends the ready-to-execute plan to the Murano agent:
         - $.instance.agent.call($template, $resources)

Lastly, it assigns a floating IP to the newly spawned machine, if that option was chosen:
         - If: $.instance.assignFloatingIp
           Then:
             - $host: $.instance.floatingIpAddress
           Else:
             - $host: $.instance.ipAddresses.first()

Before we move on, just a few words about floating IPs. Here are the key points from Piotr Siwczak's article, “Configuring Floating IP addresses for Networking in OpenStack Public and Private Clouds”:
“The floating IP mechanism, besides exposing instances directly to the Internet, gives cloud users some flexibility. Having “grabbed” a floating IP from a pool, they can shuffle them (i.e., detach and attach them to different instances on the fly) thus facilitating new code releases and system upgrades. For sysadmins it poses a potential security risk, as the underlying mechanism (iptables) functions in a complicated way and lacks proper monitoring from the OpenStack side.”
Be aware that OpenStack is rapidly changing, and some of the article’s statements may become obsolete, but the point is that there are advantages and disadvantages to using floating IPs.
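If you'd rather manage floating IPs outside of Murano, a recent python-openstackclient can do it from the shell (older releases used openstack ip floating ... instead; the network and server names below are illustrative):
$ # Allocate a floating IP from the external network's pool:
$ openstack floating ip create ext-net
$ # Attach it to a running instance:
$ openstack server add floating ip my-plone-vm 172.16.0.134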
Image File
In order to use OpenStack, you generally need an image to serve as the template for the VMs you spawn. In some cases, those images will already be part of your cloud, but if not, you can specify them in the image.lst file. When you list an image in this file and include the file in your package, the image will be uploaded to your cloud automatically. When importing images from the image.lst file, the client simply searches the images directory of the package for a file with the same name as the name attribute of the image.
An image file is optional, but to make sure your Murano app works, you need to point to an image with a pre-installed Murano agent. In our case, it is Ubuntu 14.04 with a preinstalled Murano agent:
Images:
- Name: 'ubuntu-14.04-m-agent.qcow2'
  Hash: '393d4f2a7446ab9804fc96f98b3c9ba1'
  Meta:
    title: 'Ubuntu 14.04 x64 (pre-installed murano-agent)'
    type: 'linux'
  DiskFormat: qcow2
  ContainerFormat: bare
Application Logo
The logo.png file is a preview image that will be visible to users in the application catalog. Having a logo file is optional, but for now, let’s choose this one:

Create a Package
Finally, now that all the files are ready, go to the package files directory (where the manifest.yaml file is located) and create a .zip package:
$ zip -r org.openstack.apps.plone.PloneServer.zip *
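If you'd rather skip the Horizon import dialog, the package can also be uploaded from the command line; this assumes the python-muranoclient package is installed and your OpenStack credentials are sourced:
$ murano package-import org.openstack.apps.plone.PloneServer.zip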
Tomorrow we'll wrap up by showing you how to add your new package to the Murano application catalog.
The post Develop Cloud Applications for OpenStack on Murano, Day 4: The application, part 2: Creating the Murano App appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis

Using the CloudForms Topology View

When working with complex provider environments with many objects, the topology widget in Red Hat CloudForms can be extremely useful to quickly view and categorize information. The topology view provides the ability to view a container provider’s objects plus their details, such as the properties, status, and relationships to other objects on the provider. The topology view is also quite useful for showing cross-links between objects — all of which can be very difficult to visualize when only viewing an object’s summary page.

First introduced in CloudForms 4.0, the topology view was previously available only for containers providers. As of CloudForms 4.1, the topology view can also be used for network providers for viewing objects such as cloud subnets, floating IPs, and network routers:

In addition to topology view for network providers, CloudForms 4.1 also adds a search bar so that objects can be easily searched by name.
The topology view for containers is accessed through the Compute menu, by navigating to Compute > Containers > Topology. The network providers topology is accessed from Networks > Topology. In addition, you can access the topology view from the provider summary screen by clicking the icon in the Topology section.

Simplifying details with the Topology View
First of all, the topology view allows you to simplify information.
In the example below, the OpenShift Enterprise environment comprises multiple objects. It can be difficult to track down specific information, for example the OpenShift nodes. The topology view can assist in sorting out the objects to focus only on the relevant ones.

To view only the nodes, click each object except for Nodes in the top bar to toggle their display off. When the display is off for these objects, they appear greyed out. As a result, in the next screenshot, all you see are the container provider’s nodes:

Alternatively, you can also rearrange the topology diagram by dragging the objects into a manageable layout. For example, if you want to isolate one node and see its relationships, click on the node and pull it to one side of the topology map.
The following screenshot shows one node isolated from the rest, so you can easily see its relationships:

Searching with the Topology View
To make finding objects even simpler, Red Hat CloudForms 4.1 adds a search bar to the topology widget to allow you to search for an object by name. Search for a name, and any unrelated objects are greyed out, so that only the objects you’ve searched for are highlighted.
In the following example, a search for the name “ruby-hello-world” reveals a container and a service by that name:

Note that you must cancel the search by clicking the X button before running your next search.
Identifying Relationships with the Topology View
Let’s say you wanted to find out which cloud network provider your floating IPs were attached to. Your network provider setup is fairly complicated, with many cross-linked relationships:

To make viewing your floating IPs much easier, hide the objects you aren’t interested in by clicking them on the top menu bar —  in this case, hide everything but Floating IPs and Cloud Networks.
As a result, you can easily see that 10 of your floating IPs are attached to the cloud network on the left, 7 floating IPs are attached to the cloud network on the right, and one floating IP is not attached to any cloud network. If you want more details on which cloud network is connected to 10 floating IPs and which is connected to 7 floating IPs, you could find out more details by either hovering over the cloud network icons or double clicking on the icon to open a summary page. Right clicking on the icon brings up a dialog with additional actions you can perform on the item.

Troubleshooting with the Topology View
The topology view is also very useful for identifying objects that are not functioning properly, along with errors. Objects that are active and running correctly are displayed with a green outline, while objects with problems are outlined in red; a grey outline signifies an unknown status. Let’s look at this containers topology example again:

This image shows four non-functional nodes. After identifying these nodes, you can find more information by either:

Hovering over the node: This will show the object’s name, type (node), and status.
Double-clicking on a node: This opens a summary page listing the node’s properties, labels, relationships, conditions, and Smart Management tags.
Toggling the Display Names checkbox: This shows all displayed objects’ names.

You can then quickly narrow down where there may be a problem, and troubleshoot from there.
For more information about working with these provider types, see the Red Hat CloudForms 4.1 Managing Providers guide:

For containers providers, see Chapter 5. Containers Providers
For network providers, see Chapter 4. Network Providers

Source: CloudForms

Develop Cloud Applications for OpenStack on Murano, Part 3: The application, part 1: Understanding Plone deployment

OK, so far, in Part 1 we talked about what Murano is and why you need it, and in Part 2 we put together the development environment, which consists of a text editor and a small OpenStack cluster with Murano. Now let's start building the actual Murano app.
What we're trying to accomplish
In our case, we're going to create a Murano App that enables the user to easily install the Plone CMS. We'll call it PloneServerApp.
Plone is an enterprise level CMS (think WordPress on steroids).  It comes with its own installer, but it also needs a variety of libraries and other resources to be available to that installer.
Our task will be to create a Murano app that lets the user supply the information the installer needs, then creates the necessary resources (such as a VM), configures them properly, and executes the installer.
To do that, we'll start by looking at the installer itself, so we understand what's going on behind the scenes.  Once we've verified that we have a working script, we can go ahead and build a Murano package around it.
Plone Server Requirements
First of all, let’s clarify the resources needed to install the Plone server in terms of the host VM and preinstalled software and libraries. We can find this information in the official Plone Installation Requirements.
Host VM Requirements
Plone supports nearly all operating systems, but for the purposes of our tutorial, let’s suppose that our Plone Server needs to run on a VM under Ubuntu.
As far as hardware requirements, the Plone server requires the following:
Minimum requirements:

A minimum of 256 MB RAM and 512 MB of swap space per Plone site
A minimum of 512 MB hard disk space

Recommended requirements:

2 GB or more of RAM per Plone site
40 GB or more of hard disk space

The Plone Server also requires the following to be preinstalled:

Python 2.7 (dev), built with support for expat (xml.parsers.expat), zlib and ssl.
Libraries:

libz (dev),
libjpeg (dev),
readline (dev),
libexpat (dev),
libssl or openssl (dev),
libxml2 >= 2.7.8 (dev),
libxslt >= 1.1.26 (dev).

The PloneServerApp will need to make sure that all of this is available.
Defining what the PloneServerApp does
Next we are going to define the deployment plan. The PloneServerApp executes all necessary steps in a completely automatic way to get the Plone Server working and to make it available outside of your OpenStack Cloud, so we need to know how to make that happen.
The PloneServerApp should follow these steps:

Ask the user to specify the host VM parameters, such as the number of CPUs, RAM, disk space, OS image file, etc. The app should then check that the requested VM meets all of the minimum hardware requirements for Plone.
Ask the user to provide values for the mandatory and optional Plone Server installation parameters.
Spawn a single host VM, according to the user's chosen VM flavor.
Install the Plone Server and all of its required software and libraries on the spawned host VM. We'll have the PloneServerApp do this by launching an installation script (runPloneDeploy.sh).

Let's start at the bottom and make sure we have a working runPloneDeploy.sh script; we can then look at incorporating it into the PloneServerApp.
Creating and debugging a script that fully deploys the Plone Server on a single VM
We'll need to build and test our script on an Ubuntu machine; if you don't have one handy, go ahead and deploy one in your new OpenStack cluster. (When we're done debugging, you can terminate it to clean up the mess.)
Our runPloneDeploy.sh will be based on the Universal Plone UNIX Installer. You can get more details about it in the official Plone Installation Documentation, but the easiest way is to follow these steps:

Download the latest version of Plone:
$ wget --no-check-certificate https://launchpad.net/plone/5.0/5.0.4/+download/Plone-5.0.4-UnifiedInstaller.tgz

Unzip the archive:
$ tar -xf Plone-5.0.4-UnifiedInstaller.tgz
Go to the folder containing the installation script…
$ cd Plone-5.0.4-UnifiedInstaller

…and see all installation options provided by the Universal UNIX Plone Installer:
$ ./install.sh --help

The Universal UNIX Installer lets you choose an installation mode:

a standalone mode, in which a single Zope web application server will be installed, or
a ZEO cluster mode, in which a ZEO Server and Zope instances will be installed.

It also lets you set several optional installation parameters. If you don’t set these, default values will be used.
In this tutorial, let’s choose standalone installation mode and make it possible to configure its most significant parameters:

administrative user password
top-level path on the host VM where the Plone Server will be installed
TCP port on which the Plone site will be available from outside the VM (and outside your OpenStack cloud)

Now, if we were installing Plone manually, we would feed these values to the script on the command line, or set them in configuration files. To automate the process, we're going to create a new script, runPloneDeploy.sh, which gets those values from the user, then feeds them to the installer programmatically.
So our script should be invoked as follows:
$ ./runPloneDeploy.sh <InstallationPath> <AdministrativePassword> <TCPPort>
For example:
$ ./runPloneDeploy.sh "/opt/plone/" "YetAnotherAdminPassword" "8080"
The runPloneDeploy.sh script
Let's start by taking a look at the final version of the install script, and then we'll pick it apart.
1. #!/bin/bash
2. #
3. #  Plone uses GPL version 2 as its license. As of summer 2009, there are
4. #  no active plans to upgrade to GPL version 3.
5. #  You may obtain a copy of the License at
6. #
7. #       http://www.gnu.org
8. #
9.
10. PL_PATH="$1"
11. PL_PASS="$2"
12. PL_PORT="$3"
13.
14. # Write log. Redirect stdout & stderr into log file:
15. exec &> /var/log/runPloneDeploy.log
16.
17. # echo "Installing all packages."
18. sudo apt-get update
19.
20. # Install the operating system software and libraries needed to run Plone:
21. sudo apt-get -y install python-setuptools python-dev build-essential libssl-dev libxml2-dev libxslt1-dev libbz2-dev libjpeg62-dev
22.
23. # Install optional system packages for the handling of PDF and Office files. Can be omitted:
24. sudo apt-get -y install libreadline-dev wv poppler-utils
25.
26. # Download the latest Plone unified installer:
27. wget --no-check-certificate https://launchpad.net/plone/5.0/5.0.4/+download/Plone-5.0.4-UnifiedInstaller.tgz
28.
29. # Unzip the latest Plone unified installer:
30. tar -xvf Plone-5.0.4-UnifiedInstaller.tgz
31. cd Plone-5.0.4-UnifiedInstaller
32.
33. # Set the port that Plone will listen to on available network interfaces. Editing “http-address” param in buildout.cfg file:
34. sed -i "s/^http-address = [0-9]*$/http-address = ${PL_PORT}/" buildout_templates/buildout.cfg
35.
36. # Run the Plone installer in standalone mode
37. ./install.sh --password="${PL_PASS}" --target="${PL_PATH}" standalone
38.
39. # Start Plone
40. cd "${PL_PATH}/zinstance"
41. bin/plonectl start
The first line states which shell should execute the various commands:
#!/bin/bash
Lines 2-8 are comments describing the license under which Plone is distributed:
#
#  Plone uses GPL version 2 as its license. As of summer 2009, there are
#  no active plans to upgrade to GPL version 3.
#  You may obtain a copy of the License at
#
#       http://www.gnu.org
#
The next three lines contain commands assigning input script arguments to their corresponding variables:
PL_PATH="$1"
PL_PASS="$2"
PL_PORT="$3"
It’s almost impossible to write a script with no errors, so Line 15 sets up logging. It redirects both stdout and stderr outputs of each command to a log-file for later analysis:
exec &> /var/log/runPloneDeploy.log
Lines 18-31 (inclusive) are taken straight from the Plone Installation Guide:
sudo apt-get update

# Install the operating system software and libraries needed to run Plone:
sudo apt-get -y install python-setuptools python-dev build-essential libssl-dev libxml2-dev libxslt1-dev libbz2-dev libjpeg62-dev

# Install optional system packages for the handling of PDF and Office files. Can be omitted:
sudo apt-get -y install libreadline-dev wv poppler-utils

# Download the latest Plone unified installer:
wget --no-check-certificate https://launchpad.net/plone/5.0/5.0.4/+download/Plone-5.0.4-UnifiedInstaller.tgz

# Unzip the latest Plone unified installer:
tar -xvf Plone-5.0.4-UnifiedInstaller.tgz
cd Plone-5.0.4-UnifiedInstaller
Unfortunately, the Unified UNIX Installer doesn’t give us the ability to configure the TCP port as an argument of the install.sh script, so we need to edit buildout.cfg before running the main install.sh script.
At line 34 we set the desired port using a sed command:
sed -i "s/^http-address = [0-9]*$/http-address = ${PL_PORT}/" buildout_templates/buildout.cfg
Then at line 37 we launch the Plone Server installation in standalone mode, passing in the other two parameters:
./install.sh --password="${PL_PASS}" --target="${PL_PATH}" standalone
After setup is done, on line 40, we change to the directory where Plone was installed:
cd "${PL_PATH}/zinstance"
And finally, the last action, on line 41, is to launch the Plone service:
bin/plonectl start
Also, please don’t forget to leave comments before every executed command in order to make your script easy to read and understand. (This is especially important if you'll be distributing your app.)
Run the deployment script
Check your script, then spawn a standalone VM with an appropriate OS (in our case, Ubuntu 14.04) and execute the runPloneDeploy.sh script to test and debug it. (Make sure to set it as executable, and if necessary, to run it as root or using sudo!)
You'll use the same format we discussed earlier:
$ ./runPloneDeploy.sh <InstallationPath> <AdministrativePassword> <TCPPort>
For example:
$ ./runPloneDeploy.sh "/opt/plone/" "YetAnotherAdminPassword" "8080"
Once the script is finished, check the outcome:

Find where Plone Server was installed on your VM using the find command, or by checking the directory you specified on the command line.
Try to visit the address http://127.0.0.1:[Port], where [Port] is the TCP port you passed as an argument to the runPloneDeploy.sh script.
Try to log in to Plone using the "admin" username and the [Password] you passed as an argument to the runPloneDeploy.sh script.

If something doesn’t seem to be right, check the runPloneDeploy.log file for errors.
As you can see, our script has a pretty small number of lines, but it really does the whole installation on a single VM. Undoubtedly, there are several ways in which you can improve it, like smarter error handling (sketched below), more customization options, or enabling Plone autostart. It’s all up to you.
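For instance, a few extra lines near the top of the script would make failures louder and earlier. This is only a sketch of the error-handling idea, not part of the tested script above:
#!/bin/bash
# Stop at the first failed command or unset variable:
set -euo pipefail
# Record where a failure happened before exiting:
trap 'echo "Deployment failed at line $LINENO" >> /var/log/runPloneDeploy.log' ERR
# Fail early if the expected arguments are missing:
if [ "$#" -ne 3 ]; then
  echo "Usage: $0 <InstallationPath> <AdministrativePassword> <TCPPort>" >&2
  exit 1
fi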
In part 4, we'll turn this script into an actual Murano App.
The post Develop Cloud Applications for OpenStack on Murano, Part 3: The application, part 1: Understanding Plone deployment appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis

How To Dockerize Vendor Apps like Confluence

Docker Datacenter customer Shawn Bower of Cornell University recently shared their experiences in containerizing Confluence as the start of their Docker journey.
Through that project they were able to demonstrate a 10X savings in application maintenance, reduce the time to build a disaster recovery plan from days to 30 minutes, and improve the security profile of their Confluence deployment. This change allowed the Cloudification team that Shawn leads to start spending the majority of their time helping Cornellians use technology to be innovative.
Since the original blog was posted, there have been a lot of requests for the pragmatic details of how Cornell actually did this project. In the post below, Shawn provides detailed instructions on how Confluence is containerized and how the Docker workflow is integrated with Puppet.

Written by Shawn Bower
As we started our journey to move Confluence to the cloud using Docker, we were emboldened by the following post from Atlassian. We use many of the Atlassian products and love how well integrated they are. In this post I will walk you through the process we used to get Confluence into a container and running.
First we needed to craft a Dockerfile. At Cornell we used image inheritance, which enables our automated patching and security scanning process. We start with the canonical ubuntu image: https://hub.docker.com/_/ubuntu/ and then build on defaults used here at Cornell. Our base image is available publicly on GitHub here: https://github.com/CU-CommunityApps/docker-base.
Let’s take a look at the Dockerfile.
FROM ubuntu:14.04

# File Author / Maintainer
MAINTAINER Shawn Bower <my email address>

# Install.
# (One package name was garbled in the original post; clamav-daemon is assumed below.)
RUN apt-get update && apt-get install --no-install-recommends -y \
    build-essential \
    curl \
    git \
    unzip \
    vim \
    wget \
    ruby \
    ruby-dev \
    clamav-daemon \
    openssh-client && \
    rm -rf /var/lib/apt/lists/*

RUN rm /etc/localtime
RUN ln -s /usr/share/zoneinfo/America/New_York /etc/localtime

# Clamav stuff
RUN freshclam -v && \
    mkdir /var/run/clamav && \
    chown clamav:clamav /var/run/clamav && \
    chmod 750 /var/run/clamav

COPY conf/clamd.conf /etc/clamav/clamd.conf

RUN echo "gem: --no-ri --no-rdoc" > ~/.gemrc && \
    gem install json_pure -v 1.8.1 && \
    gem install puppet -v 3.7.5 && \
    gem install librarian-puppet -v 2.1.0 && \
    gem install hiera-eyaml -v 2.1.0

# Set environment variables.
ENV HOME /root

# Define working directory.
WORKDIR /root

# Define default command.
CMD ["bash"]

At Cornell we use Puppet for configuration management, so we bake that directly into our base image. We do a few other things, like setting the timezone and installing the clamav agent, as we have some applications that use it for virus scanning. We have an automated project in Jenkins that pulls the latest ubuntu:14.04 image from Docker Hub and then builds this base image every weekend. Once the base image is built we tag it with ‘latest’ plus a timestamped tag and automatically push it to our local Docker Trusted Registry. This allows the brave to pull in patches continuously while allowing others to pin to a specific version until they are ready to migrate. From that image we create a base Java image which installs Oracle’s JVM.
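The shape of that weekend job is roughly the following; the registry hostname and date format here are placeholders, not Cornell's actual values:
# Rebuild the base image from the freshest upstream Ubuntu:
docker pull ubuntu:14.04
docker build -t dtr.example.edu/cs/base:latest .
# Tag with a timestamp as well, so teams can pin to a known build:
docker tag dtr.example.edu/cs/base:latest "dtr.example.edu/cs/base:$(date +%Y%m%d)"
# Push both tags to the local Docker Trusted Registry:
docker push dtr.example.edu/cs/base:latest
docker push "dtr.example.edu/cs/base:$(date +%Y%m%d)"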
From that image we create a base Java image, which installs Oracle's JVM. The Dockerfile is available here and explained below.
# Pull base image.
FROM <DTR repo path>/cs/base

# Install Java.
RUN apt-get update && \
    apt-get -y install software-properties-common && \
    add-apt-repository ppa:webupd8team/java -y && \
    apt-get update && \
    echo "oracle-java8-installer shared/accepted-oracle-license-v1-1 select true" | debconf-set-selections && \
    apt-get install -y oracle-java8-installer && \
    apt-get install -y oracle-java8-set-default && \
    rm -rf /var/lib/apt/lists/*

# Define commonly used JAVA_HOME variable.
ENV JAVA_HOME /usr/lib/jvm/java-8-oracle

# Define working directory.
WORKDIR /data

# Define default command.
CMD ["bash"]

The same automated patching process is followed for the Java image as for the base image. The Java image is automatically built after the base image and tagged accordingly, so there is a matching set of base and java8 images. Now that we have our Java image, we can layer on Confluence. Our Confluence repository is private, but the important bits of the Dockerfile are below.
FROM <DTR repo path>/cs/java8

# Configuration variables.
ENV CONF_HOME     /var/local/atlassian/confluence
ENV CONF_INSTALL  /usr/local/atlassian/confluence
ENV CONF_VERSION  5.8.18

ARG environment=local

# Install Atlassian Confluence and helper tools and set up the initial home
# directory structure.
RUN set -x \
    && apt-get update --quiet \
    && apt-get install --quiet --yes --no-install-recommends libtcnative-1 xmlstarlet \
    && apt-get clean \
    && mkdir -p                "${CONF_HOME}" \
    && chmod -R 700            "${CONF_HOME}" \
    && chown daemon:daemon     "${CONF_HOME}" \
    && mkdir -p                "${CONF_INSTALL}/conf" \
    && curl -Ls                "http://www.atlassian.com/software/confluence/downloads/binary/atlassian-confluence-${CONF_VERSION}.tar.gz" | tar -xz --directory "${CONF_INSTALL}" --strip-components=1 --no-same-owner \
    && chmod -R 700            "${CONF_INSTALL}/conf" \
    && chmod -R 700            "${CONF_INSTALL}/temp" \
    && chmod -R 700            "${CONF_INSTALL}/logs" \
    && chmod -R 700            "${CONF_INSTALL}/work" \
    && chown -R daemon:daemon  "${CONF_INSTALL}/conf" \
    && chown -R daemon:daemon  "${CONF_INSTALL}/temp" \
    && chown -R daemon:daemon  "${CONF_INSTALL}/logs" \
    && chown -R daemon:daemon  "${CONF_INSTALL}/work" \
    && echo -e                 "\nconfluence.home=${CONF_HOME}" >> "${CONF_INSTALL}/confluence/WEB-INF/classes/confluence-init.properties" \
    && xmlstarlet              ed --inplace \
       --delete                "Server/@debug" \
       --delete                "Server/Service/Connector/@debug" \
       --delete                "Server/Service/Connector/@useURIValidationHack" \
       --delete                "Server/Service/Connector/@minProcessors" \
       --delete                "Server/Service/Connector/@maxProcessors" \
       --delete                "Server/Service/Engine/@debug" \
       --delete                "Server/Service/Engine/Host/@debug" \
       --delete                "Server/Service/Engine/Host/Context/@debug" \
       "${CONF_INSTALL}/conf/server.xml"

# Bust the build cache from here on; the version file changes on every build.
ADD version /version

# Run Puppet.
WORKDIR /
COPY Puppetfile /
COPY keys/ /keys

RUN mkdir -p /root/.ssh/ && \
    cp /keys/id_rsa /root/.ssh/id_rsa && \
    chmod 400 /root/.ssh/id_rsa && \
    touch /root/.ssh/known_hosts && \
    ssh-keyscan github.com >> /root/.ssh/known_hosts && \
    librarian-puppet install && \
    puppet apply --modulepath=/modules --hiera_config=/modules/confluence/hiera.yaml \
      --environment=${environment} -e "class { 'confluence::app': }" && \
    rm -rf /modules && \
    rm -rf /Puppetfile* && \
    rm -rf /root/.ssh && \
    rm -rf /keys

USER daemon:daemon

# Expose default HTTP connector port.
EXPOSE 8080

VOLUME ["/usr/local/atlassian/confluence/logs"]

# Set the default working directory to the Confluence home directory.
WORKDIR /var/local/atlassian/confluence

# Run Atlassian Confluence as a foreground process by default.
CMD ["/usr/local/atlassian/confluence/bin/catalina.sh", "run"]

We bring down the install media from Atlassian, explode it into the install path and do a bit of cleanup on some of the XML configs. We rely on the Docker build cache for that part of the process because it does not change often. After the Confluence installation we bust the cache by adding a version file that changes each time the build runs in Jenkins. This ensures that Puppet will run in the container and configure the environment. Puppet lays down the environment-specific (dev, test, prod, etc.) configuration, selected via a Docker build argument called 'environment'. This allows us to bake everything needed to run Confluence into the image, so we can launch it on any machine with no extra configuration. Whether to store the configuration in the image or outside it is a contested subject for sure, but our decision was to store all configuration directly in the image. We believe this ensures the highest level of portability.
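Putting that together, a build-and-run pass might look like the sketch below; the image name and registry host are hypothetical stand-ins:

#!/bin/bash
set -e

# Regenerate the version file so 'ADD version /version' invalidates the cache
# and Puppet runs again during the build.
date +%s > version

# Bake the target environment's configuration into the image.
docker build --build-arg environment=prod -t dtr.example.edu/cs/confluence:5.8.18 .

# The image is self-contained, so launching it needs no extra configuration.
docker run -d -p 8080:8080 dtr.example.edu/cs/confluence:5.8.18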
Here are some general rules we follow with Docker:

Use base images that are a part of the automated patching
Follow Dockerfile best practices
Keep the base infrastructure in a Dockerfile, and environment specific information in Puppet
Build one process per container
Keep all components of the stack in one repository
If the stack has multiple components (e.g., Apache and Tomcat), they should live in the same repository
Use subdirectories for each component (see the layout sketch below)
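To illustrate the last two rules, a hypothetical stack repository with Apache and Tomcat components might be laid out like this (an illustration, not Cornell's actual repository):

confluence-stack/        # one repository for the whole stack
  apache/                # one component (and one process) per subdirectory
    Dockerfile
    conf/
  tomcat/
    Dockerfile
    Puppetfile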

We hope you enjoyed this post and that it gets you containerizing some vendor apps. This is just the beginning; we recently moved a legacy ColdFusion app into Docker, and almost anything can probably be containerized!


More Resources

Try Docker Datacenter free for 30 days
Learn more about Docker Datacenter
Read the blog post – It all started with containerizing Confluence at Cornell
Watch the webinar featuring Shawn and Docker at Cornell

The post How To Dockerize Vendor Apps like Confluence appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

How to Develop Cloud Applications for OpenStack using Murano, Part 2: Creating the Development Environment

The post How to Develop Cloud Applications for OpenStack using Murano, Part 2: Creating the Development Environment appeared first on Mirantis | The Pure Play OpenStack Company.
In part 1 of this series, we talked about what Murano is, and why you'd want to use it as a platform for developing end user applications. Now in part 2 we'll help you get set up to do the actual development.
All that you need to develop your Murano App is:

A text editor to edit source code. No special IDE is required; a plain text editor will do.
OpenStack with Murano. You will, of course, want to test your Murano App, so you'll need an environment in which to run it.

Since there's no special setup for the text editor, let's move on to getting a functional OpenStack cluster with Murano.
Where to find OpenStack Murano
If you don't already have access to a cloud with Murano deployed, that'll be your first task. (You'll know Murano is available if you see an "Applications" tab in Horizon.)
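If you have shell access to the cloud, you can also check from the command line; this sketch assumes the python-openstackclient is installed and your credentials file is sourced:

# Source your OpenStack credentials first, e.g. 'source openrc'.
# If Murano is deployed, its application-catalog service shows up here.
openstack service list | grep -i murano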
There are two possible ways to deploy OpenStack and Murano:

You can install vanilla OpenStack (raw upstream code) using the DevStack scripts, but you'll need to do some manual configuration for Murano. If you want to take this route, you can find out how to install DevStack with Murano here.
You can take the easy way out and use one of the ready-to-use commercial distros that come with Murano to install OpenStack.

If this is your first time, I recommend that you start with one of the ready-to-use commercial OpenStack distros, for several reasons:

A distro is more stable and has fewer bugs, so you won’t waste your time on OpenStack deployment troubleshooting.
A distro will let you see how a correctly configured OpenStack cloud should look.
A distro doesn’t require a deep dive into OpenStack deployment, which means you can fully concentrate on developing your Murano App.

I recommend that you install the Mirantis OpenStack distro (MOS), because deploying Murano with it couldn't be simpler; you just check one checkbox before deploying OpenStack, and that's all. (You can choose any other commercial distro, but most of them can't install Murano automatically. You can find out how to install Murano manually on an already-deployed OpenStack cloud here.)
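For reference, if you do take the DevStack route instead, enabling Murano is mostly a matter of one plugin line in DevStack's local.conf. The sketch below follows the upstream Murano DevStack instructions of this era; treat it as a starting point and verify it against the guide linked above:

# Minimal local.conf with the Murano DevStack plugin enabled.
cat > local.conf <<'EOF'
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
enable_plugin murano git://git.openstack.org/openstack/murano
EOF
./stack.sh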
Deploying OpenStack with Murano
You can get all of the details about Mirantis OpenStack in the official Mirantis OpenStack documentation, but here are the basic steps. You can follow them on Windows, Mac, or Linux; in my case, I'm using a laptop running Mac OS X with 8GB of RAM. We'll create virtual machines rather than trying to cobble together multiple pieces of hardware:

If you don't already have it installed, download and install Oracle VirtualBox. In this tutorial we'll use VirtualBox 5.1.2 for OS X (VirtualBox-5.1.2-108956-OSX.dmg).
Download and install the Oracle VM VirtualBox Extension Pack. (Make sure you use the right download for your version of VirtualBox. In my case, that means Oracle_VM_VirtualBox_Extension_Pack-5.1.2-108956.vbox-extpack.)
Download the Mirantis OpenStack image.
Download the Mirantis OpenStack VirtualBox scripts.
Unzip the script archive and copy the Mirantis OpenStack .iso image to the virtualbox/iso folder.
You can optionally edit config.sh if you want to set a custom password or edit network settings. There are plenty of detailed comments, so configuring the main parameters should not be a problem.
From the command line, launch the launch.sh script (see the sketch after this list).
Unless you've changed your configuration, when the scripts finish you'll have one Fuel Master Node VM and three slave VMs running in VirtualBox.
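On a Mac or Linux host, steps 5 through 7 look something like this; the archive and .iso file names below are placeholders for whatever release you actually downloaded:

unzip mirantis-virtualbox-scripts.zip        # placeholder archive name
cp MirantisOpenStack.iso virtualbox/iso/     # placeholder .iso name
cd virtualbox
vi config.sh                                 # optional: passwords, networks
./launch.sh                                  # creates the Fuel master + slave VMs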

Next we'll create the actual OpenStack cluster itself.
Creating the OpenStack cluster
At this point we've installed Fuel, but we haven't actually deployed the OpenStack cluster itself. To do that, follow these steps:

Point your browser to http://10.20.0.2:8000/ and log in as admin, using "admin" as your password (or the address and credentials you set in config.sh).

Once you've logged into the Fuel Master Node, you can deploy the OpenStack cloud and begin to explore it.

Click New OpenStack Environment.

Choose a name for your OpenStack Cloud and click Next:

Don’t change anything on the Compute tab, just click Next:

Don’t change anything on the Networking Setup tab, just click Next:

Don’t change anything on the Storage Backends tab, just click Next:

On the Additional Services tab tick the “Install Murano” checkbox and click Next:

On the Finish tab click Create:

From here you'll see the cluster's Dashboard. Click Add Nodes.

Here you can see that the launch script automatically created three VirtualBox VMs, and that Fuel has automatically discovered them:

The next step is to assign roles to your nodes. In this tutorial you need at least two nodes:

The Controller node – This node manages all of the operations within an OpenStack environment and provides an external API.
The Compute node – This node provides processing resources to accommodate virtual machine workloads; it creates, manages and terminates VM instances. The VMs, or instances, that you create in Murano run on the compute nodes.
Assign a controller role to a node with 2GB RAM.

Click Apply Changes and follow the same steps to add a 1GB compute node. The last node will not be needed in our case, so you can remove it and give its hardware resources to the other nodes later if you like.
Leave all of the other settings at their default values, but before you deploy, check your networking to make sure everything is configured properly. (Fuel configures networking automatically, but it's always good to check.) Click the Networks tab, then Connectivity Check in the left-hand pane. Click Verify Networks and wait a few moments.

Go to the Dashboard tab and click Deploy Changes to deploy your OpenStack Cloud.

When Fuel has finished, you can log into the Horizon UI, http://172.16.0.3/horizon by default, or click the link on the Dashboard tab. (You can also go to the Health Check tab and run tests to ensure that your OpenStack cloud was deployed properly.)

Log into Horizon using the credentials admin/admin (unless you changed them in the Fuel Settings tab).

As you can see by the Applications tab at the bottom of the left-hand pane, the Murano Application Catalog has been installed.
Tomorrow we'll talk about creating an application you can deploy with it.
The post How to Develop Cloud Applications for OpenStack using Murano, Part 2: Creating the Development Environment appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis

Introducing InfraKit, an open source toolkit for creating and managing declarative, self-healing infrastructure

Written by Bill Farner and David Chung
Docker’s mission is to build tools of mass innovation, starting with a programmable layer for the Internet that enables developers and IT operations teams to build and run distributed applications. As part of this mission, we have always endeavored to contribute software plumbing toolkits back to the community, following the UNIX philosophy of building small loosely coupled tools that are created to simply do one thing well. As Docker adoption has grown from 0 to 6 billion pulls, we have worked to address the needs of a growing and diverse set of distributed systems users. This work has led to the creation of many infrastructure plumbing components that have been contributed back to the community.

It started in 2014 with libcontainer and libnetwork. In 2015 we created runC and co-founded OCI with an industry-wide set of partners to provide a standard for container runtimes, a reference implementation based on libcontainer, and notary, which provides the basis for Docker Content Trust. From there we added containerd, a daemon to control runC, built for performance and density. Docker Engine was refactored so that Docker 1.11 is built on top of containerd and runC, providing benefits such as the ability to upgrade Docker Engine without restarting containers. In May 2016 at OSCON, we open sourced HyperKit, VPNKit and DataKit, the underlying components that enable us  to deeply integrate Docker for Mac and Windows with the native Operating System. Most recently,  in June, we unveiled SwarmKit, a toolkit for scheduling tasks and the basis for swarm mode, the built-in orchestration feature in Docker 1.12.
With SwarmKit, Docker introduced a declarative management toolkit for orchestrating containers. Today, we are doing the same for infrastructure. We are excited to announce InfraKit, a declarative management toolkit for orchestrating infrastructure. Solomon Hykes open sourced it today during his keynote address at LinuxCon Europe. You can find the source code at https://github.com/docker/infrakit.
 
InfraKit Origins
Back in June at DockerCon, we introduced Docker for AWS and Azure beta to simplify the IT operations experience in setting up Docker and to optimally leverage the native capabilities of the respective cloud environment. To do this, Docker provided deep integrations into these platforms’ capabilities for storage, networking and load balancing.
In the diagram below, the architecture for these versions includes platform-specific network and storage plugins, but also a new component specific to infrastructure management.
While working on Docker for AWS and Azure, we realized the need for a standard way to create and manage infrastructure state that was portable across any type of infrastructure, from different cloud providers to on-prem. One challenge is that each vendor has differentiated IP invested in how they handle certain aspects of their cloud infrastructure. It is not enough to just provision five servers; what IT ops teams need is a simple and consistent way to declare the number of servers, what size they should be, and what sort of base software configuration is required. And in the case of server failures (especially unplanned ones), that sudden change needs to be reconciled against the desired state to ensure that any required servers are re-provisioned with the necessary configuration. We started InfraKit to solve these problems and to provide the ability to create a self-healing infrastructure for distributed systems.
 
InfraKit Internals
InfraKit breaks infrastructure automation down into simple, pluggable components for declarative infrastructure state, active monitoring and automatic reconciliation of that state. These components work together to actively ensure the infrastructure state matches the user's specifications. InfraKit emphasizes primitives for building self-healing infrastructure but can also be used passively like conventional tools.
InfraKit at the core consists of a set of collaborating, active processes. These components are called plugins and different plugins can be written to meet different needs. These plugins are active controllers that can look at current infrastructure state and take action when the state diverges from user specification.
Initially, these plugins are implemented as servers listening on unix sockets and communicate using HTTP. By nature, the plugin interface definitions are language agnostic, so it's possible to implement a plugin in a language other than Go. Plugins can be packaged and deployed differently, such as with Docker containers.
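For illustration only, you could talk to a running plugin's socket directly with curl (7.40 or later); the socket path and endpoint below are illustrative assumptions rather than a documented API:

# curl can speak HTTP over a unix socket; the path and endpoint shown here
# are assumptions for a locally running instance plugin.
curl --unix-socket ~/.infrakit/plugins/instance-vagrant http://plugin/rpc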
Plugins are the active components that provide the behavior for the primitives InfraKit supports: groups, instances, and flavors.
Groups
When managing infrastructure like computing clusters, groups make a good abstraction, and working with groups is easier than managing individual instances. For example, a group can be made up of a collection of machines as individual instances. The machines in a group can have identical configurations (replicas, or so-called "cattle"). They can also have slightly different configurations and properties like identity, ordering, and persistent storage (as members of a quorum, or so-called "pets").
Instances
Instances are members of a group. An instance plugin manages physical resource instances. It knows only about individual instances and nothing about groups. An instance is technically defined by the plugin and need not be a physical machine at all. As part of the toolkit, we have included example instance plugins for Vagrant and Terraform. These examples show that it's easy to develop plugins. They are also examples of how InfraKit can play well with existing system management tools while extending their capabilities with active management. We envision more plugins in the future – for example, plugins for AWS and Azure.
Flavors
Flavors help distinguish the members of one group from another by describing how those members should be treated. A flavor plugin can be thought of as defining what runs on an instance. It is responsible for configuring the physical instance and for providing health checks in an application-aware way. It is also what gives member instances properties like identity and ordering when they require special handling. Examples of flavor plugins include plain servers, ZooKeeper ensemble members, and Docker swarm mode managers.
By separating the provisioning of physical instances and the configuration of applications into Instance and Flavor plugins, application vendors can directly develop a Flavor plugin, for example for MySQL, that can work with a wide array of instance plugins.
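To make the split concrete, here is a rough sketch of what a group declaration might look like; the field names are illustrative rather than authoritative, so consult the repository README for the actual schema:

# A hypothetical group of ten identical "cattle" machines: the instance
# plugin provisions them, the flavor plugin decides what runs on them.
cat > cattle-group.json <<'EOF'
{
  "ID": "cattle",
  "Properties": {
    "Allocation": { "Size": 10 },
    "Instance": {
      "Plugin": "instance-vagrant",
      "Properties": { "Box": "ubuntu/trusty64" }
    },
    "Flavor": {
      "Plugin": "flavor-vanilla",
      "Properties": { "Init": ["apt-get update -y"] }
    }
  }
}
EOF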
Active Monitoring and Automatic Reconciliation
The active self-healing aspect of InfraKit sets it apart from existing infrastructure management solutions, and we hope it will help our industry build more resilient and self-healing systems. The InfraKit plugins themselves continuously monitor at the group, instance and flavor level for any drift in configuration and automatically correct it without any manual intervention.

The group plugin checks the size and overall health of the group and decides on strategies for updating.
The instance plugin monitors for the physical presence of resources.
The flavor plugin can make additional determinations beyond the presence of the resource. For example, the swarm mode flavor plugin would check not only that a swarm member node is up, but also that the node is a member of the cluster. This provides an application-specific meaning to a node's "health."

This active monitoring and automatic reconciliation brings a new level of reliability for distributed systems.
The diagram below shows an example of how InfraKit can be used. There are three groups defined: one for a set of stateless cattle instances, one for a set of stateful and uniquely named pet instances, and one for the InfraKit manager instances themselves. Each group is monitored for its declared infrastructure state and reconciled independently of the other groups. For example, if one of the nodes (blue and yellow) in the cattle group goes down, a new one will be started to maintain the desired size. When the leader host (M2) running InfraKit goes down, a new leader will be elected (from the standbys M1 and M3). This new leader will go into action by starting up a new member to join the quorum, ensuring the availability and desired size of the group.

InfraKit, Docker and Community
InfraKit was born out of our engineering efforts around Docker for AWS and Azure, and future versions will see further integration of InfraKit into Docker and those environments, continuing the path of building Docker from a set of reusable components.
As the diagram below shows, Docker Engine is already made up of a number of the infrastructure plumbing components mentioned earlier. The components are not only available separately to the community, but are integrated together as the Docker Engine. In a future release, InfraKit will also become part of the Docker Engine.
With community participation, we aim to evolve InfraKit into exciting new areas beyond managing nodes in a cluster.  There’s much work ahead of us to build this into a cohesive framework for managing infrastructure resources, physical, virtual or containerized, from cluster nodes to networks to load balancers and storage volumes.
We are excited to open source InfraKit and invite the community to participate in this project:

Help define and implement new and interesting plugins
Instance plugins to support different infrastructure providers
Flavor plugins to support a variety of systems like etcd or mysql clusters
Group controller plugins like metrics-driven auto scaling and more
Help define interfaces and implement new infrastructure resource types for things like load balancers, networks and storage volume provisioners

Check out the InfraKit repository README for more info, a quick tutorial, and to start experimenting – from plain files to Terraform integration to building a ZooKeeper ensemble. Have a look, explore, and send us a PR or open an issue with your ideas!


More Resources:

Check out all the Infrastructure Plumbing projects
Sign up for Docker for AWS or Docker for Azure
Try Docker today

The post Introducing InfraKit, an open source toolkit for creating and managing declarative, self-healing infrastructure appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Mirantis at EDGE 2016 – Unlocked Private Clouds on IBM Power8

The post Mirantis at EDGE 2016 – Unlocked Private Clouds on IBM Power8 appeared first on Mirantis | The Pure Play OpenStack Company.
On September 22, Mirantis' Senior Technical Director, Greg Elkinbard, spoke at IBM's Edge 2016 IT infrastructure conference in Las Vegas. His short talk described Mirantis' mission: to create clouds using OpenStack and Kubernetes under a "Build, Operate, Transfer" model. He enumerated some of the benefits Mirantis customers like Volkswagen are gaining from their large-scale clouds, including more-engaged developers, faster release cycles, platform delivery times reduced from months to hours, and significantly lower costs.
Greg wrapped up the session with a progress report on IBM and Mirantis' recent collaboration to produce a reference architecture for compute node placement on IBM Power8 systems: a solution aimed at lowering costs and raising performance for databases and similarly demanding workloads. Mirantis is also validating Murano applications and other methods for deploying a wide range of apps on IBM Power hardware, including important container orchestration frameworks, NFV apps, Big Data tools, web servers and proxies, popular databases, and developer toolchain elements.

Mirantis IBM Partner Page: https://www.mirantis.com/partners/ibm/
For more on IBM Power8 servers, please visit http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?htmlfid=POB03046USEN

The post Mirantis at EDGE 2016 – Unlocked Private Clouds on IBM Power8 appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis