Full Stack Automation with Ansible and OpenStack

Ansible offers great flexibility. Because of this, the community has figured out many useful ways to leverage Ansible modules and playbook structures to automate frequent operations on multiple layers, including using it with OpenStack.
In this blog we'll cover the many use cases for Ansible, the most popular automation software, with OpenStack, the most popular cloud infrastructure software. We'll help you understand how and why you should use Ansible to make your life easier, in what we like to call Full-Stack Automation.

Let's begin by analyzing the layers of Full-Stack Automation, shown in the diagram above. At the bottom, we have the hardware resources (servers, storage area networks, and networking gear). Above that is the operating system (Linux or Windows). On the Linux side, you can install OpenStack to abstract all of your datacenter resources and offer a software-defined version of your compute, network, and storage resources. On top of OpenStack are the tenant-defined services needed to create the virtual machines where the applications will reside. Finally, you have to manage the operating system (Linux or Windows) to deploy the actual applications and workloads that you really care about (databases, web servers, mobile application backends, etc.). If you use containers (like Docker or rkt), you'll package those applications in images that will be deployed on top of your guest OS. In addition, some languages introduce the concept of application servers, which adds another layer (e.g., J2EE).
Ansible management possibilities
With Ansible, you have a module to manage every layer. This is true even for the networking hardware, although technically speaking it’s for the network operating system, like IOS or NXOS (see the full list of Ansible network modules here).

General interaction with the Operating System: install packages, change or enforce file content or permissions, manage services, create/remove users and groups, etc. (see the short playbook sketch after this list)

Linux and BSD via SSH (the first and most popular use-case)
Windows via PowerShell (since 1.7)

IaaS Software: install the IaaS software and its dependencies (databases, load balancers, configuration files, services, and other helper tools)

OpenStack-ansible installer https://github.com/openstack/openstack-ansible, as used in some upstream-based OpenStack distributions from other vendors. Note that the Red Hat OpenStack Platform does not use Ansible, but Heat and Puppet. Future releases will leverage Ansible to perform certain validations and to help operators perform their updates and upgrades.
CloudStack installer is also an Ansible-based project.

Virtual Resources: define the resource, like a Virtual Machine or Instance, in terms of how big it is, who can access it, what content should it have, what security profile and network access it requires, etc.

OpenStack Ansible modules (since Ansible 2.0): for instance, Nova or Neutron. They are based on the OpenStack "shade" library, a common library used by all of the OpenStack CLI tools.
Ansible can also manage not-so-virtual network resources via netconf (since 2.2) https://docs.ansible.com/ansible/netconf_config_module.html
VMware vSphere Ansible modules
RHV, oVirt, or libvirt for bare KVM
There are also modules for public cloud providers such as Amazon Web Services, Google Cloud, Azure, and DigitalOcean

Guest OS: the same components as described for the Host OS. But how do you discover how many Guests you have?

Ansible Dynamic Inventory will dynamically interrogate the IaaS/VM layer and discover which instances are currently available. It detects their hostname, IPs, and security settings and replaces the static Inventory concept. This is especially useful if you leverage Auto Scaling Groups in your cloud infrastructure, which makes your list of instances very variable over time.

Containers Engine (optional)

Docker: Note that the old Docker module is deprecated in favor of a new, native version in Ansible 2.1.
Kubernetes
Atomic Host

Tenant Software: databases, web servers, load balancers, data processing engines, etc.

Ansible Galaxy is the community repository of roles for deploying the most popular software, and it's the result of the contributions of thousands of community members.
You can also manage web Infrastructure such as JBoss, allowing Ansible to define how an app is deployed in the application server.
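To make the OS-level layer concrete (the playbook sketch promised above), here is a minimal example that installs and starts a web server on a group of Linux hosts; the group name "webservers" and the package "httpd" are purely illustrative:
- hosts: webservers
  become: yes
  tasks:
    - name: Install the web server package
      yum:
        name: httpd
        state: present
    - name: Make sure the service is running and starts at boot
      service:
        name: httpd
        state: started
        enabled: yes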

How to install the latest Ansible on a Python virtual environment
As you have seen, some features are only available with very recent Ansible versions, like 2.2. However, your OS may not ship it yet. For example, RHEL 7 and CentOS 7 only come with Ansible 1.9.
Ansible is a command-line tool written in Python, and a system can host multiple Python environments side by side. If you don't need the security hardening that your distribution's Ansible package provides, you may want to try the latest upstream version instead.
However, as with any other Python software, there are many dependencies, and it's dangerous to mix untested upstream libraries with your system-provided ones. Those libraries may be shared and used in other parts of your system, and untested newer libraries can break other applications. The quick solution is to install the latest Ansible version, with all of its dependencies, in an isolated folder under your non-privileged user account. This is called a Python virtual environment (virtualenv), and if done properly, it allows you to safely play with the latest Ansible modules for full-stack orchestration. Of course, we do not recommend this practice for any production use case; consider it a learning exercise to improve your DevOps skills.
1) Install prerequisites (pip, virtualenv)
The only system-wide Python library we need here is "virtualenvwrapper". Other than that, you should not run "sudo pip install", as it will replace system Python libraries with untested, newer ones; virtualenvwrapper is the only exception we make here. The virtual environment method is a good mechanism for installing and testing newer Python modules in your non-privileged user account.
$ sudo yum install python-pip
$ sudo pip install virtualenvwrapper
$ sudo yum install python-heatclient python-openstackclient python2-shade
2) Setup a fresh virtualenv, where we’ll install the latest Ansible release
First, create a directory to hold the virtual environments.
$ mkdir $HOME/.virtualenvs
Then, add a line like "export WORKON_HOME=$HOME/.virtualenvs" to your .bashrc. Also, add a line like "source /usr/bin/virtualenvwrapper.sh" to your .bashrc. Now source it.
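For example, you can append both lines like this (assuming virtualenvwrapper.sh was installed to /usr/bin by the RPM above; adjust the path if your distribution puts it elsewhere):
$ echo 'export WORKON_HOME=$HOME/.virtualenvs' >> ~/.bashrc
$ echo 'source /usr/bin/virtualenvwrapper.sh' >> ~/.bashrc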
$ source ~/.bashrc
At this point, the wrapper links are created, but only the first time you run it. To see the list of environments, just execute "workon". Next, we'll create a new virtualenv named "ansible2", which will be automatically enabled, with access to the default RPM-installed packages.
$ workon
$ mkvirtualenv ansible2 --system-site-packages
To exit the virtualenv, type "deactivate", and to re-enter again, use "workon".
$ deactivate
$ workon ansible2
3) Enter the new virtualenv and install Ansible2 via PIP (as regular user, not root)
Notice that your shell prompt has changed: it now shows the virtualenv name in parentheses.
(ansible2) $ pip install ansible
The above command will install Ansible 2 and its dependencies, leveraging your system-wide RPM-provided Python packages (thanks to the --system-site-packages flag we used earlier). Alternatively, if you want to try the development branch:
(ansible2) $ pip install git+git://github.com/ansible/ansible.git@devel
(ansible2) $ ansible --version
If you ever want to remove the virtualenv, along with all of its dependencies, just use "rmvirtualenv ansible2".
4) Install OpenStack client dependencies
The first command below ensures you have the latest stable OpenStack API clients, although you can also try a pip install to get the latest CLI. The second command provides the latest Python "shade" library so Ansible can talk to the latest OpenStack API versions, regardless of the CLI tool.
(ansible2) $ sudo yum install python-openstackclient python-heatclient
(ansible2) $ pip install shade --upgrade
5) Test it
(ansible2) $ ansible -m ping localhost

localhost | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
NOTE: you cannot run this version of Ansible outside the virtualenv, so always remember to run "workon ansible2" before using it.

Using Ansible to orchestrate OpenStack
Our savvy readers will notice that using Ansible to orchestrate OpenStack seems to ignore the fact that Heat is the official orchestration module for OpenStack. Indeed, an Ansible playbook will do almost the same as a HOT template (HOT is the YAML-based syntax for Heat, an evolution of AWS CloudFormation). However, there are many DevOps professionals out there who don't want to learn a new syntax, and who are already consolidating all of their processes for their hybrid infrastructure.
The Ansible team recognized that and leveraged Shade, the official library from the OpenStack project, to build interfaces to the OpenStack APIs. At the time of this writing, Ansible 2.2 includes modules to call the following APIs:

Keystone: users, groups, roles, projects
Nova: servers, keypairs, security-groups, flavors
Neutron: ports, network, subnets, routers, floating IPs
Ironic: nodes, introspection
Swift Objects
Cinder volumes
Glance images

From an Ansible perspective, it needs to run on a host where it can load the OpenStack credentials and open an HTTP connection to the OpenStack APIs. If that host is your machine (localhost), then it will work locally, load the Keystone credentials, and start talking to OpenStack.
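If you would rather not export OS_* environment variables (for example by sourcing a keystonerc file), shade can also read a clouds.yaml file. Here is a minimal sketch with placeholder values only:
# ~/.config/openstack/clouds.yaml  (placeholder values; adjust for your cloud)
clouds:
  mycloud:
    auth:
      auth_url: http://192.0.2.10:5000/v2.0
      username: admin
      password: secret
      project_name: admin
    region_name: RegionOne
With that in place, each os_* task can select these credentials with the optional cloud: mycloud parameter, or you can simply keep using the environment variables.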
Let’s see an example. We’ll use Ansible OpenStack modules to connect to Nova and start a small instance with the Cirros image. But we’ll first upload the latest Cirros image, if not present. We’ll use an existing SSH key from our current user. You can download this playbook from this github link.

# Setup according to the blog post "Full Stack Automation with Ansible and OpenStack".
# Execute with "ansible-playbook ansible-openstack-blogpost.yml -c local -vv"
- name: Execute the Blogpost demo tasks
  hosts: localhost
  tasks:
    - name: Download cirros image
      get_url:
        url: http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
        dest: /tmp/cirros-0.3.4-x86_64-disk.img

    - name: Upload cirros image to openstack
      os_image:
        name: cirros
        container_format: bare
        disk_format: qcow2
        state: present
        filename: /tmp/cirros-0.3.4-x86_64-disk.img

    - name: Create new keypair from current user's default SSH key
      os_keypair:
        state: present
        name: ansible_key
        public_key_file: "{{ '~' | expanduser }}/.ssh/id_rsa.pub"

    - name: Create the test network
      os_network:
        state: present
        name: testnet
        external: False
        shared: False
        provider_network_type: vlan            # key names restored; these two lines lost their keys in the original post
        provider_physical_network: datacentre
      register: testnet_network

    - name: Create the test subnet
      os_subnet:
        state: present
        network_name: "{{ testnet_network.id }}"
        name: testnet_sub
        ip_version: 4
        cidr: 192.168.0.0/24
        gateway_ip: 192.168.0.1
        enable_dhcp: yes
        dns_nameservers:
          - 8.8.8.8
      register: testnet_sub

    - name: Create the test router
      ignore_errors: yes # for some reason, re-running this task gives errors
      os_router:
        state: present
        name: testnet_router
        network: nova
        external_fixed_ips:
          - subnet: nova
        interfaces:
          - testnet_sub

    - name: Create a new security group
      os_security_group:
        state: present
        name: secgr

    - name: Create a new security group rule allowing any ICMP
      os_security_group_rule:
        security_group: secgr
        protocol: icmp
        remote_ip_prefix: 0.0.0.0/0

    - name: Create a new security group rule allowing any SSH connection
      os_security_group_rule:
        security_group: secgr
        protocol: tcp
        port_range_min: 22
        port_range_max: 22
        remote_ip_prefix: 0.0.0.0/0

    - name: Create server instance
      os_server:
        state: present
        name: testServer
        image: cirros
        flavor: m1.small
        security_groups: secgr
        key_name: ansible_key
        nics:
          - net-id: "{{ testnet_network.id }}"
      register: testServer

    - name: Show Server's IP
      debug: var=testServer.openstack.public_v4

After the execution, we see the IP of the instance. We write it down, and we can now use Ansible to connect to it via SSH. We assume Nova's default network allows connections from our workstation, in our case via a provider network.
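A quick sketch of that connection, assuming the Cirros image's default user and the key we registered earlier (replace the address with the IP printed by the playbook):
$ ssh -i ~/.ssh/id_rsa cirros@<instance IP>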

Comparison with OpenStack Heat
Using Ansible instead of Heat has its advantages and disadvantages. For instance, with Ansible you must keep track of the resources you create, and manually delete them (in reverse order) once you are done with them. This is especially tricky with Neutron ports, floating IPs, and routers. With Heat, you just delete the stack, and all the created resources will be properly deleted.
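As a rough illustration of that reverse-order cleanup, a teardown play could look like the sketch below (the names must match what you created above, and ports or floating IPs may need extra handling):
- name: Tear down the demo resources
  hosts: localhost
  tasks:
    - name: Delete the server
      os_server:
        state: absent
        name: testServer
    - name: Delete the router
      os_router:
        state: absent
        name: testnet_router
    - name: Delete the subnet
      os_subnet:
        state: absent
        name: testnet_sub
    - name: Delete the network
      os_network:
        state: absent
        name: testnet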
Compare the Ansible playbook with a similar (but not equivalent) Heat template, which can be downloaded from this github gist:
heat_template_version: 2015-04-30

description: >
  Node template. Launch with "openstack stack create --parameter public_network=nova
  --parameter ctrl_network=default --parameter secgroups=default --parameter image=cirros
  --parameter key=ansible_key --parameter flavor=m1.small --parameter name=myserver
  -t openstack-blogpost-heat.yaml testStack"

parameters:
  name:
    type: string
    description: Name of node
  key:
    type: string
    description: Name of keypair to assign to server
  secgroups:
    type: comma_delimited_list
    description: List of security groups to assign to server
  image:
    type: string
    description: Name of image to use for servers
  flavor:
    type: string
    description: Flavor to use for server
  availability_zone:
    type: string
    description: Availability zone for server
    default: nova
  ctrl_network:
    type: string
    label: Private network name or ID
    description: Network to attach instance to.
  public_network:
    type: string
    label: Public network name or ID
    description: Network to attach instance to.

resources:

  ctrl_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: ctrl_network }
      security_groups: { get_param: secgroups }

  floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: { get_param: public_network }
      port_id: { get_resource: ctrl_port }

  instance:
    type: OS::Nova::Server
    properties:
      name: { get_param: name }
      image: { get_param: image }
      flavor: { get_param: flavor }
      availability_zone: { get_param: availability_zone }
      key_name: { get_param: key }
      networks:
        - port: { get_resource: ctrl_port }

Combining Dynamic Inventory with the OpenStack modules
Now let's see what happens when we create many instances, but forget to write down their IPs. The perfect example of leveraging Dynamic Inventory for OpenStack is to learn the current state of our tenant's virtualized resources and gather all server IPs, so we can check their kernel version, for instance. This is what Ansible Tower does transparently: it periodically runs the inventory and collects the updated list of OpenStack servers to manage.
Before you execute this, make sure you don't have stale clouds.yaml files in ~/.config/openstack, /etc/openstack, or /etc/ansible. The Dynamic Inventory script will look for environment variables first (OS_*), and then it will search for those files.
First, ensure you are using the latest Ansible version:

$ workon ansible2
$ wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/openstack.py
$ chmod +x openstack.py
$ ansible -i openstack.py all -m ping
bdef428a-10fe-4af7-ae70-c78a0aba7a42 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
343c6e76-b3f6-4e78-ae59-a7cf31f8cc44 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
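From here, the kernel-version check mentioned above is a one-liner: the setup module gathers facts from every discovered instance (this assumes the instances are reachable over SSH with an appropriate user and key):
$ ansible -i openstack.py all -m setup -a 'filter=ansible_kernel'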
You can have fun looking at all the information that the inventory script returns if you execute it as follows:
$ ./openstack.py --list
{
  "": [
    "777a3e02-a7e1-4bec-86b7-47ae7679d214",
    "bdef428a-10fe-4af7-ae70-c78a0aba7a42",
    "0a0c2f0e-4ac6-422d-8d9b-12b7a87daa72",
    "9d4ee5c0-b53d-4cdb-be0f-c77fece0a8b9",
    "343c6e76-b3f6-4e78-ae59-a7cf31f8cc44"
  ],
  "_meta": {
    "hostvars": {
      "0a0c2f0e-4ac6-422d-8d9b-12b7a87daa72": {
        "ansible_ssh_host": "172.31.1.42",
        "openstack": {
          "HUMAN_ID": true,
          "NAME_ATTR": "name",
          "OS-DCF:diskConfig": "MANUAL",
          "OS-EXT-AZ:availability_zone": "nova",
          "OS-EXT-SRV-ATTR:host": "compute-0.localdomain",
          "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.localdomain",
          "OS-EXT-SRV-ATTR:instance_name": "instance-000003e7",
          "OS-EXT-STS:power_state": 1,
          "OS-EXT-STS:task_state": null,
          "OS-EXT-STS:vm_state": "active",
          "OS-SRV-USG:launched_at": "2016-10-10T21:13:24.000000",
          "OS-SRV-USG:terminated_at": null,
          "accessIPv4": "172.31.1.42",
          "accessIPv6": "",
(...)

Conclusion
Even though Heat is very useful, some people may prefer to use Ansible for their workload orchestration, as it offers a common language to define and automate the full stack of IT resources. I hope this article has provided you with a practical example and a very basic use case for using Ansible to launch OpenStack resources. If you are interested in trying Ansible and Ansible Tower, please visit https://www.ansible.com/openstack. A good starting point would be connecting Heat with Ansible Tower callbacks, as described in this other blog post.
Also, if you want to learn more about Red Hat OpenStack Platform, you'll find lots of valuable resources (including videos and whitepapers) on our website: https://www.redhat.com/en/technologies/linux-platforms/openstack-platform
 
Quelle: RedHat Stack

Mirantis OpenStack 9.1 – Continuing to Simplify the Day-2 Experience

The post Mirantis OpenStack 9.1 – Continuing to Simplify the Day-2 Experience appeared first on Mirantis | The Pure Play OpenStack Company.
Mirantis OpenStack 9.1 makes it easier for cloud operators to consume upstream innovation on a periodic basis, both for bug fixes and minor feature enhancements, and you can get access to this capability through an easy and reliable update mechanism. In addition, along with a number of additional features in Fuel, Mirantis OpenStack 9.1 simplifies the day-2, or post-deployment, experience for operators.
Improved Day-2 Operations
Streamline OpenStack updates
The prior mechanism of applying Maintenance Updates (MU) had several limitations. First, the MU script could only apply package updates to controller and compute nodes, and not to the Fuel Master itself. Next, the previous mechanism suffered from the inability to restart services automatically, and lacked integration with Fuel.
In 9.1, a new update mechanism has been introduced that uses Fuel’s internal Deployment Tasks to update the cloud and the Fuel Master. This new mechanism delivers the following:

Reliability: It is tested and verified as part of the Mirantis OpenStack release. This includes going through our automated CI/CD pipelines and extensive QA process.
Customizations: It provides users the ability to detect any customizations before applying an update to a cloud to enable operators to decide whether an update is safe to apply.
Automatic restart: It enables automatic restart of services so that changes can take effect. The prior mechanism required users to manually restart services.

Simplify Custom Deployment Tasks With a New User Interface
In Mirantis OpenStack 9.0, we introduced the ability to define custom deployment tasks to satisfy advanced lifecycle management requirements. Operators could customize configuration options, execute any command on any node, update packages etc. with deployment tasks. In the 9.1 release, you get access to a new Deployment Task user interface in Fuel that shows the deployment workflow history. The UI can also be used to manage deployment tasks.

Automate Deployment Tasks With Event-Driven Execution
Consider an example where you need to integrate third-party monitoring software. In that case, you would want to register a new node with the monitoring software as soon as it is deployed via Fuel. Items such as these can now be automated with 9.1, where a custom deployment task can be triggered by specific Fuel events.
Reduce Footprint With Targeted Diagnostic Snapshots
With prior releases, diagnostic snapshots continued to grow over time, consuming multiple GB of storage per node in just a few weeks. To solve this problem, 9.1 features targeted diagnostic snapshots that retrieve only the last N days of logs (configurable) for a specific set of nodes.
Enhanced Security
Mirantis OpenStack 9.1 includes a number of important security features:

SSH Brute Force protection on the Host OS
Basic DMZ Enablement to separate the API/Public Network from the Floating Network
RadosGW S3 API authentication through Keystone to enable the use of the same credentials for Ceph object storage APIs

The latest versions of StackLight and Murano are compatible with 9.1, so you will also be able to benefit from the latest features of the logging, monitoring and alerting (LMA) toolchain and application catalog and orchestration tool.
Because it's an update, installation of the Mirantis OpenStack 9.1 update package requires you to already have Mirantis OpenStack 9.0 installed, but then you're ready to go. All set? Then hit the 9.0 to 9.1 update instructions to get started.
The post Mirantis OpenStack 9.1 – Continuing to Simplify the Day-2 Experience appeared first on Mirantis | The Pure Play OpenStack Company.
Quelle: Mirantis

Announcing Docker Global Mentor Week 2016

Building on the success of the Docker Birthday Celebration and Training events earlier this year, we're excited to announce the Docker Global Mentor Week. This global event series aims to provide Docker training to both newcomers and intermediate Docker users. More advanced users will have the opportunity to get involved as mentors to further encourage connection and collaboration within the community.

The Docker Global Mentor Week is your opportunity to either learn Docker or help others learn Docker. Participants will work through self-paced labs that will be available through an online Learning Management System (LMS). We'll have different labs for beginners and intermediate users, developers and ops, and Linux or Windows users.
Are you an advanced Docker user?
We are recruiting a network of mentors to help guide learners as they work through the labs. Mentors will be invited to attend local events to help answer questions attendees may have while completing the self-paced beginner and intermediate labs. To help mentors prepare for their events, we'll be sharing the content of the labs and hosting a Q&A session with the Docker team before the start of the global mentor week.
 
Sign up as a Mentor!
 
With over 250 Docker Meetup groups worldwide, there is always an opportunity for collaboration and knowledge sharing. With the launch of Global Mentor Week, Docker is also introducing a Sister City program to help create and strengthen partnerships between local Docker communities which share similar challenges.
Docker NYC Organiser Jesse White talks about their collaboration with Docker London:
"Having been a part of the Docker community ecosystem from the beginning, it's thrilling for us at Docker NYC to see the community spread across the globe. As direct acknowledgment and support of the importance of always reaching out and working together, we're partnering with Docker London to capture the essence of what's great about Docker Global Mentor Week. We'll be creating a transatlantic, volunteer-based partnership to help get the word out, collaborate on and develop training materials, and to boost the recruitment of mentors. If we're lucky, we might get some international dial-in and mentorship at each event too!"
If you're part of a community group for a specific programming language, open source software projects, CS students at local universities, coding institutions or organizations promoting inclusion in the larger tech community and interested in learning about Docker, we'd love to partner with you. Please email us at meetups@docker.com for more information about next steps.
We're thrilled to announce that there are already 37 events scheduled around the world! Check out the list of confirmed events below to see if there is one happening near you. Make sure to check back as we'll be updating this list as more events are announced. Want to help us organize a Mentor Week training in your city? Email us at meetups@docker.com for more information!
 
Saturday, November 12th

New Delhi, India

Sunday, November 13th

Mumbai, India

Monday, November 14th

Auckland, New Zealand
London, United Kingdom
Mexico City, Mexico
Orange County, CA

Tuesday, November 15th

Atlanta, GA
Austin, TX
Brussels, Belgium
Denver, CO
Jakarta, Indonesia
Las Vegas, NV
Medan, Indonesia
Nice, France
Singapore, Singapore

Wednesday, November 16th

Århus, Denmark
Boston, MA
Dhahran, Saudi Arabia
Hamburg, Germany
Novosibirsk, Russia
San Francisco, CA
Santa Barbara, CA
Santa Clara, CA
Washington, D.C.
Rio de Janeiro, Brazil

Thursday, November 17th

Berlin, Germany
Budapest, Hungary
Glasgow, United Kingdom
Lima, Peru
Minneapolis, MN
Oslo, Norway
Richmond, VA

Friday, November 18th

Kanpur, India
Tokyo, Japan

Saturday, November 19th

Ha Noi, Vietnam
Mangaluru, India
Taipei, Taiwan

Excited about Docker Global Mentor Week? Let your community know!


The post Announcing Docker Global Mentor Week 2016 appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

53 new things to look for in OpenStack Newton

The post 53 new things to look for in OpenStack Newton appeared first on Mirantis | The Pure Play OpenStack Company.
OpenStack Newton, the technology's 14th release, shows just how far we've come: where we used to focus on basic things, such as supporting specific hypervisors or enabling basic SDN capabilities, now that's a given, and we're talking about how OpenStack has reached its goal of supporting cloud-native applications in all of their forms: virtual machines, containers, and bare metal.
There are hundreds of changes and new features in OpenStack Newton, and you can see some of the most important in our What's New in OpenStack Newton webinar. Meanwhile, as we do with each release, let's take a look at 53 things that are new in OpenStack Newton.
Compute (Nova)

Get me a network enables users to let OpenStack do the heavy lifting rather than having to understand the underlying networking setup.
A default policy means that users no longer have to provide a full policy file; instead they can provide just those rules that are different from the default.
Mutable config lets you change configuration options for a running Nova service without having to restart it.  (This option is available for a limited number of options, such as debugging, but the framework is in place for this to expand.)
Placement API gives you more visibility into and control over resources such as Resource providers, Inventories, Allocations and Usage records.
Cells v2, which enables you to segregate your data center into sections for easier manageability and scalability, has been revamped and is now feature-complete.

Network (Neutron)

802.1Q tagged VM connections (VLAN aware VMs) enables VNFs to target specific VMs.
The ability to create VMs without an IP address means you can create a VM with no IP address and specify complex networking later as a separate process.
Specific pools of external IP addresses let you optimize resource placement by controlling IP decisions.
OSProfiler support lets you find bottlenecks and troubleshoot interoperability issues.
No downtime API service upgrades

Storage (Cinder, Glance, Swift)
Cinder

Microversions let developers add new features that you can access without breaking the main version.
Rolling upgrades let you update to Newton without having to take down the entire cloud.
enabled_backends config option defines which backend types are available for volume creation.
Retype volumes from encrypted to not encrypted, and back again after creation.
Delete volumes with snapshots using the cascade feature rather than having to delete the snapshots first.
The Cinder backup service can now be scaled to multiple instances for better reliability and scalability.

Glance

Glare, the Glance Artifact Repository, provides the ability to store more than just images.
A trust concept for long-lived snapshots makes it possible to avoid errors on long-running operations.
The new restrictive default policy means that all operations are locked down unless you provide access, rather than the other way around.

Swift

Object versioning lets you keep multiple copies of an individual object, and choose whether to keep all versions, or just the most recent.
Object encryption provides some measure of confidentiality should your disk be separated from the cluster.
Concurrent bulk-deletes speed up operations.

Other core projects (Keystone, Horizon)
Keystone

Simplified configuration setup
PCI-DSS support for password configuration options
Credentials encrypted at rest

Horizon

You can now exercise more control over user operations with parameters such as IMAGES_ALLOW_LOCATION, TOKEN_DELETE_DISABLED, LAUNCH_INSTANCE_DEFAULTS
Horizon now works if only Keystone is deployed, making it possible to use Horizon to manage a Swift-only deployment.
Horizon now checks for Network IP availability rather than enabling users to set bad configurations.
Be more specific when setting up networking by restricting the CIDR range for a user private network, or specify a fixed IP or subnet when creating a port.
Manage Consistency Groups.

Containers (Magnum, Kolla, Kuryr)
Magnum

Magnum is now more about container orchestration engines (COEs) than containers, and can now deploy Swarm, Kubernetes, and Mesos.
The API service is now protected by SSL.
You can now use Kubernetes on bare metal.
Asynchronous cluster creation improves performance for complex operations.

Kolla

You can now use Kolla to deploy containerized OpenStack to bare metal.

Kuryr

Use Neutron networking capabilities in containers.
Nest VMs through integration with Magnum and Neutron.

Additional projects (Heat, Ceilometer, Fuel, Murano, Ironic, Community App Catalog, Mistral)
Heat

Use DNS resolution and integration with an external DNS.
Access external resources using the external_id attribute.

Ceilometer

New REST API that makes it possible to use services such as Gnocchi rather than just interacting with the database.
Magnum support.

FUEL

Deploy Fuel without having to use an ISO.
Improved life cycle management user experience, including Infrastructure as Code.
Container-based deployment possibilities.

Murano

Use the new Application Development Framework to build more complex applications.
Enable users to deploy your application across multiple regions for better reliability and scalability.
Specify that when resources are no longer needed, they should be deallocated.

Ironic

You can now have multiple nova-compute services using Ironic without causing duplicate entries.
Multi-tenant networking makes it possible for more than one tenant to use ironic without sharing network traffic.
Specify granular access restrictions to the REST API rather than just turning it off or on.

Community App Catalog

The Community App Catalog now uses Glare as its backend, making it possible to more easily store multiple application types.
Use the new v2 API to add and manage assets directly, rather than having to go through gerrit.
Add and manage applications via the Community App Catalog website.

Did we miss your favorite project or feature? Let us know what new features you're excited about in the comments.
The post 53 new things to look for in OpenStack Newton appeared first on Mirantis | The Pure Play OpenStack Company.
Quelle: Mirantis

Auto-remediation: making an Openstack cloud self-healing

The post Auto-remediation: making an Openstack cloud self-healing appeared first on Mirantis | The Pure Play OpenStack Company.
The bigger the OpenStack cloud you have, the bigger the operational challenges you will face. Things break: daemons die, logs fill up the disk, nodes have hardware issues, RabbitMQ clusters fall apart, databases get a split brain due to network outages. All of these problems require engineering time to create outage tickets, troubleshoot, and fix the problem, not to mention writing the RCA and a runbook on how to fix the same problem in the future.
Some of the outages will never happen again if you make the proper long-term fix to the environment, but others will rear their heads again and again. Finding an automated way to handle those issues, either by preventing or fixing them, is crucial if you want to keep your environment stable and reliable.
That's where auto-remediation kicks in.
What is Auto-Remediation?
Auto-Remediation, or Self-Healing, is when automation responds to alerts or events by executing actions that can prevent or fix the problem.
The simplest example of auto-remediation is cleaning up the log files of a service that has filled up the available disk space. (It happens to everybody. Admit it.) Imagine an automated action that is triggered by a monitoring system to clean the logs and prevent the service from crashing. In addition, it creates a ticket and sends a notification so the engineer can fix log rotation during business hours, and there is no need to do it in the middle of the night. Furthermore, the event-driven automation can be used for assisted troubleshooting, so when you get an alert it includes related logs, monitoring metrics/graphs, and so on.
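To make that concrete, here is a rough sketch of how such a rule could be expressed in StackStorm (the trigger name, check name, and cleanup command are placeholders; a real rule would reference whatever monitoring pack and action you actually use):
---
name: "cleanup_logs_on_disk_alert"
pack: "examples"
description: "Clean old service logs when monitoring reports a nearly full disk."
enabled: true
trigger:
  type: "monitoring.alert"            # placeholder trigger from a hypothetical monitoring pack
criteria:
  trigger.check_name:
    type: "equals"
    pattern: "check_disk_usage"
action:
  ref: "core.remote"                  # built-in StackStorm action that runs a command over SSH
  parameters:
    hosts: "{{ trigger.hostname }}"
    cmd: "sudo find /var/log -name '*.log.*' -mtime +7 -delete"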

This is what an incident resolution workflow should look like:

Auto-remediation tooling
Facebook, LinkedIn, Netflix, and other hyper-scale operators use event-driven automation and workflows, as described above. While looking for an open source solution, we found StackStorm, which was used by Netflix for the same purpose. Sometimes called IFTTT (If This, Then That) for ops, the StackStorm platform is built on the same principles as the famous Facebook FBAR (FaceBook AutoRemediation), with "infrastructure as code" and a scalable microservice architecture, and it's supported by a solid and responsive team. (They are now part of Brocade, but the project is accelerating.) StackStorm uses OpenStack Mistral as a workflow engine, and offers a rich set of sensors and actions that are easy to build and extend.
The auto-remediation approach can easily be applied when operating an OpenStack cloud in order to improve reliability. And that's a good thing, too, because OpenStack has many moving parts that can break. Event-driven automation can take care of a cloud while you sleep, handling not only basic operations such as restarting nova-api and cleaning Ceilometer logs, but also complex actions such as rebuilding the RabbitMQ cluster or fixing Galera replication.
Automation can also expedite incident resolution by “assisting” engineers with troubleshooting. For example, if monitoring detects that keystone has started to return 503 for every request, the on-call engineer can be provided with logs from every keystone node, memcached and DB state even before starting the terminal.
In building our own self-healing OpenStack cloud, we started small. Our initial POC had just three simple automations: cleaning logs, restarting services, and cleaning RabbitMQ queues. We placed them on our 1,000-node OpenStack cluster, and they ran there for three months, taking these three headaches off our operators. This experience showed us that we need to add more and more self-healing actions, so our on-call engineers can sleep better at night.
Here is the short list of issues that can be auto-remediated:

Dead process
Lack of free disk space
Overflowed rabbitmq queues
Corrupted rabbitmq mnesia
Broken database replication
Node hardware failures (e.g. triggering VM evacuation)
Capacity issue (by adding more hypervisors)

Where to see more
We'd love to give you a more detailed explanation of how we approached self-healing an OpenStack cloud. If you're at the OpenStack summit, we invite you to attend our talk on Thursday, October 27, 9:00am in Room 112, or if you are in San Jose, CA, come to the Auto-Remediation meetup on October 20th and hear us share the story there. You can also meet the StackStorm team and other operators who are making the vision of self-healing a reality.
The post Auto-remediation: making an Openstack cloud self-healing appeared first on Mirantis | The Pure Play OpenStack Company.
Quelle: Mirantis

Announcing DockerCon 2017

The Docker Team is excited to announce that the next DockerCon will be held in Austin, Texas from April 17-20. For anyone not in an event planning role, finding a venue is always an adventure. Finding a venue for a unique event such as DockerCon adds an extra layer of complexity. After inquiring about over 15 venues and visiting 3 cities, we are confident that we have chosen a great venue for DockerCon 2017 and the Docker community.
DockerCon US 2017: Austin
April 17-20, 2017
With its lively tech community, amazing restaurants, and culture, Austin is a natural fit for DockerCon. A diverse range of companies such as Dell, Whole Foods Market, Rackspace, HomeAway and many more of the hottest IT startups call Austin home. We can't wait to welcome back many returning DockerCon alumni as well as open the DockerCon doors to so many new attendees and companies in the Austin area.
One of the most exciting additions to the DockerCon program is an extra day of content! We reviewed every attendee survey from Seattle in June, debriefed with Docker Captains and others in the community, and came to the overwhelming conclusion that two days was not enough time to get the most value out of the jam-packed DockerCon agenda. In 2017, we will introduce a third day of content that will repeat the top-voted sessions, give more time to complete Hands-on Labs, and allow more time for other learning opportunities that are in the works.
Let’s get this party started!
Save the dates:

Monday April 17: Paid training, afternoon workshops and evening welcome reception
Tuesday April 18: DockerCon Day 1, After Party
Wednesday April 19: DockerCon Day 2
Thursday April 20: DockerCon Day 3 – half day of repeat top sessions, Hands-on Labs and workshops

Pre-register now for early bird pricing and we’ll send you an additional $50 discount code once DockerCon registration launches.
 
Pre-register for DockerCon
 
Calling all speakers!
We're excited to hear about all of the interesting ways you're using Docker. We're looking for a variety of talks such as cool and unique use cases and Docker hack projects, advanced technical talks, or maybe you have a great talk on tech culture. Check out our sample CFP proposals for DockerCon for more information on what the program committee is looking for when reviewing a proposal, our tips for getting a proposal accepted, and our previous talks from DockerCon 2016. Our Call for Proposals will be open November 17, 2016 – January 7, 2017.
Are you interested in learning more about sponsorship opportunities at DockerCon? Please sign up here to be among the first to receive the sponsorship prospectus.
 
Sponsor DockerCon
 
So, by now you've read this entire blog post and are now shouting, "What about DockerCon Europe?!" The truth is that we have spent many months searching for an available venue and we were unable to secure a site for this year. The reality is that the conference industry is incredibly competitive and we need to lock in venues farther in advance. For this reason we are now working on bringing DockerCon back to Europe in 2017. We will update the community as soon as we have concrete details.
 
About DockerCon
DockerCon 2017 is a three day, Docker-centric conference organized by Docker. This year's US edition will take place in Austin, TX and continue to build on the success of previous events as it grows and reflects Docker's established ecosystem and ever growing community. DockerCon will feature topics and content covering all aspects of Docker and will be suitable for Developers, DevOps, Ops, System Administrators and C-level executives. You will have ample opportunities to connect and learn about how others are using Docker. We're confident that no matter your level of expertise with Docker or your company size, you'll meet and learn from other attendees who share the same use cases and overcame the same challenges using Docker.


The post Announcing DockerCon 2017 appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Cloud-based Project DataWorks aims to make data accessible, organized

Increasingly, data is a form of currency in business. Not just the data itself, but the ability to find just the right piece of information at just the right time.
As organizations amass more and more data reaching into petabyte sizes, it can sometimes become diffuse, which can make it hard for someone to quickly find exactly the right key to unlock a barrier to progress.
To solve that challenge, IBM unveiled Project DataWorks last week, a cloud-based data organization catalog which puts all of a company’s data in one easy-to-access, intuitive dashboard. Here’s how TechCrunch describes Project DataWorks:
With natural language search, users can pull up specific data sets from those catalogs much more quickly than with traditional methods. DataWorks also touts data ingestion at speeds of 50 to 100s of Gbps.
The tool is available through the IBM Bluemix platform and uses Watson cognitive technology to raise its speed and usability.
In an interview with PCWorld, Derek Scholette, general manager of cloud data services for IBM Analytics, explained: "Analytics is no longer something in isolation for IT to solve. In the world we're entering, it's a team sport where data professionals all want to be able to operate on a platform that lets them collaborate securely in a governed manner."
Project DataWorks is open to enterprise customers but it’s also open to small businesses. It’s currently available as a pay-as-you-go service.
For more, read the full articles at TechCrunch and PCWorld.
The post Cloud-based Project DataWorks aims to make data accessible, organized appeared first on news.
Quelle: Thoughts on Cloud

Develop Cloud Applications for OpenStack on Murano, Day 5: Uploading and troubleshooting the app

The post Develop Cloud Applications for OpenStack on Murano, Day 5: Uploading and troubleshooting the app appeared first on Mirantis | The Pure Play OpenStack Company.
We're in the home stretch! So far, we've explained what Murano is, created an OpenStack cluster with Murano, built the main script that will install our application, and packaged it as a Murano app. We're finally ready to deploy the app to Murano.
Now let’s upload the PloneServerApp package to Murano.
Add the Murano app to the OpenStack Application Catalog
To upload an application to the cloud:

Log into the OpenStack Horizon dashboard.
Navigate to Applications > Manage > Packages.
Click the Import Package button.

Select the zip package that we created yesterday and click Next.
In the pop-up window you can see the information that we added to the manifest.yaml file earlier. Also, we get a notification message that Glance has started retrieving the Ubuntu image mentioned in image.lst. (This only happens if the image doesn't already exist in Glance.)

Now we just have to wait for the image to finish saving so we can move on to try out the app.  To check on that, go to Projects > Images. Wait for the status to be listed as Active rather than Saving.

Deploy the new app
Now that we've created the app, it's time to test it out in all its glory.

Navigate to Applications > Catalog > Browse.

You will find that the Plone Server has appeared with the icon from our logo.png file. Click Quick deploy and you’ll see the configuration wizard appear, with all of the information we added to the ui.yaml file in the appConfiguration form:

Click on Assign Floating IP and click Next.
You’ll then see the instanceConfiguration form we mentioned in the ui.yaml file:

Choose an appropriate instance flavor. In my case I used the "m1.small" flavor and edited it to have 1 CPU, 1 GB RAM, and 20 GB of disk space. I also shut down the Compute node VM and gave it more RAM in VirtualBox: 2 GB instead of 1 GB. You can edit flavors by navigating to Admin > System > Flavors.
Be aware that if you select a flavor that requires more hardware than your Compute node actually has, you will get an error while the instance is spawning.
Choose the instance image that we mentioned in image.lst. If no images appear in the drop-down menu, check that your image has finished uploading.
Choose a Key Pair or import it instantly by clicking the “+” button:

Click Next.
Set the Application Name and click Create:

The Plone Server application has now been successfully added to the newly created quick-env-1 environment. Click the Deploy This Environment button to start the deployment:

It may take some time for the environment to deploy:

Wait until the status has changed from Deploying to Ready:

Once it does, go to the Plone home page at http://172.16.0.134:8080 from your Host OS browser, that is, outside your OpenStack cloud:

You should see the Plone home page. If you don't, you'll need to do some troubleshooting.
Debugging and Troubleshooting Your Murano App
While deploying your Murano app you may have encountered a number of errors. Some of them could be related to spawning the VM; others may have occurred during runPloneDeploy.sh execution.
For information on errors related to spawning the VM, check the Horizon UI. Navigate to Catalog > Environments, then click the environment and open the Deployment History page. Click the Show Details button in the corresponding deployment row of the table and then go to the Logs tab. There you can see the deployment steps; any steps that failed are shown in red.
Several of the most frequently occurring errors, as well as their suggested solutions, are described in the Murano documentation.
The other type of error relates to the app installation script, runPloneDeploy.sh. As you remember, we collect all output from this script in a special log file, /var/log/runPloneDeploy.log, to help you track any possible issues. Knowing the floating IP address of the newly created Plone Server VM, we can access the log file via an SSH connection.
It's important to note, though, that because we applied a special Ubuntu image from the repository during the environment deployment, the login process has a security limitation. By default, the password authentication mechanism is turned off and the only way to connect to your VM is to use an access key pair. You can find out more about how to create and set this up here.
First log in to the VM as the default user, ubuntu:
$ ssh -i <private_key> ubuntu@<floating IP address>
You can then read the log:
$ less /var/log/runPloneDeploy.log
Now it’s possible to fix the errors that have appeared and polish the installation process.
Remember, when encountering issues with your Murano App, you can always contact the Murano team, or any other OpenStack related teams, through IRC. You can find the list of IRC channels here: IRC. Feel free to ask any questions.
Summary
In this series, we outlined the creation process of a Murano app for the ultimate enterprise CMS: Plone. We also saw how easy it is to build a Murano app from the ground up, and showed that it doesn't require you to be an OpenStack or Linux guru.
Murano is a great OpenStack service that provides application lifecycle management and dramatically simplifies the introduction of new software to the OpenStack community.
Moreover, it provides other great features not mentioned in this tutorial, such as High-Availability mode, Auto-Scaling or application dependencies management.
Try it out for yourself and get excited by how easy it is. Next time, we'll look at the steps needed to publish your Murano App in the OpenStack application catalog at http://apps.openstack.org.
Thanks for joining us!
The post Develop Cloud Applications for OpenStack on Murano, Day 5: Uploading and troubleshooting the app appeared first on Mirantis | The Pure Play OpenStack Company.
Quelle: Mirantis

Develop Cloud Applications for OpenStack on Murano, Day 4: The application, part 2: Creating the Murano App

The post Develop Cloud Applications for OpenStack on Murano, Day 4: The application, part 2: Creating the Murano App appeared first on Mirantis | The Pure Play OpenStack Company.
So far in this series, we've explained what Murano is, created an OpenStack cluster with Murano, and built the main script that will install our application. Now it's time to actually package PloneServerApp up for Murano.
In this series, we're looking at a very basic example, and we'll tell you all you need to make it work, but there are some great tutorials and references that describe this process (and more) in detail. You can find them in the official Murano documentation:

Murano package structure
Create Murano application step-by-step
Murano Programming Language Reference

So before we move on, let's just distill that down to the basics.
What we're ultimately trying to do
When we're all finished, what we want is basically a *.zip file structured in a way that Murano expects, with files that provide all of the information that it needs. There's nothing really magical about this process; it's just a matter of creating the various resources. In general, the structure of a Murano application looks something like this:
..
|_  Classes
|   |_  PloneServer.yaml
|
|_  Resources
|   |_  scripts
|       |_ runPloneDeploy.sh
|   |_  DeployPloneServer.template
|
|_  UI
|   |_  ui.yaml
|
|_  logo.png
|
|_  manifest.yaml
Obviously the filenames (and content!) will depend on your specific application, but you get the idea. (If you'd like to see the finished version of this application, you can get it from GitHub.)
When we've assembled all of these pieces, we'll zip them up and they'll be ready to import into Murano.
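Creating the archive itself is just a zip run from inside the package directory (the directory and archive names below are only examples); note that manifest.yaml and the other files must sit at the root of the archive, not inside an extra top-level folder:
$ cd PloneServerApp/
$ zip -r ../org.openstack.apps.plone.PloneServer.zip *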
Let's take a look at the individual pieces.
The individual files in a Murano package
Each of the individual files we're working with is basically just a text file.
The Manifest file
The manifest.yaml file contains the main application’s information. For our PloneServerApp, that means the following:
1. #  Plone uses GPL version 2 as its license. As of summer 2009, there are
2. #  no active plans to upgrade to GPL version 3.
3. #  You may obtain a copy of the License at
4. #
5. #       http://www.gnu.org
6. #
7.
8. Format: 1.3
9. Type: Application
10. FullName: org.openstack.apps.plone.PloneServer
11. Name: Plone CMS
12. Description: |
13. The Ultimate Open Source Enterprise CMS.
14. The Plone CMS is one of the most secure
15. website systems available. This installer
16. lets you deploy Plone in standalone mode.
17. Requires Ubuntu 14.04 image with
18. preinstalled murano-agent.
19. Author: 'Evgeniy Mashkin'
20. Tags: [CMS, WCM]
21. Classes:
22. org.openstack.apps.plone.PloneServer: PloneServer.yaml
Let’s start at Line 8:
Format: 1.3
The versioning of the manifest format is directly connected with YAQL and the version of Murano itself. See the short description of format versions and choose the format version according to the OpenStack release you are going to develop your application for. In our case, we're using Mirantis OpenStack 9.0, which is built on the Mitaka OpenStack release, so I chose the 1.3 version that corresponds to Mitaka.
Now let’s move to Line 10:
FullName: org.openstack.apps.plone.PloneServer
Here you're adding a fully qualified name for your application, including the namespace of your choice.
IMPORTANT: Don't use the io.murano namespace for your apps; it's being used for the Murano Core Library.
Lines 11 through 20 show the Name, Description, Author and Tags, which will be shown in the UI:
Name: Plone CMS

Description: |
The Ultimate Open Source Enterprise CMS.
The Plone CMS is one of the most secure
website systems available. This installer
lets you deploy Plone in standalone mode.
Requires Ubuntu 14.04 image with
preinstalled murano-agent.
Author: 'Evgeniy Mashkin'
Tags: [CMS, WCM]
Finally, on lines 21 and 22, you'll point to your application class file (which we'll build later). This file should be in the Classes directory of the package.
Classes:
org.openstack.apps.plone.PloneServer: PloneServer.yaml
Make sure to double-check all of your references, filenames, and whitespace, as mistakes here can cause errors when you upload your application package to Murano.
Execution Plan Template
The execution plan template, DeployPloneServer.template, describes the installation process of the Plone Server on a virtual machine and contains instructions to the murano-agent on what should be executed to deploy the application. Essentially, it tells Murano how to handle the runPloneDeploy.sh script we created yesterday.
Here's the DeployPloneServer.template listing for our PloneServerApp:
1. #  Plone uses GPL version 2 as its license. As of summer 2009, there are
2. #  no active plans to upgrade to GPL version 3.
3. #  You may obtain a copy of the License at
4. #
5. #       http://www.gnu.org
6. #
7. FormatVersion: 2.0.0
8. Version: 1.0.0
9. Name: Deploy Plone
10. Parameters:
11.  pathname: $pathname
12.  password: $password
13.  port: $port
14. Body: |
15.  return ploneDeploy('{0} {1} {2}'.format(args.pathname, args.password, args.port)).stdout
16. Scripts:
17.  ploneDeploy:
18.    Type: Application
19.    Version: 1.0.0
20.    EntryPoint: runPloneDeploy.sh
21.    Files: []
22.    Options:
23.      captureStdout: true
24.      captureStderr: true
Starting with lines 10 through 13, you can see that we're defining our parameters: the installation path, administrative password, and TCP port. Just as we added them on the command line yesterday, we need to tell Murano to ask the user for them.
Parameters:
 pathname: $pathname
 password: $password
 port: $port
In the Body section we have a string that describes the Python statement to execute, and how it will be executed by the Murano agent on the virtual machine:
Body: |
 return ploneDeploy('{0} {1} {2}'.format(args.pathname, args.password, args.port)).stdout
Scripts defined in the Scripts section are invoked from here, so we need to keep the order of arguments consistent with the runPloneDeploy.sh script that we developed yesterday.
Also, double-check all filenames, whitespace, and brackets. Mistakes here can cause the Murano agent to experience errors when it tries to run our installation script. If you do experience errors, connect to the spawned VM via SSH after the error has occurred and check the runPloneDeploy.log file we added for just this purpose.
Dynamic UI form definition
In order for the user to be able to set parameters such as the administrative password, we need to make sure that the user interface is set up correctly. We do this with the ui.yaml file, which contains the description of the UI forms that will be shown to users and lets them set the available installation options. The ui.yaml file for our PloneServerApp reads as follows:
1. #  Plone uses GPL version 2 as its license. As of summer 2009, there are
2. #  no active plans to upgrade to GPL version 3.
3. #  You may obtain a copy of the License at
4. #
5. #       http://www.gnu.org
6. #
7. Version: 2.3
8. Application:
9.  ?:
10.    type: org.openstack.apps.plone.PloneServer
11.  pathname: $.appConfiguration.pathname
12.  password: $.appConfiguration.password
13.  port: $.appConfiguration.port
14.  instance:
15.    ?:
16.      type: io.murano.resources.LinuxMuranoInstance
17.    name: generateHostname($.instanceConfiguration.unitNamingPattern, 1)
18.    flavor: $.instanceConfiguration.flavor
19.    image: $.instanceConfiguration.osImage
20.    keyname: $.instanceConfiguration.keyPair
21.    availabilityZone: $.instanceConfiguration.availabilityZone
22.    assignFloatingIp: $.appConfiguration.assignFloatingIP
23. Forms:
24.  - appConfiguration:
25.      fields:
26.        - name: license
27.          type: string
28.          description: GPL License, Version 2
29.          hidden: true
30.          required: false
31.        - name: pathname
32.          type: string
33.          label: Installation pathname
34.          required: false
35.          initial: '/opt/plone/'
36.          description: >-
37.            Use to specify the top-level path for installation.
38.        – name: password
39.          type: string
40.          label: Admin password
41.          required: false
42.          initial: ‘admin’
43.          description: >-
44.            Enter administrative password for Plone.
45.        – name: port
46.          type: string
47.          label: Port
48.          required: false
49.          initial: ‘8080’
50.          description: >-
51.            Specify the port that Plone will listen to
52.            on available network interfaces.
53.        – name: assignFloatingIP
54.          type: boolean
55.          label: Assign Floating IP
56.          description: >-
57.             Select to true to assign floating IP automatically.
58.          initial: false
59.          required: false
60.        – name: dcInstances
61.          type: integer
62.          hidden: true
63.          initial: 1
64.  - instanceConfiguration:
65.      fields:
66.        - name: title
67.          type: string
68.          required: false
69.          hidden: true
70.          description: Specify some instance parameters on which the application would be created
71.        - name: flavor
72.          type: flavor
73.          label: Instance flavor
74.          description: >-
75.            Select a flavor registered in OpenStack. Note that
76.            application performance depends on this parameter.
77.          requirements:
78.            min_vcpus: 1
79.            min_memory_mb: 256
80.          required: false
81.        - name: minrequirements
82.          type: string
83.          label: Minimum requirements
84.          description: |
85.            - Minimum 256 MB RAM and 512 MB of swap space per Plone site
86.            - Minimum 512 MB hard disk space
87.          hidden: true
88.          required: false
89.        - name: recrequirements
90.          type: string
91.          label: Recommended
92.          description: |
93.            - 2 GB or more RAM per Plone site
94.            - 40 GB or more hard disk space
95.          hidden: true
96.          required: false
97.        - name: osImage
98.          type: image
99.          imageType: linux
100.          label: Instance image
101.          description: >-
102.            Select a valid image for the application. The image
103.            should already be prepared and registered in Glance.
104.        - name: keyPair
105.          type: keypair
106.          label: Key Pair
107.          description: >-
108.            Select the Key Pair to control access to instances. You can log in to
109.            instances using this key pair after the application is deployed.
110.          required: false
111.        - name: availabilityZone
112.          type: azone
113.          label: Availability zone
114.          description: Select the availability zone where the application will be installed.
115.          required: false
116.        - name: unitNamingPattern
117.          type: string
118.          label: Instance Naming Pattern
119.          required: false
120.          maxLength: 64
121.          regexpValidator: '^[a-zA-Z][-_\w]*$'
122.          errorMessages:
123.            invalid: Just letters, numbers, underscores and hyphens are allowed.
124.          helpText: Just letters, numbers, underscores and hyphens are allowed.
125.          description: >-
126.            Specify a string that will be used in the instance hostname.
127.            Only A-Z, a-z, 0-9, dashes and underscores are allowed.
This is a pretty long file, but it's not as complicated as it looks.
Starting at line 7:
Version: 2.3
The format version for the UI definition is optional; if you omit it, the latest supported version is assumed. If you want to use your application with one of the previous versions, you may need to set the version field explicitly.
Moving down the file, we basically have two UI forms: appConfiguration and instanceConfiguration.
Each form contains the list of parameters that will appear on it. We place all of the parameters related to our Plone Server application on the appConfiguration form, including the path, password, and TCP port. These values will then be sent to the Murano agent to invoke the runPloneDeploy.sh script:
       - name: pathname
         type: string
         label: Installation pathname
         required: false
         initial: '/opt/plone/'
         description: >-
           Use to specify the top-level path for installation.
       - name: password
         type: string
         label: Admin password
         required: false
         initial: 'admin'
         description: >-
           Enter administrative password for Plone.
       - name: port
         type: string
         label: Port
         required: false
         initial: '8080'
         description: >-
           Specify the port that Plone will listen to
           on available network interfaces.
For each parameter we also set initial values that will be used as defaults.
On the instanceConfiguration form, we'll place all of the parameters related to the instances that will be spawned during deployment. Hardware limitations, such as minimum hardware requirements, go in the requirements section:
       - name: flavor
         type: flavor
         label: Instance flavor
         description: >-
           Select a flavor registered in OpenStack. Note that
           application performance depends on this parameter.
         requirements:
           min_vcpus: 1
           min_memory_mb: 256
         required: false
Also, we need to add notices on the UI form telling users about Plone's minimum and recommended hardware requirements:
       - name: minrequirements
         type: string
         label: Minimum requirements
         description: |
           - Minimum 256 MB RAM and 512 MB of swap space per Plone site
           - Minimum 512 MB hard disk space
         hidden: true
         required: false
       - name: recrequirements
         type: string
         label: Recommended
         description: |
           - 2 GB or more RAM per Plone site
           - 40 GB or more hard disk space
MuranoPL Class Definition
Perhaps the most complicated part of the application is the class definition. Contained in PloneServer.yaml, it describes the methods Murano uses to deploy and manage the application. In this case, the application class looks like this:
1. #  Plone uses GPL version 2 as its license. As of summer 2009, there are
2. #  no active plans to upgrade to GPL version 3.
3. #  You may obtain a copy of the License at
4. #
5. #       http://www.gnu.org
6. #
7. Namespaces:
8.  =: org.openstack.apps.plone
9.  std: io.murano
10.  res: io.murano.resources
11.  sys: io.murano.system
12. Name: PloneServer
13. Extends: std:Application
14. Properties:
15.  instance:
16.    Contract: $.class(res:Instance).notNull()
17.  pathname:
18.    Contract: $.string()
19.  password:
20.    Contract: $.string()
21.  port:
22.    Contract: $.string()
23. Methods:
24.  .init:
25.    Body:
26.      - $._environment: $.find(std:Environment).require()
27.  deploy:
28.    Body:
29.      - If: not $.getAttr(deployed, false)
30.        Then:
31.          - $._environment.reporter.report($this, 'Creating VM for Plone Server.')
32.          - $securityGroupIngress:
33.            - ToPort: 80
34.              FromPort: 80
35.              IpProtocol: tcp
36.              External: true
37.            - ToPort: 443
38.              FromPort: 443
39.              IpProtocol: tcp
40.              External: true
41.            - ToPort: $.port
42.              FromPort: $.port
43.              IpProtocol: tcp
44.              External: true
45.          - $._environment.securityGroupManager.addGroupIngress($securityGroupIngress)
46.          - $.instance.deploy()
47.          - $resources: new(sys:Resources)
48.          - $template: $resources.yaml('DeployPloneServer.template').bind(dict(
49.                pathname => $.pathname,
50.                password => $.password,
51.                port => $.port
52.              ))
53.          - $._environment.reporter.report($this, 'Instance is created. Deploying Plone')
54.          - $.instance.agent.call($template, $resources)
55.          - $._environment.reporter.report($this, 'Plone Server is installed.')
56.          - If: $.instance.assignFloatingIp
57.            Then:
58.              - $host: $.instance.floatingIpAddress
59.            Else:
60.              - $host: $.instance.ipAddresses.first()
61.          - $._environment.reporter.report($this, format('Plone Server is available at http://{0}:{1}', $host, $.port))
62.          - $.setAttr(deployed, true)
First we set the namespaces and class name, then define the properties we'll be using later. We can then move on to the methods.
Besides the standard .init method, our PloneServer class has one main method, deploy, which handles instance spawning and configuration. The deploy method performs the following tasks:

It configures a security group, opening TCP ports 80 and 443 as well as our custom TCP port (as specified by the user):
         - $securityGroupIngress:
           - ToPort: 80
             FromPort: 80
             IpProtocol: tcp
             External: true
           - ToPort: 443
             FromPort: 443
             IpProtocol: tcp
             External: true
           - ToPort: $.port
             FromPort: $.port
             IpProtocol: tcp
             External: true
         - $._environment.securityGroupManager.addGroupIngress($securityGroupIngress)

It initiates the spawning of a new virtual machine:
        - $.instance.deploy()

It creates a Resources object, then loads the execution plan template (in the Resources directory) into it, updating the plan with parameters taken from the user:
         - $resources: new(sys:Resources)
         - $template: $resources.yaml('DeployPloneServer.template').bind(dict(
               pathname => $.pathname,
               password => $.password,
               port => $.port
             ))

It sends the ready-to-execute plan to the Murano agent:
         - $.instance.agent.call($template, $resources)

Lastly, it determines which address to report for the newly spawned machine: the floating IP, if the user chose to assign one, or the instance's first fixed IP otherwise:
         - If: $.instance.assignFloatingIp
           Then:
             - $host: $.instance.floatingIpAddress
           Else:
             - $host: $.instance.ipAddresses.first()

Before we move on, just a few words about floating IPs. Here are the key points from Piotr Siwczak's article “Configuring Floating IP addresses for Networking in OpenStack Public and Private Clouds”:
“The floating IP mechanism, besides exposing instances directly to the Internet, gives cloud users some flexibility. Having “grabbed” a floating IP from a pool, they can shuffle them (i.e., detach and attach them to different instances on the fly) thus facilitating new code releases and system upgrades. For sysadmins it poses a potential security risk, as the underlying mechanism (iptables) functions in a complicated way and lacks proper monitoring from the OpenStack side.”
Be aware that OpenStack is changing rapidly and some of the article's statements may become obsolete, but the point is that there are both advantages and disadvantages to using floating IPs.
Image File
In order to use OpenStack, you generally need an image to serve as the template for the VMs you spawn. In some cases those images will already be part of your cloud, but if not, you can list them in the image.lst file. When an image is listed in this file and included in your package, it will be uploaded to your cloud automatically. When importing images from the image.lst file, the client simply searches the images directory of the package for a file with the same name as the image's name attribute.
An image file is optional, but to make sure your Murano App works, you need to point it at an image with a pre-installed Murano agent. In our case, that's Ubuntu 14.04 with the murano-agent preinstalled:
Images:
- Name: 'ubuntu-14.04-m-agent.qcow2'
  Hash: '393d4f2a7446ab9804fc96f98b3c9ba1'
  Meta:
    title: 'Ubuntu 14.04 x64 (pre-installed murano-agent)'
    type: 'linux'
  DiskFormat: qcow2
  ContainerFormat: bare
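The Hash value above is 32 hexadecimal characters, which looks like an MD5 checksum; assuming that's what's expected here, you can generate the value for your own image file with md5sum and paste the output into the list:
$ md5sum ubuntu-14.04-m-agent.qcow2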
Application Logo
The logo.png file is a preview image that will be visible to users in the application catalog. Having a logo file is optional.

Create a Package
Finally, now that all of the files are ready, go to the package files directory (where the manifest.yaml file is placed) and create a .zip package:
$ zip -r org.openstack.apps.plone.PloneServer.zip *
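If you want to double-check that everything made it into the archive (and that the files sit at the top level of the package rather than inside an extra directory), you can list its contents:
$ unzip -l org.openstack.apps.plone.PloneServer.zip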
Tomorrow we'll wrap up by showing you how to add your new package to the Murano application catalog.

Develop Cloud Applications for OpenStack on Murano, Part 3: The application, part 1: Understanding Plone deployment

OK, so far, in Part 1 we talked about what Murano is and why you need it, and in Part 2 we put together the development environment, which consists of a text editor and a small OpenStack cluster with Murano. Now let's start building the actual Murano App.
What we're trying to accomplish
In our case, we're going to create a Murano App that enables the user to easily install the Plone CMS. We'll call it PloneServerApp.
Plone is an enterprise level CMS (think WordPress on steroids).  It comes with its own installer, but it also needs a variety of libraries and other resources to be available to that installer.
Our task will be to create a Murano App that lets the user supply the information the installer needs, then creates the necessary resources (such as a VM), configures them properly, and executes the installer.
To do that, we'll start by looking at the installer itself, so we understand what's going on behind the scenes. Once we've verified that we have a working script, we can go ahead and build a Murano package around it.
Plone Server Requirements
First of all, let’s clarify the resources needed to install the Plone server in terms of the host VM and preinstalled software and libraries. We can find this information in the official Plone Installation Requirements.
Host VM Requirements
Plone supports nearly all Operating Systems, but for the purposes of our tutorial, let’s suppose that our Plone Server needs to run on a VM under Ubuntu.
As far as hardware requirements, the Plone server requires the following:
Minimum requirements:

A minimum of 256 MB RAM and 512 MB of swap space per Plone site
A minimum of 512 MB hard disk space

Recommended requirements:

2 GB or more of RAM per Plone site
40 GB or more of hard disk space

The Plone Server also requires the following to be preinstalled:

Python 2.7 (dev), built with support for expat (xml.parsers.expat), zlib and ssl.
Libraries:

libz (dev),
libjpeg (dev),
readline (dev),
libexpat (dev),
libssl or openssl (dev),
libxml2 >= 2.7.8 (dev),
libxslt >= 1.1.26 (dev).

The PloneServerApp will need to make sure that all of this is available.
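If you'd like to verify these prerequisites by hand on a candidate VM before automating anything, a quick sanity check (a sketch, assuming python2.7 and the relevant dev packages are already installed) might be:
$ python2.7 -c "import xml.parsers.expat, zlib, ssl; print('OK')"
$ xml2-config --version   # libxml2, should be >= 2.7.8
$ xslt-config --version   # libxslt, should be >= 1.1.26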
Defining what the PloneServerApp does
Next we're going to define the deployment plan. The PloneServerApp must execute all of the necessary steps automatically to get the Plone Server working and to make it available from outside your OpenStack cloud, so we need to know how to make that happen.
The PloneServerApp should follow these steps:

Ask the user to specify the parameters of the host VM, such as the number of CPUs, RAM, disk space, OS image file, etc. The app should then check that the requested VM meets all of Plone's minimum hardware requirements.
Ask the user to provide values for the mandatory and optional Plone Server installation parameters.
Spawn a single host VM, according to the user's chosen VM flavor.
Install the Plone Server and all of its required software and libraries on the spawned host VM. We'll have the PloneServerApp do this by launching an installation script (runPloneDeploy.sh).

Let's start at the bottom and make sure we have a working runPloneDeploy.sh script; we can then look at incorporating that into the PloneServerApp.
Creating and debugging a script that fully deploys the Plone Server on a single VM
We'll need to build and test our script on an Ubuntu machine; if you don't have one handy, go ahead and deploy one in your new OpenStack cluster. (When we're done debugging, you can terminate it to clean up the mess.)
Our runPloneDeploy.sh will be based on the Universal Plone UNIX Installer. You can get more details about it in the official Plone Installation Documentation, but the easiest way to get familiar with it is to follow these steps:

Download the latest version of Plone:
$ wget --no-check-certificate https://launchpad.net/plone/5.0/5.0.4/+download/Plone-5.0.4-UnifiedInstaller.tgz

Unzip the archive:
$ tar -xf Plone-5.0.4-UnifiedInstaller.tgz
Go to the folder containing the installation script…
$ cd Plone-5.0.4-UnifiedInstaller

…and see all installation options provided by the Universal UNIX Plone Installer:
$ ./install.sh --help

The Universal UNIX Installer lets you choose an installation mode:

a standalone mode, in which a single Zope web application server will be installed, or
a ZEO cluster mode, in which a ZEO server and Zope instances will be installed.

It also lets you set several optional installation parameters. If you don’t set these, default values will be used.
In this tutorial, let's choose the standalone installation mode and make it possible to configure its most significant parameters, which are the:

administrative user password
top-level path on the host VM in which to install the Plone Server
TCP port on which the Plone site will be available from outside the VM and outside your OpenStack cloud

Now, if we were installing Plone manually, we would feed these values into the script on the command line, or set them in configuration files. To automate the process, we're going to create a new script, runPloneDeploy.sh, which gets those values from the user, then feeds them to the installer programmatically.
So our script should be invoked as follows:
$ ./runPloneDeploy.sh <InstallationPath> <AdministrativePassword> <TCPPort>
For example:
$ ./runPloneDeploy.sh "/opt/plone/" "YetAnotherAdminPassword" "8080"
The runPloneDeploy.sh script
Let's start by taking a look at the final version of the install script, and then we'll pick it apart.
1. #!/bin/bash
2. #
3. #  Plone uses GPL version 2 as its license. As of summer 2009, there are
4. #  no active plans to upgrade to GPL version 3.
5. #  You may obtain a copy of the License at
6. #
7. #       http://www.gnu.org
8. #
9.
10. PL_PATH="$1"
11. PL_PASS="$2"
12. PL_PORT="$3"
13.
14. # Write log. Redirect stdout & stderr into log file:
15. exec &> /var/log/runPloneDeploy.log
16.
17. # echo "Installing all packages."
18. sudo apt-get update
19.
20. # Install the operating system software and libraries needed to run Plone:
21. sudo apt-get -y install python-setuptools python-dev build-essential libssl-dev libxml2-dev libxslt1-dev libbz2-dev libjpeg62-dev
22.
23. # Install optional system packages for the handling of PDF and Office files. Can be omitted:
24. sudo apt-get -y install libreadline-dev wv poppler-utils
25.
26. # Download the latest Plone unified installer:
27. wget --no-check-certificate https://launchpad.net/plone/5.0/5.0.4/+download/Plone-5.0.4-UnifiedInstaller.tgz
28.
29. # Unzip the latest Plone unified installer:
30. tar -xvf Plone-5.0.4-UnifiedInstaller.tgz
31. cd Plone-5.0.4-UnifiedInstaller
32.
33. # Set the port that Plone will listen to on available network interfaces. Editing "http-address" param in buildout.cfg file:
34. sed -i "s/^http-address = [0-9]*$/http-address = ${PL_PORT}/" buildout_templates/buildout.cfg
35.
36. # Run the Plone installer in standalone mode
37. ./install.sh --password="${PL_PASS}" --target="${PL_PATH}" standalone
38.
39. # Start Plone
40. cd "${PL_PATH}/zinstance"
41. bin/plonectl start
The first line states which shell should execute the commands:
#!/bin/bash
Lines 2-8 are comments describing the license under which Plone is distributed:
#
#  Plone uses GPL version 2 as its license. As of summer 2009, there are
#  no active plans to upgrade to GPL version 3.
#  You may obtain a copy of the License at
#
#       http://www.gnu.org
#
The next three lines assign the script's input arguments to their corresponding variables:
PL_PATH="$1"
PL_PASS="$2"
PL_PORT="$3"
It's almost impossible to write a script with no errors, so line 15 sets up logging. It redirects both the stdout and stderr output of each command to a log file for later analysis:
exec &> /var/log/runPloneDeploy.log
Lines 18-31 (inclusive) are taken straight from the Plone Installation Guide:
sudo apt-get update

# Install the operating system software and libraries needed to run Plone:
sudo apt-get -y install python-setuptools python-dev build-essential libssl-dev libxml2-dev libxslt1-dev libbz2-dev libjpeg62-dev

# Install optional system packages for the handling of PDF and Office files. Can be omitted:
sudo apt-get -y install libreadline-dev wv poppler-utils

# Download the latest Plone unified installer:
wget --no-check-certificate https://launchpad.net/plone/5.0/5.0.4/+download/Plone-5.0.4-UnifiedInstaller.tgz

# Unzip the latest Plone unified installer:
tar -xvf Plone-5.0.4-UnifiedInstaller.tgz
cd Plone-5.0.4-UnifiedInstaller
Unfortunately, the Unified UNIX Installer doesn't give us a way to configure the TCP port as an argument of the install.sh script, so we need to edit it in buildout.cfg before running the main install.sh script.
At line 34 we set the desired port using a sed command:
sed -i "s/^http-address = [0-9]*$/http-address = ${PL_PORT}/" buildout_templates/buildout.cfg
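If you're debugging the script and want to confirm that the substitution actually took effect, a quick check (not part of the installer itself) is to grep for the edited setting:
$ grep '^http-address' buildout_templates/buildout.cfg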
Then at line 37 we launch the Plone Server installation in standalone mode, passing in the other two parameters:
./install.sh --password="${PL_PASS}" --target="${PL_PATH}" standalone
After setup is done, on line 40, we change to the directory where Plone was installed:
cd "${PL_PATH}/zinstance"
And finally, on line 41, we launch the Plone service:
bin/plonectl start
Also, please don't forget to leave comments before every executed command in order to make your script easy to read and understand. (This is especially important if you'll be distributing your app.)
Run the deployment script
Check your script, then spawn a standalone VM with an appropriate OS (in our case, Ubuntu 14.04) and execute the runPloneDeploy.sh script to test and debug it. (Make sure to set it as executable and, if necessary, to run it as root or using sudo!)
You'll use the same format we discussed earlier:
$ ./runPloneDeploy.sh <InstallationPath> <AdministrativePassword> <TCPPort>
For example:
$ ./runPloneDeploy.sh "/opt/plone/" "YetAnotherAdminPassword" "8080"
Once the script is finished, check the outcome:

Find where Plone Server was installed on your VM using the find command, or by checking the directory you specified on the command line.
Try to visit the address http://127.0.0.1:[Port], where [Port] is the TCP port that you passed as an argument to the runPloneDeploy.sh script (see the quick check after this list).
Try to log in to Plone using the "admin" username and the [Password] that you passed as an argument to the runPloneDeploy.sh script.
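As a quick smoke test for the second item, you can probe the port from the VM itself with curl (8080 here is just the example value we passed on the command line):
$ curl -I http://127.0.0.1:8080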

If something doesn't seem to be right, check the runPloneDeploy.log file for errors.
As you can see, our script is quite short, but it does the whole installation on a single VM. There are undoubtedly several ways in which you could improve it, such as smarter error handling (see the sketch after this paragraph), passing in more customizations, or enabling Plone autostart. It's all up to you.
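As a small illustration of the error-handling idea, here's a hedged sketch of an argument check that could go near the top of runPloneDeploy.sh (it reuses the usage format shown earlier and is not part of the script we tested above):
# Abort early if any of the three required arguments is missing:
if [ $# -lt 3 ]; then
  echo "Usage: $0 <InstallationPath> <AdministrativePassword> <TCPPort>" >&2
  exit 1
fi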
In part 4, we'll turn this script into an actual Murano App.