Get Mirantis OpenStack 9.1 from the New Mirantis Community Site

Mirantis has launched a Software Community site where you can download our latest offerings and find related resources, including our recently released Mirantis OpenStack 9.1 maintenance update.
The Mirantis Software Community site is your go-to place to find the latest downloads, repositories, documentation and other materials for not just Mirantis-branded software but also the various open source projects we're working on, including Fuel, StackLight, Salt, Kubernetes, Ceph and OpenContrail. Additionally, you'll find links to relevant plugins and add-ons, from both Mirantis and our Unlocked Partners.
Mirantis Software Community site
Mirantis OpenStack 9.1
The Community Site currently features Mirantis OpenStack 9.1, which has several enhancements for lifecycle management, including a streamlined mechanism for maintenance updates. Additionally, MOS 9.1 provides new tabs in Fuel for Workflows and History where we’ve made it easier to manage custom deployment workflows and view details about in-progress or completed deployment tasks. We’ve also added support for event-driven task execution for further deployment automation, targeted diagnostic snapshots for reduced footprint, and various security features.
Deployment History tab in Fuel

Learn more in our technical blog or release notes
View MOS 9.0 to 9.1 update instructions

Please note that in order to install the MOS 9.1 update package, you must first have MOS 9.0 installed.
Get Involved and Contribute
Besides providing software and related technical materials, the Community site is also a starting point for you to get involved and contribute to the community. We invite you to join a mailing list, jump on an IRC channel or submit to a Launchpad community page or Q&A forum; we've provided relevant links for you to quickly get in touch and offer your ideas and expertise, including information on how to contribute.
We've also set up a monthly community newsletter, geared towards users and operators. Subscribe to get information on the latest software and related resources from both Mirantis and the open source projects we're contributing to.
The post Get Mirantis OpenStack 9.1 from the New Mirantis Community Site appeared first on Mirantis | The Pure Play OpenStack Company.

What you missed at OpenStack Barcelona

The OpenStack Summit in Barcelona was, in some ways, like those that had preceded it, and in other ways it was very different. As in previous years, the community showed off large customer use cases, but there was something different this year: whereas before it had been mostly early adopters (and the same early adopters, for a time), this year there were talks from new faces with very large use cases, such as Sky TV and Banco Santander.
And why not? Statistically, OpenStack seems to have turned a corner: the semi-annual user survey shows that workloads are no longer just development and testing but actual production, that users are no longer limited to huge corporations but also include small and medium-sized businesses, and that containers have gone from an existential threat to a solution to work with rather than fight. Even concerns about interoperability seem, finally, to have been squashed.
Let's look at some of the highlights of the week.

It's traditional to bring large users up on stage during the keynotes, but this year, with users such as Spain's largest bank, Banco Santander, Britain's broadcaster Sky UK, the world's largest particle physics laboratory, CERN, and the world's largest retailer, Walmart, it seemed more like showing what OpenStack can do than in previous years, when it was more about proving that anybody was actually using it in the first place.
For example, Cambridge's Dr. Rosie Bolton talked about the SKA radio observatory that will look at 65,000 frequency channels, consuming and destroying 1.3 zettabytes of data every six hours. The project will run for 50 years and cost over a billion dollars.

This.is.Big.Data @OpenStack   pic.twitter.com/XgT3eEjDVh
— Sean Kerner (@TechJournalist) October 25, 2016

OpenStack Foundation CEO Mark Collier also introduced enhancements to the OpenStack Project Navigator, which provides information on the individual projects and their maturity, corporate diversity, adoption, and so on. The Navigator now includes a Sample Configs section, which provides the projects that are normally used for various use cases, such as web applications, eCommerce, and high throughput computing.
Research from 451 Research
The Foundation also talked about findings from a new 451 Research report that looked at OpenStack adoption and challenges.  
Key findings from the 451 Research report include:

Mid-market adoption shows that OpenStack use is not limited to large enterprises. Two-thirds of respondents (65 percent) are in organizations of between 1,000 and 10,000 employees.
OpenStack-powered clouds have moved beyond small-scale deployments. Approximately 72 percent of OpenStack enterprise deployments are between 1,000 and 10,000 cores in size. Additionally, five percent of OpenStack clouds among enterprises top the 100,000 core mark.
OpenStack supports workloads that matter to enterprises, not just test and dev. These include infrastructure services (66 percent), business applications and big data (60 percent and 59 percent, respectively), and web services and ecommerce (57 percent).
OpenStack users can be found in a diverse cross section of industries. While 20 percent cited the technology industry, the majority come from manufacturing (15 percent), retail/hospitality (11 percent), professional services (10 percent), healthcare (7 percent), insurance (6 percent), transportation (5 percent), communications/media (5 percent), wholesale trade (5 percent), energy & utilities (4 percent), education (3 percent), financial services (3 percent) and government (3 percent).
Increasing operational efficiency and accelerating innovation/deployment speed are top business drivers for enterprise adoption of OpenStack, at 76 and 75 percent, respectively. Supporting DevOps is a close second, at 69 percent. Reducing cost and standardizing on OpenStack APIs were close behind, at 50 and 45 percent, respectively.

The report talked about the challenge OpenStack faces from containers in the infrastructure market, but contrary to the notion that more companies were leaning on containers than OpenStack, the report pointed out that OpenStack users are adopting containers at a faster rate than the rest of the enterprise market, with 55 percent of OpenStack users also using containers, compared to just 17 percent across all respondents.
According to Light Reading, "451 Research believes OpenStack will succeed in private cloud and providing orchestration between public cloud and on-premises and hosted OpenStack."
The Fall 2016 OpenStack User Survey
The OpenStack Summit is also where we hear the results of the semi-annual user survey. In this case, the key findings among OpenStack deployments include:

Seventy-two percent of OpenStack users cite cost savings as their No. 1 business driver.
The Net Promoter Score (NPS) for OpenStack deployments—an indicator of user satisfaction—continues to tick up, eight points higher than a year ago.
Containers continue to lead the list of emerging technologies, as they have for three consecutive survey cycles. In the same question, interest in NFV and bare metal is significantly higher than a year ago.
Kubernetes shows growth as a container orchestration tool.
Seventy-one percent of deployments catalogued are in “production” versus in testing or proof of concept. This is a 20 percent increase year over year.
OpenStack is adopted by companies of every size. Nearly one-quarter of users are organizations smaller than 100 people.

New this year is the ability to explore the full data, rather than just relying on highlights.
Community announcements
Also announced during the keynotes were new Foundation Gold members, the winner of the SuperUser award, and progress on the Foundation's Certified OpenStack Administrator exam.
The OpenStack Foundation charter allows for 24 Gold member companies, who elect 8 Board Directors to represent them all. (The rest of the board includes one director chosen by each of the 8 Platinum member companies, and 8 individual directors elected by the community at large.) Gold member companies must be approved by existing board members, and this time around City Network, Deutsche Telekom, 99Cloud and China Mobile were added.
China Mobile was also given the Superuser award, which honors a company's commitment to and use of OpenStack.
Meanwhile, in Austin, the Foundation announced the Certified OpenStack Administrator exam, and in the past six months, 500 individuals have taken advantage of the opportunity.
And then there were the demos…
While demos used to exist simply to show how the software works, that now seems to be a given, and instead the demos tackled serious issues. For example, Network Functions Virtualization is a huge subject for OpenStack users (in fact, 86% of telcos say OpenStack will be essential to their adoption of the technology), but what is it, exactly? Mark Collier and representatives of the OPNFV and Vitrage projects demonstrated how OpenStack applies in this case, showing how a High Availability Virtual Network Function (VNF) enables the system to keep a mobile phone call from disconnecting even if a cable or two is cut. (In this case, literally, as Mark Collier wielded a comically huge pair of scissors against the hardware.)
But perhaps the demo that got the most attention wasn't so much a demo as a challenge. One of the criticisms constantly levied against OpenStack is that there's no "vanilla" version: that despite the claims of freedom from lock-in, each distribution of OpenStack is so different from the others that it's impossible to move an application from one distro to another.
To fight that charge, the OpenStack community has been developing RefStack, a series of tests that a distro must pass in order to be considered "OpenStack". But beyond that, IBM issued the "Interoperability Challenge," which required teams to take a standard deployment tool (in this case, based on Ansible) and use it, unmodified, to create a WordPress-hosting LAMP stack.
In the end, 18 companies joined the challenge, and 16 of them appeared on stage to simultaneously take part.
So the question remained: would it work?  See for yourself:

Coming up next
The next OpenStack Summit will be in Boston, May 8-12, 2017. For the first time, however, it won't include the OpenStack Design Summit, which will be replaced by a separate Project Teams Gathering, so it's likely to once again have a different feel and flavor as the community, and the OpenStack industry, grows.
The post What you missed at OpenStack Barcelona appeared first on Mirantis | The Pure Play OpenStack Company.

Tieto’s path to containerized OpenStack, or How I learned to stop worrying and love containers

Tieto is a leading cloud service provider in Northern Europe, with over 150 cloud customers in the region and revenues in the neighborhood of €1.5 billion (with a "b"). So when the company decided to take the leap into OpenStack, it was a decision that wasn't taken lightly, or made without very strict requirements.
Now, we've been talking a lot about containerized OpenStack here at Mirantis lately, and at the OpenStack Summit in Barcelona, our Director of Product Engineering will join Tieto's Cloud Architect Lukáš Kubín to explain the company's journey from a traditional architecture to a fully adaptable cloud infrastructure. So we wanted to take a moment and ask the question:
How does a company decide that containerized OpenStack is a good idea?
What Tieto wanted
At its heart, Tieto wanted to deliver a bimodal multicloud solution that would help customers digitize their businesses. In order to do that, it needed an infrastructure in which it could have confidence, and OpenStack was chosen as the platform for cloud-native application delivery. The company had the following goals:

Remove vendor lock-in
Achieve the elasticity of a seamless on-demand capacity fulfillment
Rely on robust automation and orchestration
Adopt innovative open source solutions
Implement Infrastructure as Code

It was this last item, implementing Infrastructure as Code, that was perhaps the biggest challenge from an OpenStack standpoint.
Where we started
In fact, Tieto had been working with OpenStack since 2013, evaluating OpenStack Havana and Icehouse through internal software development projects; at that time, the target architecture included Neutron and Open vSwitch.
By 2015, the company was providing scale-up focused IaaS cloud offerings and unique application-focused PaaS services, but what was lacking was a shared platform with fully API-controlled infrastructure for horizontally scalable workloads.
Finally, this year, the company announced its OpenStack Cloud offering, based on the OpenStack distribution of tcp cloud (now part of Mirantis), and OpenContrail rather than Open vSwitch.
Why OpenContrail? The company cited several reasons:

Licensing: OpenContrail is an open source solution, but commercial support is available from vendors such as Mirantis.
High Availability: OpenContrail includes native HA support.
Cloud gateway routing: North-South traffic must be routed on physical edge routers instead of software gateways to work with existing solutions.
Performance: OpenContrail provides excellent pps, bandwidth, scalability, and so on (up to 9.6 Gbps).
Interconnection between SDN and Fabric: OpenContrail supports dynamic legacy connections through EVPN or ToR switches.
Containers: OpenContrail includes support for containers, making it possible to use one networking framework for multiple environments.

Once completed, the Tieto Proof of Concept cloud included:

OpenContrail 2.21
20 compute nodes
Glance and Cinder running on Ceph
Heat orchestration

Tieto had achieved Infrastructure as Code, in that deployment and operations were controlled through OpenStack Salt formulas. This architecture enabled the company to use DevOps principles, in that they could use declarative configurations that could be stored in a repository and re-used as necessary.
What's more, the company had an architecture that worked, and that included commercial support for OpenContrail (through Mirantis).
But there was still something missing.
What was missing
With operations support and Infrastructure as Code, Tieto's OpenStack Cloud was already beyond what many deployments ever achieve, but it still wasn't as straightforward as the company would have liked.
As designed, the OpenStack architecture consisted of almost two dozen VMs on at least 3 physical KVM nodes, and that was just the control plane!

As you might imagine, trying to keep all of those VMs up to date through operating system updates and other changes made operations more complex than they needed to be. Any time an update needed to be applied, it had to be applied to each and every VM. Sure, that process was easier because of the DevOps advantages introduced by the OpenStack-Salt formulas that were already in the repository, but that was still an awful lot of moving parts.
There had to be a better way.
How to meet that challenge
That "better way" involves treating OpenStack as a containerized application in order to take advantage of the efficiencies this architecture enables, including:

Easier operations, because each service no longer has its own VM, with its own operating system to worry about
Better reliability and easier manageability, because containers and docker files can be tested as part of a CI/CD workflow
Easier upgrades, because once OpenStack has been converted to a microservices architecture, it's much easier to simply replace one service
Better performance and scalability, because the containerized OpenStack services can be orchestrated by a tool such as Kubernetes.

So that's the "why". But what about the "how"? Well, that's a tale for another day, but if you'll be in Barcelona, join us at 12:15pm on Wednesday to get the full story and maybe even see a demo of the new system in action!
The post Tieto's path to containerized OpenStack, or How I learned to stop worrying and love containers appeared first on Mirantis | The Pure Play OpenStack Company.

53 new things to look for in OpenStack Newton

OpenStack Newton, the technology's 14th release, shows just how far we've come: where we used to focus on basic things, such as supporting specific hypervisors or enabling basic SDN capabilities, now that's a given, and we're talking about how OpenStack has reached its goal of supporting cloud-native applications in all of their forms: virtual machines, containers, and bare metal.
There are hundreds of changes and new features in OpenStack Newton, and you can see some of the most important in our What's New in OpenStack Newton webinar. Meanwhile, as we do with each release, let's take a look at 53 things that are new in OpenStack Newton.
Compute (Nova)

Get me a network enables users to let OpenStack do the heavy lifting rather than having to understand the underlying networking setup.
A default policy means that users no longer have to provide a full policy file; instead they can provide just those rules that are different from the default.
Mutable config lets you change configuration options for a running Nova service without having to restart it.  (This is available for a limited number of options, such as debugging settings, but the framework is in place for the list to expand.)
Placement API gives you more visibility into and control over resources such as Resource providers, Inventories, Allocations and Usage records.
Cells v2, which enables you to segregate your data center into sections for easier manageability and scalability, has been revamped and is now feature-complete.

Network (Neutron)

802.1Q tagged VM connections (VLAN aware VMs) enables VNFs to target specific VMs.
The ability to create VMs without IP Address means you  can create a VM with no IP address and specify complex networking later as a separate process.
Specific pools of external IP addresses let you optimize resource placement by controlling IP decisions.
OSProfiler support lets you find bottlenecks and troubleshoot interoperability issues.
No downtime API service upgrades

Storage (Cinder, Glance, Swift)
Cinder

Microversions let developers add new features that you can access without breaking the main version.
Rolling upgrades let you update to Newton without having to take down the entire cloud.
enabled_backends config option defines which backend types are available for volume creation.
Retype volumes from encrypted to not encrypted, and back again after creation.
Delete volumes with snapshots using the cascade feature rather than having to delete the snapshots first (see the sketch after this list).
The Cinder backup service can now be scaled to multiple instances for better reliability and scalability.
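As a quick illustration of the cascade feature mentioned above, here is a minimal sketch using python-cinderclient. The endpoint, credentials, and volume ID are placeholders, and the cascade keyword mirrors the v3 API's cascade query parameter, so older client releases may not accept it:

from keystoneauth1 import identity, session
from cinderclient import client

# Placeholder credentials -- substitute values for your own cloud.
auth = identity.Password(auth_url="http://controller:5000/v3",
                         username="demo", password="secret",
                         project_name="demo",
                         user_domain_id="default",
                         project_domain_id="default")
cinder = client.Client("3", session=session.Session(auth=auth))

# Delete a volume together with its snapshots in a single call,
# instead of having to delete each snapshot first.
volume = cinder.volumes.get("11111111-2222-3333-4444-555555555555")
cinder.volumes.delete(volume, cascade=True)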

Glance

Glare, the Glance Artifact Repository, provides the ability to store more than just images.
A trust concept for long-lived snapshots makes it possible to avoid errors on long-running operations.
The new restrictive default policy means that all operations are locked down unless you provide access, rather than the other way around.

Swift

Object versioning lets you keep multiple copies of an individual object, and choose whether to keep all versions, or just the most recent.
Object encryption provides some measure of confidentiality should your disk be separated from the cluster.
Concurrent bulk-deletes speed up operations.

Other core projects (Keystone, Horizon)
Keystone

Simplified configuration setup
PCI-DSS support for password configuration options
Credentials encrypted at rest

Horizon

You can now exercise more control over user operations with parameters such as IMAGES_ALLOW_LOCATION, TOKEN_DELETE_DISABLED, LAUNCH_INSTANCE_DEFAULTS
Horizon now works if only Keystone is deployed, making it possible to use Horizon to manage a Swift-only deployment.
Horizon now checks for Network IP availability rather than enabling users to set bad configurations.
Be more specific when setting up networking by restricting the CIDR range for a user private network, or specify a fixed IP or subnet when creating a port.
Manage Consistency Groups.

Containers (Magnum, Kolla, Kuryr)
Magnum

Magnum is now more about container orchestration engines (COEs) than containers, and can now deploy Swarm, Kubernetes, and Mesos.
The API service is now protected by SSL.
You can now use Kubernetes on bare metal.
Asynchronous cluster creation improves performance for complex operations.

Kolla

You can now use Kolla to deploy containerized OpenStack to bare metal.

Kuryr

Use Neutron networking capabilities in containers.
Nest VMs through integration with Magnum and Neutron.

Additional projects (Heat, Ceilometer, Fuel, Murano, Ironic, Community App Catalog, Mistral)
Heat

Use DNS resolution and integration with an external DNS.
Access external resources using the external_id attribute.

Ceilometer

New REST API that makes it possible to use services such as Gnocchi rather than just interacting with the database.
Magnum support.

FUEL

Deploy Fuel without having to use an ISO.
Improved life cycle management user experience, including Infrastructure as Code.
Container-based deployment possibilities.

Murano

Use the new Application Development Framework to build more complex applications.
Enable users to deploy your application across multiple regions for better reliability and scalability.
Specify that when resources are no longer needed, they should be deallocated.

Ironic

You can now have multiple nova-compute services using Ironic without causing duplicate entries.
Multi-tenant networking makes it possible for more than one tenant to use ironic without sharing network traffic.
Specify granular access restrictions to the REST API rather than just turning it off or on.

Community App Catalog

The Community App Catalog now uses Glare as its backend, making it possible to more easily store multiple application types.
Use the new v2 API to add and manage assets directly, rather than having to go through gerrit.
Add and manage applications via the Community App Catalog website.

Did we miss your favorite project or feature?  Let us know what new features you're excited about in the comments.
The post 53 new things to look for in OpenStack Newton appeared first on Mirantis | The Pure Play OpenStack Company.

Auto-remediation: making an OpenStack cloud self-healing

The bigger the OpenStack cloud you have, the bigger the operational challenges you will face. Things break: daemons die, logs fill up the disk, nodes have hardware issues, rabbitmq clusters fall apart, databases get a split brain due to network outages... All of these problems require engineering time to create outage tickets, troubleshoot, and fix the problem, not to mention writing the RCA and a runbook on how to fix the same problem in the future.
Some of the outages will never happen again if you make the proper long-term fix to the environment, but others will rear their heads again and again. Finding an automated way to handle those issues, either by preventing or fixing them, is crucial if you want to keep your environment stable and reliable.
That's where auto-remediation kicks in.
What is Auto-Remediation?
Auto-Remediation, or Self-Healing, is when automation responds to alerts or events by executing actions that can prevent or fix the problem.
The simplest example of auto-remediation is cleaning up the log files of a service that has filled up the available disk space. (It happens to everybody. Admit it.) Imagine an automated action that is triggered by a monitoring system to clean the logs and prevent the service from crashing. In addition, it creates a ticket and sends a notification so the engineer can fix log rotation during business hours, and there is no need to do it in the middle of the night. Furthermore, the event-driven automation can be used for assisted troubleshooting, so when you get an alert it includes related logs, monitoring metrics/graphs, and so on.
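To make that concrete, here is a minimal sketch in Python of the kind of action a monitoring alert might trigger; the log directory, retention count, and threshold are hypothetical, and the ticket/notification step is only stubbed out:

import glob
import os
import shutil

LOG_DIR = "/var/log/myservice"   # hypothetical service log directory
USAGE_THRESHOLD = 0.90           # remediate when the filesystem is 90% full

def disk_usage_ratio(path):
    # Return used/total for the filesystem containing path.
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

def clean_old_logs(log_dir, keep=3):
    # Delete all but the newest `keep` rotated log files.
    rotated = sorted(glob.glob(os.path.join(log_dir, "*.log.*")),
                     key=os.path.getmtime, reverse=True)
    removed = rotated[keep:]
    for path in removed:
        os.remove(path)
    return removed

def remediate():
    if disk_usage_ratio(LOG_DIR) < USAGE_THRESHOLD:
        return
    removed = clean_old_logs(LOG_DIR)
    # A real workflow would also file a ticket and notify the on-call
    # engineer here, via your ticketing and chat systems' APIs.
    print("Removed %d rotated logs: %s" % (len(removed), removed))

if __name__ == "__main__":
    remediate()

An event-driven platform then wires a monitoring trigger to an action like this, so the cleanup runs automatically instead of waking someone up.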

This is what an incident resolution workflow should look like:

Auto-remediation tooling
Facebook, LinkedIn, Netflix, and other hyper-scale operators use event-driven automation and workflows, as described above. While looking for an open source solution, we found StackStorm, which was used by Netflix for the same purpose. Sometimes called IFTTT (If This, Then That) for ops, the StackStorm platform is built on the same principles as Facebook's famous FBAR (FaceBook AutoRemediation), with “infrastructure as code” and a scalable microservice architecture, and it's supported by a solid and responsive team. (They are now part of Brocade, but the project is accelerating.) StackStorm uses OpenStack Mistral as a workflow engine, and offers a rich set of sensors and actions that are easy to build and extend.
The auto-remediation approach can easily be applied when operating an OpenStack cloud in order to improve reliability. And it's a good thing, too, because OpenStack has many moving parts that can break. Event-driven automation can take care of a cloud while you sleep, handling not only basic operations such as restarting nova-api and cleaning ceilometer logs, but also complex actions such as rebuilding the rabbitmq cluster or fixing Galera replication.
Automation can also expedite incident resolution by “assisting” engineers with troubleshooting. For example, if monitoring detects that keystone has started to return 503 for every request, the on-call engineer can be provided with logs from every keystone node, plus memcached and DB state, even before opening a terminal.
In building our own self-healing OpenStack cloud, we started small. Our initial POC had just three simple automations: cleaning logs, restarting services, and cleaning rabbitmq queues. We placed them on our 1,000-node OpenStack cluster, and they have run there for three months, taking these three headaches off our operators. This experience showed us that we need to add more and more self-healing actions, so our on-call engineers can sleep better at night.
Here is the short list of issues that can be auto-remediated:

Dead process
Lack of free disk space
Overflowed rabbitmq queues
Corrupted rabbitmq mnesia
Broken database replication
Node hardware failures (e.g. triggering VM evacuation)
Capacity issue (by adding more hypervisors)

Where to see more
We'd love to give you a more detailed explanation of how we approached self-healing of an OpenStack cloud. If you're at the OpenStack Summit, we invite you to attend our talk on Thursday, October 27, at 9:00am in Room 112, or if you are in San Jose, CA, come to the Auto-Remediation meetup on October 20th and hear us share the story there. You can also meet with the StackStorm team and other operators who are making the vision of Self-Healing a reality.
The post Auto-remediation: making an OpenStack cloud self-healing appeared first on Mirantis | The Pure Play OpenStack Company.

Develop Cloud Applications for OpenStack on Murano, Day 4: The application, part 2: Creating the Murano App

So far in this series, we've explained what Murano is, created an OpenStack cluster with Murano, and built the main script that will install our application. Now it's time to actually package PloneServerApp up for Murano.
In this series, we're looking at a very basic example, and we'll tell you all you need to make it work, but there are some great tutorials and references that describe this process (and more) in detail.  You can find them in the official Murano documentation:

Murano package structure
Create Murano application step-by-step
Murano Programming Language Reference

So before we move on, let's just distill that down to the basics.
What we're ultimately trying to do
When we're all finished, what we want is basically a *.zip file structured in a way that Murano expects, with files that provide all of the information that it needs. There's nothing really magical about this process; it's just a matter of creating the various resources. In general, the structure of a Murano application looks something like this:
..
|_  Classes
|   |_  PloneServer.yaml
|
|_  Resources
|   |_  scripts
|       |_ runPloneDeploy.sh
|   |_  DeployPloneServer.template
|
|_  UI
|   |_  ui.yaml
|
|_  logo.png
|
|_  manifest.yaml
Obviously the filenames (and content!) will depend on your specific application, but you get the idea. (If you'd like to see the finished version of this application, you can get it from GitHub.)
When we've assembled all of these pieces, we'll zip them up and they'll be ready to import into Murano.
Let's take a look at the individual pieces.
The individual files in a Murano package
Each of the individual files we're working with is basically just a text file.
The Manifest file
The manifest.yaml file contains the main application’s information. For our PloneServerApp, that means the following:
1. #  Plone uses GPL version 2 as its license. As of summer 2009, there are
2. #  no active plans to upgrade to GPL version 3.
3. #  You may obtain a copy of the License at
4. #
5. #       http://www.gnu.org
6. #
7.
8. Format: 1.3
9. Type: Application
10. FullName: org.openstack.apps.plone.PloneServer
11. Name: Plone CMS
12. Description: |
13.  The Ultimate Open Source Enterprise CMS.
14.  The Plone CMS is one of the most secure
15.  website systems available. This installer
16.  lets you deploy Plone in standalone mode.
17.  Requires Ubuntu 14.04 image with
18.  preinstalled murano-agent.
19. Author: 'Evgeniy Mashkin'
20. Tags: [CMS, WCM]
21. Classes:
22.   org.openstack.apps.plone.PloneServer: PloneServer.yaml
Let’s start at Line 8:
Format: 1.3
The versioning of the manifest format is directly connected with YAQL and the version of Murano itself. See the short description of format versions and choose the format version according to the OpenStack release you're going to develop your application for. In our case, we're using Mirantis OpenStack 9.0, which is built on the Mitaka OpenStack release, so I chose the 1.3 version that corresponds to Mitaka.
Now let’s move to Line 10:
FullName: org.openstack.apps.plone.PloneServer
Here you're adding a fully qualified name for your application, including the namespace of your choice.
IMPORTANT: Don't use the io.murano namespace for your apps; it's being used for the Murano Core Library.
Lines 11 through 20 show the Name, Description, Author and Tags, which will be shown in the UI:
Name: Plone CMS

Description: |
 The Ultimate Open Source Enterprise CMS.
 The Plone CMS is one of the most secure
 website systems available. This installer
 lets you deploy Plone in standalone mode.
 Requires Ubuntu 14.04 image with
 preinstalled murano-agent.
Author: 'Evgeniy Mashkin'
Tags: [CMS, WCM]
Finally, on lines 21 and 22, you'll point to your application class file (which we'll build later). This file should be in the Classes directory of the package.
Classes:
  org.openstack.apps.plone.PloneServer: PloneServer.yaml
Make sure to double check all of your references, filenames, and whitespace, as mistakes here can cause errors when you upload your application package to Murano.
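One cheap way to catch such mistakes before uploading is to confirm that each YAML file at least parses. Here is a minimal sketch that assumes PyYAML is installed and is run from the root of the package directory; it checks syntax only, not MuranoPL semantics:

import sys
import yaml

# Paths assume the package layout shown at the top of this article.
for path in ("manifest.yaml", "Classes/PloneServer.yaml", "UI/ui.yaml"):
    try:
        with open(path) as f:
            yaml.safe_load(f)
        print("%s: OK" % path)
    except (IOError, yaml.YAMLError) as exc:
        print("%s: %s" % (path, exc))
        sys.exit(1)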
Execution Plan Template
The execution plan template, DeployPloneServer.template, describes the installation process of the Plone Server on a virtual machine and contains instructions to the murano-agent on what should be executed to deploy the application. Essentially, it tells Murano how to handle the runPloneDeploy.sh script we created yesterday.
Here's the DeployPloneServer.template listing for our PloneServerApp:
1. #  Plone uses GPL version 2 as its license. As of summer 2009, there are
2. #  no active plans to upgrade to GPL version 3.
3. #  You may obtain a copy of the License at
4. #
5. #       http://www.gnu.org
6. #
7. FormatVersion: 2.0.0
8. Version: 1.0.0
9. Name: Deploy Plone
10. Parameters:
11.  pathname: $pathname
12.  password: $password
13.  port: $port
14. Body: |
15.  return ploneDeploy('{0} {1} {2}'.format(args.pathname, args.password, args.port)).stdout
16. Scripts:
17.  ploneDeploy:
18.    Type: Application
19.    Version: 1.0.0
20.    EntryPoint: runPloneDeploy.sh
21.    Files: []
22.    Options:
23.      captureStdout: true
24.      captureStderr: true
Starting with the Parameters section (lines 10 through 13), you can see that we're defining our parameters: the installation path, administrative password, and TCP port. Just as we added them on the command line yesterday, we need to tell Murano to ask the user for them.
Parameters:
 pathname: $pathname
 password: $password
 port: $port
In the Body section we have a string that describes the Python statement to execute, and how it will be executed by the Murano agent on the virtual machine:
Body: |
return ploneDeploy('{0} {1} {2}'.format(args.pathname, args.password, args.port)).stdout
Scripts defined in the Scripts section are invoked from here, so we need to keep the order of arguments consistent with the runPloneDeploy.sh script that we developed yesterday.
Also, double check all filenames, whitespace, and brackets. Mistakes here can cause the Murano agent to fail when it tries to run our installation script. If an error does occur, connect to the spawned VM via SSH and check the runPloneDeploy.log file we added for just this purpose.
Dynamic UI form definition
In order for the user to be able to set parameters such as the administrative password, we need to make sure that the user interface is set up correctly. We do this with the ui.yaml file, which describes the UI forms that will be shown to users so they can set the available installation options. The ui.yaml file for our PloneServerApp reads as follows:
1. #  Plone uses GPL version 2 as its license. As of summer 2009, there are
2. #  no active plans to upgrade to GPL version 3.
3. #  You may obtain a copy of the License at
4. #
5. #       http://www.gnu.org
6. #
7. Version: 2.3
8. Application:
9.  ?:
10.    type: org.openstack.apps.plone.PloneServer
11.  pathname: $.appConfiguration.pathname
12.  password: $.appConfiguration.password
13.  port: $.appConfiguration.port
14.  instance:
15.    ?:
16.      type: io.murano.resources.LinuxMuranoInstance
17.    name: generateHostname($.instanceConfiguration.unitNamingPattern, 1)
18.    flavor: $.instanceConfiguration.flavor
19.    image: $.instanceConfiguration.osImage
20.    keyname: $.instanceConfiguration.keyPair
21.    availabilityZone: $.instanceConfiguration.availabilityZone
22.    assignFloatingIp: $.appConfiguration.assignFloatingIP
23. Forms:
24.  - appConfiguration:
25.      fields:
26.        - name: license
27.          type: string
28.          description: GPL License, Version 2
29.          hidden: true
30.          required: false
31.        - name: pathname
32.          type: string
33.          label: Installation pathname
34.          required: false
35.          initial: '/opt/plone/'
36.          description: >-
37.            Use to specify the top-level path for installation.
38.        - name: password
39.          type: string
40.          label: Admin password
41.          required: false
42.          initial: 'admin'
43.          description: >-
44.            Enter administrative password for Plone.
45.        - name: port
46.          type: string
47.          label: Port
48.          required: false
49.          initial: '8080'
50.          description: >-
51.            Specify the port that Plone will listen to
52.            on available network interfaces.
53.        - name: assignFloatingIP
54.          type: boolean
55.          label: Assign Floating IP
56.          description: >-
57.             Select to true to assign floating IP automatically.
58.          initial: false
59.          required: false
60.        - name: dcInstances
61.          type: integer
62.          hidden: true
63.          initial: 1
64.  - instanceConfiguration:
65.      fields:
66.        - name: title
67.          type: string
68.          required: false
69.          hidden: true
70.          description: Specify some instance parameters on which the application would be created
71.        - name: flavor
72.          type: flavor
73.          label: Instance flavor
74.          description: >-
75.            Select registered in Openstack flavor. Consider that
76.            application performance depends on this parameter
77.          requirements:
78.            min_vcpus: 1
79.            min_memory_mb: 256
80.          required: false
81.        - name: minrequirements
82.          type: string
83.          label: Minumum requirements
84.          description: |
85.            - Minimum 256 MB RAM and 512 MB of swap space per Plone site
86.            - Minimum 512 MB hard disk space
87.          hidden: true
88.          required: false
89.        - name: recrequirements
90.          type: string
91.          label: Recommended
92.          description: |
93.            - 2 GB or more RAM per Plone site
94.            - 40 GB or more hard disk space
95.          hidden: true
96.          required: false
97.        - name: osImage
98.          type: image
99.          imageType: linux
100.          label: Instance image
101.          description: >-
102.            Select a valid image for the application. The image
103.            should already be prepared and registered in Glance
104.        - name: keyPair
105.          type: keypair
106.          label: Key Pair
107.          description: >-
108.            Select the Key Pair to control access to instances. You can login to
109.            instances using this KeyPair after the deployment of application.
110.          required: false
111.        - name: availabilityZone
112.          type: azone
113.          label: Availability zone
114.          description: Select availability zone where the application would be installed.
115.          required: false
116.        - name: unitNamingPattern
117.          type: string
118.          label: Instance Naming Pattern
119.          required: false
120.          maxLength: 64
121.          regexpValidator: '^[a-zA-Z][-_\w]*$'
122.          errorMessages:
123.            invalid: Just letters, numbers, underscores and hyphens are allowed.
124.          helpText: Just letters, numbers, underscores and hyphens are allowed.
125.          description: >-
126.            Specify a string, that will be used in instance hostname.
127.            Just A-Z, a-z, 0-9, dash and underline are allowed.
This is a pretty long file, but it's not as complicated as it looks.
Starting at line 7:
Version: 2.3
The format version for the UI definition is optional and its default value is the latest supported version. If you want to use your application with one of the previous versions you may need to set the version field explicitly.
Moving down the file, we basically have two UI forms: appConfiguration and instanceConfiguration.
Each form contains a list of parameters that will be present on it. We place all of the parameters related to our Plone Server application on the appConfiguration form, including the path, password and TCP port. These will then be sent to the Murano agent to invoke the runPloneDeploy.sh script:
        - name: pathname
         type: string
         label: Installation pathname
         required: false
          initial: '/opt/plone/'
         description: >-
           Use to specify the top-level path for installation.
        - name: password
         type: string
         label: Admin password
         required: false
          initial: 'admin'
         description: >-
           Enter administrative password for Plone.
        - name: port
         type: string
         label: Port
         required: false
          initial: '8080'
         description: >-
           Specify the port that Plone will listen to
           on available network interfaces.
For each parameter we also set initial values that will be used as defaults.
On the instanceConfiguration form, we’ll place all of the parameters related to instances that will be spawned during deployment. We need to set hardware limitations, such as minimum hardware requirements, in the requirements section:
        - name: flavor
         type: flavor
         label: Instance flavor
         description: >-
           Select registered in Openstack flavor. Consider that
           application performance depends on this parameter
         requirements:
           min_vcpus: 1
           min_memory_mb: 256
         required: false
Also, we need to add notices for users about minimum and recommended Plone hardware requirements on the UI form:
        - name: minrequirements
         type: string
         label: Minumum requirements
         description: |
            - Minimum 256 MB RAM and 512 MB of swap space per Plone site
            - Minimum 512 MB hard disk space
         hidden: true
         required: false
        - name: recrequirements
         type: string
         label: Recommended
         description: |
            - 2 GB or more RAM per Plone site
            - 40 GB or more hard disk space
Murano PL Class Definition
Perhaps the most complicated part of the application is the class definition. Contained in PloneServer.yaml, it describes the methods that Murano must be able to execute in order to manage the application. In this case, the application class looks like this:
1. #  Plone uses GPL version 2 as its license. As of summer 2009, there are
2. #  no active plans to upgrade to GPL version 3.
3. #  You may obtain a copy of the License at
4. #
5. #       http://www.gnu.org
6. #
7. Namespaces:
8.  =: org.openstack.apps.plone
9.  std: io.murano
10.  res: io.murano.resources
11.  sys: io.murano.system
12. Name: PloneServer
13. Extends: std:Application
14. Properties:
15.  instance:
16.    Contract: $.class(res:Instance).notNull()
17.  pathname:
18.    Contract: $.string()
19.  password:
20.    Contract: $.string()
21.  port:
22.    Contract: $.string()
23. Methods:
24.  .init:
25.    Body:
26.      - $._environment: $.find(std:Environment).require()
27.  deploy:
28.    Body:
29.      - If: not $.getAttr(deployed, false)
30.        Then:
31.          - $._environment.reporter.report($this, 'Creating VM for Plone Server.')
32.          - $securityGroupIngress:
33.            - ToPort: 80
34.              FromPort: 80
35.              IpProtocol: tcp
36.              External: true
37.            - ToPort: 443
38.              FromPort: 443
39.              IpProtocol: tcp
40.              External: true
41.            - ToPort: $.port
42.              FromPort: $.port
43.              IpProtocol: tcp
44.              External: true
45.          - $._environment.securityGroupManager.addGroupIngress($securityGroupIngress)
46.          - $.instance.deploy()
47.          - $resources: new(sys:Resources)
48.          - $template: $resources.yaml('DeployPloneServer.template').bind(dict(
49.                pathname => $.pathname,
50.                password => $.password,
51.                port => $.port
52.              ))
53.          - $._environment.reporter.report($this, 'Instance is created. Deploying Plone')
54.          - $.instance.agent.call($template, $resources)
55.          - $._environment.reporter.report($this, 'Plone Server is installed.')
56.          - If: $.instance.assignFloatingIp
57.            Then:
58.              - $host: $.instance.floatingIpAddress
59.            Else:
60.              - $host: $.instance.ipAddresses.first()
61.          - $._environment.reporter.report($this, format('Plone Server is available at http://{0}:{1}', $host, $.port))
62.          - $.setAttr(deployed, true)
First we set the namespaces and class name, then define the parameters we'll be using later. We can then move into methods.
Besides the standard .init method, our PloneServer class has one main method, deploy, which handles instance spawning and configuration. The deploy method performs the following tasks:

It configures a security group and opens TCP ports 80 and 443, as well as our custom TCP port (as determined by the user):
          - $securityGroupIngress:
            - ToPort: 80
             FromPort: 80
             IpProtocol: tcp
             External: true
            - ToPort: 443
             FromPort: 443
             IpProtocol: tcp
             External: true
            - ToPort: $.port
             FromPort: $.port
             IpProtocol: tcp
             External: true
          - $._environment.securityGroupManager.addGroupIngress($securityGroupIngress)

It initiates the spawning of a new virtual machine:
          - $.instance.deploy()

It creates a Resources object, then loads the execution plan template (in the Resources directory) into it, updating the plan with parameters taken from the user:
          - $resources: new(sys:Resources)
          - $template: $resources.yaml('DeployPloneServer.template').bind(dict(
               pathname => $.pathname,
               password => $.password,
               port => $.port
             ))

It sends the ready-to-execute plan to the murano agent:
          - $.instance.agent.call($template, $resources)

Lastly, it assigns a floating IP to the newly spawned machine, if that option was chosen:
          - If: $.instance.assignFloatingIp
           Then:
              - $host: $.instance.floatingIpAddress
           Else:
              - $host: $.instance.ipAddresses.first()

Before we move on, just a few words about floating IPs. I will provide you with the key points from Piotr Siwczak's article “Configuring Floating IP addresses for Networking in OpenStack Public and Private Clouds”:
“The floating IP mechanism, besides exposing instances directly to the Internet, gives cloud users some flexibility. Having “grabbed” a floating IP from a pool, they can shuffle them (i.e., detach and attach them to different instances on the fly) thus facilitating new code releases and system upgrades. For sysadmins it poses a potential security risk, as the underlying mechanism (iptables) functions in a complicated way and lacks proper monitoring from the OpenStack side.”
Be aware that OpenStack is rapidly changing and some of the article's statements may become obsolete, but the point is that there are both advantages and disadvantages to using floating IPs.
Image File
In order to use OpenStack, you generally need an image to serve as the template for the VMs you spawn. In some cases, those images will already be part of your cloud, but if not, you can specify them in the images.lst file. When you mention an image in this file and put it in your package, the image will be uploaded to your cloud automatically. When importing images from the images.lst file, the client simply searches for a file with the same name as the name attribute of the image in the images directory of the package.
An image file is optional, but to make sure your Murano app works, you need to point it at an image with a pre-installed Murano agent. In our case it is Ubuntu 14.04 with a preinstalled Murano agent:
Images:
- Name: 'ubuntu-14.04-m-agent.qcow2'
  Hash: '393d4f2a7446ab9804fc96f98b3c9ba1'
  Meta:
    title: 'Ubuntu 14.04 x64 (pre-installed murano-agent)'
    type: 'linux'
  DiskFormat: qcow2
  ContainerFormat: bare
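The 32-character Hash value has the length of an MD5 digest; assuming that is what it represents, you can compute the corresponding value for your own image with a short script like this (the filename is just the one used above):

import hashlib

def file_md5(path, chunk_size=1024 * 1024):
    # Compute the MD5 digest of a potentially large file in chunks.
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(file_md5("ubuntu-14.04-m-agent.qcow2"))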
Application Logo
The logo.png file is a preview image that will be visible to users in the application catalog. Having a logo file is optional, but for now, let’s choose this one:

Create a Package
Finally, now that all the files are ready, we can go to our package directory (where the manifest.yaml file is placed) and create a .zip package:
$ zip -r org.openstack.apps.plone.PloneServer.zip *
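A common packaging mistake is zipping the parent directory instead of its contents, which buries manifest.yaml one level deep and makes the import fail. Here is a quick sanity check, a sketch using only Python's standard library:

import zipfile

with zipfile.ZipFile("org.openstack.apps.plone.PloneServer.zip") as pkg:
    names = pkg.namelist()

# manifest.yaml must sit at the root of the archive, not inside a subfolder.
assert "manifest.yaml" in names, "manifest.yaml is not at the package root"
print("Package contents:")
for name in sorted(names):
    print(" ", name)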
Tomorrow we'll wrap up by showing you how to add your new package to the Murano application catalog.
The post Develop Cloud Applications for OpenStack on Murano, Day 4: The application, part 2: Creating the Murano App appeared first on Mirantis | The Pure Play OpenStack Company.

Introducing InfraKit, an open source toolkit for creating and managing declarative, self-healing infrastructure

Written by Bill Farner and David Chung
Docker’s mission is to build tools of mass innovation, starting with a programmable layer for the Internet that enables developers and IT operations teams to build and run distributed applications. As part of this mission, we have always endeavored to contribute software plumbing toolkits back to the community, following the UNIX philosophy of building small loosely coupled tools that are created to simply do one thing well. As Docker adoption has grown from 0 to 6 billion pulls, we have worked to address the needs of a growing and diverse set of distributed systems users. This work has led to the creation of many infrastructure plumbing components that have been contributed back to the community.

It started in 2014 with libcontainer and libnetwork. In 2015 we created runC and co-founded OCI with an industry-wide set of partners to provide a standard for container runtimes and a reference implementation based on libcontainer; we also released Notary, which provides the basis for Docker Content Trust. From there we added containerd, a daemon to control runC, built for performance and density. Docker Engine was refactored so that Docker 1.11 is built on top of containerd and runC, providing benefits such as the ability to upgrade Docker Engine without restarting containers. In May 2016 at OSCON, we open sourced HyperKit, VPNKit and DataKit, the underlying components that enable us to deeply integrate Docker for Mac and Windows with the native Operating System. Most recently, in June, we unveiled SwarmKit, a toolkit for scheduling tasks and the basis for swarm mode, the built-in orchestration feature in Docker 1.12.
With SwarmKit, Docker introduced a declarative management toolkit for orchestrating containers. Today, we are doing the same for infrastructure. We are excited to announce InfraKit, a declarative management toolkit for orchestrating infrastructure. Solomon Hykes  open sourced it today during his keynote address at  LinuxCon Europe. You can find the source code at https://github.com/docker/infrakit
 
InfraKit Origins
Back in June at DockerCon, we introduced Docker for AWS and Azure beta to simplify the IT operations experience in setting up Docker and to optimally leverage the native capabilities of the respective cloud environment. To do this, Docker provided deep integrations into these platforms’ capabilities for storage, networking and load balancing.
In the diagram below, the architecture for these versions includes platform-specific network and storage plugins, but also a new component specific to infrastructure management.
While working on Docker for AWS and Azure, we realized the need for a standard way to create and manage infrastructure state that was portable across any type of infrastructure, from different cloud providers to on-prem. One challenge is that each vendor has differentiated IP invested in how they handle certain aspects of their cloud infrastructure. It is not enough to just provision five servers; what IT ops teams need is a simple and consistent way to declare the number of servers, what size they should be, and what sort of base software configuration is required. And in the case of server failures (especially unplanned), that sudden change needs to be reconciled against the desired state to ensure that any required servers are re-provisioned with the necessary configuration. We started InfraKit to solve these problems and to provide the ability to create a self-healing infrastructure for distributed systems.
 
InfraKit Internals
InfraKit breaks infrastructure automation down into simple, pluggable components for declarative infrastructure state, active monitoring and automatic reconciliation of that state. These components work together to actively ensure the infrastructure state matches the user's specifications. InfraKit emphasizes primitives for building self-healing infrastructure but can also be used passively like conventional tools.
InfraKit at the core consists of a set of collaborating, active processes. These components are called plugins and different plugins can be written to meet different needs. These plugins are active controllers that can look at current infrastructure state and take action when the state diverges from user specification.
Initially, these plugins are implemented as servers listening on unix sockets and communicate using HTTP. By nature, the plugin interface definitions are language agnostic so it's possible to implement a plugin in a language other than Go. Plugins can be packaged and deployed differently, such as with Docker containers.
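To make the transport model concrete, here is a tiny, purely illustrative sketch of a server answering HTTP requests on a unix socket (shown in Python for brevity). It does not implement InfraKit's actual plugin interface, whose method names and JSON schema are defined in the InfraKit repository, and the socket path is arbitrary:

import os
import socketserver
from http.server import BaseHTTPRequestHandler

SOCKET_PATH = "/tmp/example-plugin.sock"  # arbitrary path for this sketch

class PluginHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A real plugin would dispatch on the request path and verb here.
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b'{"status": "ok"}')

    def address_string(self):
        # Unix-socket clients have no host:port pair to report.
        return "unix-socket-client"

if __name__ == "__main__":
    if os.path.exists(SOCKET_PATH):
        os.unlink(SOCKET_PATH)
    with socketserver.UnixStreamServer(SOCKET_PATH, PluginHandler) as server:
        server.serve_forever()

You can poke such a server with curl --unix-socket /tmp/example-plugin.sock http://localhost/ to see the response.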
Plugins are the active components that provide the behavior for the primitives that InfraKit supports: groups, instances, and flavors.
Groups
When managing infrastructure like computing clusters, Groups make a good abstraction, and working with groups is easier than managing individual instances. For example, a group can be made up of a collection of machines as individual instances. The machines in a group can have identical configurations (replicas, or so-called “cattle”). They can also have slightly different configurations and properties like identity, ordering, and persistent storage (as members of a quorum, or so-called “pets”).
Instances
Instances are members of a group. An instance plugin manages some physical resource instances. It knows only about individual instances and nothing about Groups. An instance is technically defined by the plugin, and need not be a physical machine at all. As part of the toolkit, we have included examples of instance plugins for Vagrant and Terraform. These examples show that it's easy to develop plugins. They are also examples of how InfraKit can play well with existing system management tools while extending their capabilities with active management. We envision more plugins in the future, for example plugins for AWS and Azure.
Flavors
Flavors help distinguish members of one group from another by describing how these members should be treated. A flavor plugin can be thought of as defining what runs on an Instance. It is responsible for configuring the physical instance and for providing health checks in an application-aware way. It is also what gives the member instances properties like identity and ordering when they require special handling. Examples of flavor plugins include plain servers, Zookeeper ensemble members, and Docker swarm mode managers.
By separating provisioning of physical instances and configuration of applications into Instance and Flavor plugins, application vendors can directly develop a Flavor plugin, for example, MySQL, that can work with a wide array of instance plugins.
Active Monitoring and Automatic Reconciliation
The active self-healing aspect of InfraKit sets it apart from existing infrastructure management solutions, and we hope it will help our industry build more resilient and self-healing systems. The InfraKit plugins themselves continuously monitor at the group, instance and flavor level for any drift in configuration and automatically correct it without any manual intervention.

The group plugin checks the size and overall health of the group and decides on strategies for updating.
The instance plugin monitors for the physical presence of resources.
The flavor plugin can make additional determinations beyond the presence of the resource. For example, the swarm mode flavor plugin would check not only that a swarm member node is up, but also that the node is a member of the cluster.  This provides an application-specific meaning to a node’s “health.”

This active monitoring and automatic reconciliation brings a new level of reliability for distributed systems.
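The reconciliation behavior boils down to a control loop: observe the actual state, compare it with the declared state, and act on the difference. The runnable Go sketch below shows the pattern in miniature with a faked instance count; it is a simplification of the idea, not code from InfraKit.

```go
package main

import (
	"fmt"
	"time"
)

// group is the declared state: the number of instances the group should have.
type group struct {
	name string
	size int
}

// running simulates the observed state; in a real controller this would come
// from an instance plugin's describe call.
var running = 2

func describeRunning() int { return running }
func provisionOne()        { running++ }
func destroyOne()          { running-- }

// reconcile is one pass of the control loop: observe, compare with the
// declared size, and converge toward it.
func reconcile(g group) {
	observed := describeRunning()
	switch {
	case observed < g.size:
		for observed < g.size {
			provisionOne()
			observed++
		}
		fmt.Printf("%s: scaled up to %d\n", g.name, observed)
	case observed > g.size:
		for observed > g.size {
			destroyOne()
			observed--
		}
		fmt.Printf("%s: scaled down to %d\n", g.name, observed)
	default:
		fmt.Printf("%s: steady at %d\n", g.name, observed)
	}
}

func main() {
	cattle := group{name: "cattle", size: 5}
	// A real controller would loop forever; a few passes show convergence here.
	for i := 0; i < 3; i++ {
		reconcile(cattle)
		time.Sleep(100 * time.Millisecond)
	}
}
```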
The diagram below shows an example of how InfraKit can be used. There are three groups defined: one for a set of stateless cattle instances, one for a set of stateful and uniquely named pet instances, and one for the InfraKit manager instances themselves. Each group is monitored for its declared infrastructure state and reconciled independently of the other groups.  For example, if one of the nodes (blue and yellow) in the cattle group goes down, a new one will be started to maintain the desired size.  When the leader host (M2) running InfraKit goes down, a new leader will be elected (from the standby M1 and M3). This new leader will go into action by starting up a new member to join the quorum, ensuring the availability and desired size of the group.

InfraKit, Docker and Community
InfraKit was born out of our engineering efforts around Docker for AWS and Azure, and future versions will see further integration of InfraKit into Docker and those environments, continuing the path of building Docker from a set of reusable components.
As the diagram below shows, Docker Engine is already made up of a number of the infrastructure plumbing components mentioned earlier.  These components are not only available separately to the community, but also integrated together as the Docker Engine.  In a future release, InfraKit will also become part of the Docker Engine.
With community participation, we aim to evolve InfraKit into exciting new areas beyond managing nodes in a cluster.  There’s much work ahead of us to build this into a cohesive framework for managing infrastructure resources, physical, virtual or containerized, from cluster nodes to networks to load balancers and storage volumes.
We are excited to open source InfraKit and invite the community to participate in this project:

Help define and implement new and interesting plugins
Instance plugins to support different infrastructure providers
Flavor plugins to support a variety of systems like etcd or MySQL clusters
Group controller plugins like metrics-driven auto scaling and more
Help define interfaces and implement new infrastructure resource types for things like load balancers, networks and storage volume provisioners

Check out the InfraKit repository README for more info, a quick tutorial, and to start experimenting, from plain files to Terraform integration to building a Zookeeper ensemble.  Have a look, explore, and send us a PR or open an issue with your ideas!


More Resources:

Check out all the Infrastructure Plumbing projects
Sign up for Docker for AWS or Docker for Azure
Try Docker today

The post Introducing InfraKit, an open source toolkit for creating and managing declarative, self-healing infrastructure appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Develop Cloud Applications for OpenStack on Murano, Part 1: What is Murano, and why do I need it?

The post Develop Cloud Applications for OpenStack on Murano, Part 1: What is Murano, and why do I need it? appeared first on Mirantis | The Pure Play OpenStack Company.
So many apps, so little time.
Developing applications for the cloud can be a complicated process; you need to think about resources, placement, scheduling, creating virtual machines, networking... or do you?  The OpenStack Murano project makes it possible for you to create an application without having to worry about directly doing any of that.  Instead, you can create your application, package it with instructions, and let Murano do the rest.
In other words, Murano lets you distribute your applications much more easily; users just have to click a few buttons to use them.
Every day this week we’re going to look at the process of creating OpenStack Murano apps so that you can make your life easier, and get your work out there for people to use without having to beg an administrator to install it for them.
We’ll cover the following topics:

Day 1: What is Murano, and why do I need it?
In this article, we’ll talk about what Murano is, who it helps, and how. We’ll also start with the basic concepts you need to understand and let you know what you’ll need for the rest of the series.
Day 2:  Creating the development environment
In this article, we’ll look at deploying an OpenStack cluster with Murano so that you’ve got the framework to work with.
Day 3:  The application, part 1:  Understanding Plone deployment
In our example, we’ll show you how to use Murano to easily deploy the Plone enterprise CMS; in this article, we’ll go over what Murano will actually have to do to install it.
Day 4:  The application, part 2:  Creating the Murano App
Next we’ll go ahead and create the actual Murano App that will deploy Plone.
Day 5:  Uploading and troubleshooting the app
Now that we’ve created the Plone Murano App, we’ll go ahead and add it to the application catalog so that users can deploy it. We’ll also look at some common issues and how to solve them.

Interested in seeing more? We’ll be showing you how to automate Plone deployments for OpenStack at the Plone Conference in Boston, October 17-23, 2016.
Before you start
Before you get started, let’s make sure you’re ready to go.
What you should know
Before we start, let’s get the lay of the land. There’s really not that much you need to know before building a Murano app, but it helps if you are familiar with the following concepts:

Virtualization: Wikipedia says that “Hardware virtualization or platform virtualization refers to the creation of a virtual machine that acts like a real computer with an operating system.” Perhaps that’s an oversimplification, but it’ll work for us here. For this series, it helps to have an understanding of virtualization fundamentals, as well as experience in the creation, configuration and deployment of virtual machines, and the creation and restoration of VM snapshots.
OpenStack: OpenStack is, of course, a platform that helps to orchestrate and manage these virtual resources for you; Murano is a project that runs on OpenStack.
UNIX-like OS fundamentals: It also helps to understand the command line, basic commands, and the structure of Unix-like systems. If you are not familiar with the UNIX command line, you might want to study this Linux shell tutorial first.
SSH: It helps to know how to generate and manage multiple SSH keys, and how to connect to a remote host via SSH using SSH keys.
Networks: Finally, although you don’t need to be a networking expert, it is useful if you are familiar with these concepts: IP, CIDR, Port, VPN, DNS, DHCP, and NAT.

If you are not familiar with these concepts, don’t worry; you will be able to learn more about them as we move forward.
What you should have
In order to run the software we’ll be talking about, your environment must meet certain prerequisites. You’ll need a 64-bit host operating system with:

At least 8 GB RAM
300 GB of free disk space. You don’t actually need 300 GB of real free disk space, because storage is allocated on demand; if you are going to deploy a lightweight application, even 128 GB may be enough. It depends on your application’s requirements. In the case of Plone, the recommendation is 40 MB per deployed site.
Virtualization enabled in BIOS
Internet access

What is OpenStack Murano?
Imagine you’re a cloud user. You just want to get things done. You don’t care about all of the details, you just want the functionality that you need.
Murano is an OpenStack project that provides an application catalog, like the App Store for iOS or Google Play for Android. Murano lets you easily browse for the cloud applications you need by name or category, and then enables you to rapidly deploy them to the cloud with just a few clicks.
For example, if you want a web server, rather than having to create a VM, find the software, deploy it, manage IP addresses and ports, and so on, Murano enables you to simply choose a web server application, name it, and go; Murano does the rest of the work.
Murano also makes it possible to easily deploy applications with multiple components.  For example, what if you didn’t just want a web server, but a WordPress application, which includes a web server, database, and web application? A pre-existing WordPress Murano app would make it possible for you to simply choose the app, specify a few parameters, and go.  (In fact, later in this series we’ll look at creating an app for an even more complex CMS, Plone.)
Because it’s so straightforward to deploy the applications, users can do it themselves, rather than relying on administrators.
Moreover, not only does Murano let users and administrators easily deploy complex cloud applications, it also manages the full application lifecycle, including automatically scaling clusters up and down, self-healing, and more.
Murano’s main end users are:

Independent cloud users, who can use Murano to easily find and deploy applications themselves.
Cloud Service Owners, who can use Murano to save time when deploying and configuring applications to multiple instances or when deploying complex distributed applications with many dependent applications and services.
Developers, who can use Murano to easily deploy and redeploy on-demand applications, often without involving cloud administrators, for their own purposes (for example, hosting a website, or developing and testing applications). They can also use Murano to make their applications available to other end users.

In short, Murano turns application deployment and management into a simple process that can be performed by administrators and users of all levels. It does this by encapsulating all of the deployment logic and dependencies for the application into a Murano App, which is a single zip file with a specific structure. You just need to upload it to your cloud and it’s ready.
Why should I create a Murano app?
OK, so now that we know what a Murano app is, why should we create one?  Well, ask yourself these questions:

Do I want to spend less time deploying my applications?
Do I want my users to spend less time (and aggravation) deploying my applications?
Do I want my employees to spend more time actually getting work done and less time struggling with software deployment?

(Do you notice a theme here?)
There are also reasons for creating Murano Apps that aren’t necessarily related to saving time or being more efficient:

You can make it easier for users to find your application by publishing it to the OpenStack Community Application Catalog, which provides access to a whole ecosystem of people across fast-growing OpenStack markets around the world. (Take a look at how big that ecosystem is by exploring OpenStack user stories.)
You can develop your app as a robust and reusable solution in your private OpenStack cloud to avoid error-prone manual work.

All you need to do to make these things possible is to develop a Murano App for your own application.
Where we go from here
OK, so now we know what a Murano App is, and why you’d want to create one. Join us tomorrow to find out how to create the OpenStack and developer environment you’ll need to make it work.
And let us know in the comments what you’d like to see out of this series!
 
The post Develop Cloud Applications for OpenStack on Murano, Part 1: What is Murano, and why do I need it? appeared first on Mirantis | The Pure Play OpenStack Company.
Quelle: Mirantis

What’s the big deal about running OpenStack in containers?

The post What’s the big deal about running OpenStack in containers? appeared first on Mirantis | The Pure Play OpenStack Company.
Ever since containers began their meteoric rise in the technical consciousness, people have been wondering what it would mean for OpenStack. Some of the predictions were dire (that OpenStack would cease to be relevant), some were more practical (that containers are not mini VMs, and anyway, they need resources to run on, and OpenStack still existed to manage those resources).
But there were a few people who realized that there was yet another possibility: that containers could actually save OpenStack.
Look, it’s no secret that deploying and managing OpenStack is difficult at best, and frustratingly impossible at worst. So what if I told you that using Kubernetes and containers could make it easy?
Mirantis has been experimenting with container-based OpenStack for the past several years, since before it was “cool”, and lately we’ve decided on an architecture that will enable us to take advantage of the management capabilities and scalability that come with the Kubernetes container orchestration engine.  (You might have seen the news that we’ve also acquired TCP Cloud, which will help us jump our R&D forward about 9 months.)
Specifically, using Kubernetes as an OpenStack underlay lets us turn a monolithic software package into discrete services with well-defined APIs that can be freely distributed, orchestrated, recovered, upgraded and replaced, often automatically, based on configured business logic.
That said, it’s more than just dropping OpenStack into containers, and talk is cheap. It’s one thing for me to say that Kubernetes makes it easy to deploy OpenStack services.  And frankly, almost anything would be easier than deploying, say, a new controller with today’s systems.
But what if I told you you could turn an empty bare metal node into an OpenStack controller just by adding a couple of tags to it?
Have a look at this video (you’ll have to drop your information in the form, but it just takes a second):
Containerizing the OpenStack Control Plane on Kubernetes: auto-scaling OpenStack services
I know, right? Are you as excited about this as I am?
The post What’s the big deal about running OpenStack in containers? appeared first on Mirantis | The Pure Play OpenStack Company.
Quelle: Mirantis

Microsoft server hosting on IBM Cloud

Did you know that tens of thousands of Microsoft workloads are running on IBM Cloud? Here are some of the reasons why organizations of all sizes are choosing cloud to support their Microsoft servers.
Why choose cloud
Businesses are looking for new ways to engage customers, drive digital transformation and make operations faster and more flexible. With cloud, it’s easier to design and implement these ideas to create competitive advantage. Choose from multiple models – public, private, and hybrid cloud – that deliver choice and flexibility as the competitive landscape changes and your business needs evolve.

Across public, private and hybrid cloud, IBM Cloud can provide seamless integration and support for the latest versions of applications such as Microsoft SQL Server 2016. The infrastructure is secure, scalable, and flexible, providing the solid foundation that has made IBM Cloud the hybrid cloud market leader.
Success factors

Configure the cloud your way – Can you trust the cloud with critical Microsoft workloads? One secure and widely used approach is to implement bare metal servers, creating a custom, dedicated cloud. With bare metal, the server is designed to your specifications. You select and approve what goes on it.
Stay in control – When your Microsoft workloads are running in the cloud, you want to manage them like an extension of your data center. Look for ways to use APIs and a single management system across workloads.
Go global – No matter the size of your business, you need to consider the flexibility of global data access and storage when choosing a cloud provider to support your growth plans.
Manage costs – Check to understand cost visibility across the software and server resources. Evaluate each element, from how Microsoft workloads are hosted to infrastructure. IBM Cloud offers clear, competitive pricing on hourly or monthly terms for cloud services and Microsoft software so you can easily meet all of your Microsoft Windows workload requirements.

Also, each time the software inside your core applications is headed for end of life, see if cloud can help you move to the newest version. For example, moving from an older version of SQL Server onto SQL Server 2016 may be faster using cloud hosting.
Get started
Stop by the IBM Booth at Microsoft Ignite 2016 in Atlanta, Georgia, September 26-30, to speak to advisors about Microsoft on IBM Cloud.
Learn more about IBM Cloud and Microsoft workload hosting.
The post Microsoft server hosting on IBM Cloud appeared first on news.
Quelle: Thoughts on Cloud