Tieto’s path to containerized OpenStack, or How I learned to stop worrying and love containers

Tieto is a major cloud service provider in Northern Europe, with over 150 cloud customers in the region and revenues in the neighborhood of €1.5 billion (with a “b”). So when the company decided to take the leap into OpenStack, it was a decision that wasn’t taken lightly, or made without very strict requirements.
Now, we’ve been talking a lot about containerized OpenStack here at Mirantis lately, and at the OpenStack Summit in Barcelona, our Director of Product Engineering will join Tieto’s Cloud Architect Lukáš Kubín to explain the company’s journey from a traditional architecture to a fully adaptable cloud infrastructure. So we wanted to take a moment and ask the question:
How does a company decide that containerized OpenStack is a good idea?
What Tieto wanted
At its heart, Tieto wanted to deliver a bimodal multicloud solution that would help customers digitize their businesses. In order to do that, it needed an infrastructure in which it could have confidence, and OpenStack was chosen as the platform for cloud-native application delivery. The company had the following goals:

Remove vendor lock-in
Achieve elasticity through seamless, on-demand capacity fulfillment
Rely on robust automation and orchestration
Adopt innovative open source solutions
Implement Infrastructure as Code

It was this last item, implementing Infrastructure as Code, that was perhaps the biggest challenge from an OpenStack standpoint.
Where we started
In fact, Tieto had been working with OpenStack since 2013, evaluating OpenStack Havana and Icehouse through internal software development projects; at that time, the target architecture included Neutron and Open vSwitch.
By 2015, the company was providing scale-up focused IaaS cloud offerings and unique application-focused PaaS services, but what was lacking was a shared platform with fully API-controlled infrastructure for horizontally scalable workloads.
Finally, this year, the company announced its OpenStack cloud offering, based on the OpenStack distribution of tcp cloud (now part of Mirantis), and on OpenContrail rather than Open vSwitch.
Why OpenContrail? The company cited several reasons:

Licensing: OpenContrail is an open source solution, but commercial support is available from vendors such as Mirantis.
High Availability: OpenContrail includes native HA support.
Cloud gateway routing: North-South traffic must be routed on physical edge routers instead of software gateways to work with existing solutions.
Performance: OpenContrail provides excellent performance in terms of pps, bandwidth, scalability, and so on (up to 9.6 Gbps).
Interconnection between SDN and fabric: OpenContrail supports dynamic connections to legacy environments through EVPN or ToR switches.
Containers: OpenContrail includes support for containers, making it possible to use one networking framework for multiple environments.

Once completed, the Tieto proof-of-concept cloud included:

OpenContrail 2.21
20 compute nodes
Glance and Cinder running on Ceph
Heat orchestration

Tieto had achieved Infrastructure as Code, in that deployment and operations were controlled through OpenStack-Salt formulas. This architecture enabled the company to use DevOps principles: declarative configurations that could be stored in a repository and re-used as necessary.
What’s more, the company had an architecture that worked, and that included commercial support for OpenContrail (through Mirantis).
But there was still something missing.
What was missing
With operations support and Infrastructure as Code, Tieto’s OpenStack cloud was already beyond what many deployments ever achieve, but it still wasn’t as straightforward as the company would have liked.
As designed, the OpenStack architecture consisted of almost two dozen VMs on at least 3 physical KVM nodes, and that was just the control plane!

As you might imagine, trying to keep all of those VMs up to date through operating system updates and other changes made operations more complex than they needed to be. Any time an update needed to be applied, it had to be applied to each and every VM. Sure, that process was easier because of the DevOps advantages introduced by the OpenStack-Salt formulas that were already in the repository, but that was still an awful lot of moving parts.
There had to be a better way.
How to meet that challenge
That “better way” involves treating OpenStack as a containerized application in order to take advantage of the efficiencies this architecture enables, including:

Easier operations, because each service no longer has its own VM, with its own operating system to worry about
Better reliability and easier manageability, because containers and Dockerfiles can be tested as part of a CI/CD workflow
Easier upgrades, because once OpenStack has been converted to a microservices architecture, it’s much easier to simply replace one service (see the sketch after this list)
Better performance and scalability, because the containerized OpenStack services can be orchestrated by a tool such as Kubernetes.

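To make that last point concrete, here’s a purely illustrative sketch of what a service upgrade looks like once a control plane component lives in a container rather than a VM; the registry, image name, and tag below are hypothetical placeholders, not Tieto’s or Mirantis’ actual artifacts:

# Purely illustrative: upgrading one containerized control plane service means
# swapping its image, not patching an entire VM's operating system.
docker pull registry.example.com/openstack/keystone:newton
docker stop keystone && docker rm keystone
docker run -d --name keystone --net host registry.example.com/openstack/keystone:newton
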
So that’s the “why”. But what about the “how”? Well, that’s a tale for another day, but if you’ll be in Barcelona, join us at 12:15pm on Wednesday to get the full story and maybe even see a demo of the new system in action!
Quelle: Mirantis

Total Cost of Ownership: AWS TCO vs OpenStack TCO Q&A

Last month, Amar Kapadia led a lively discussion about the Total Cost of Ownership of OpenStack clouds versus running infrastructure on Amazon Web Services.  Here are some of the questions we got from the audience, along with the answers.
Q: Which AWS cost model do you use? Reserved? As you go?
A: Both. We have a field that specifies what percentage of instances are reserved, and what discount you are getting on reserved instances. For the webinar, we assumed 30% reserved instances at a 32% discount. The rest are pay-as-you-go.
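As a rough sketch of how such a blended rate is computed (the $0.10/hour on-demand price below is a made-up placeholder, not a figure from the webinar):

# 30% of instances reserved at a 32% discount, the rest on demand:
ON_DEMAND=0.10
echo "scale=4; 0.30 * $ON_DEMAND * (1 - 0.32) + 0.70 * $ON_DEMAND" | bc
# => .0904, i.e. a blended rate of about $0.09/hour per instance
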
Q: How does this comparison look when considering VMware’s newly announced support for OpenStack? Is that OpenStack support with VMware only with regards to supporting OpenStack in a “Hybrid Cloud” model? Please touch on this additional comparison. Thanks.
A: In general, a VMware Integrated OpenStack (VIO) comparison will look very different (and show a much higher cost) because they support only vSphere.
Q: Can Opex be detailed per the needs of the customer? For example, if the customer doesn’t want an IT/Ops team and datacenter fees included because they would provide their own?
A: Yes, please contact us if you would like to customize the calculator for your needs.
Q: Do you have any data on how Opex changes with the scale of the system?
A: It scales linearly. Most of the Opex costs are variable costs that grow with scale.
Q: What parameters were defined for this comparison, and were the results validated by any third party, or based just on user/organisation experience?
A: Parameters are in the slides. Since there is so much variability in customers’ environments, we don’t think a formal third-party validation makes sense. So the validation is really through 5-10 customers.
Q: How realistic is it to estimate IT costs? Size of company, size of deployment, existing IT staff (both firing and hiring), each of these will have an impact on the cost for IT/OPs teams.
A: The calculator assumes a net new IT/Ops team. It’s not linked to the company size, but rather the OpenStack cloud size. We assume a minimum team size of about 3.5 people and linear growth after that as your cloud scales.
Q: Shouldn’t sparing add more to the cost, since you will need more hardware for high availability?
A: Yes, sparing is included.
Q: AWS recommends targeting 90% utilization, and if you are at 60%, it’s better to downsize the VM to get back to 90% utilization. In the case of provisioning 2,500 VMs with autoscaling, this should help.
A: Great point; however, we see a large number of customers who do not do this, or do not even know what percentage of their VMs are underutilized. Some customers even have zombie VMs that are not used at all, but they are still paying for them.
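For what it’s worth, the arithmetic behind that recommendation is simple; a quick sketch:

# Running at 60% utilization instead of a right-sized 90% means paying
# 0.90 / 0.60 = 1.5x per unit of capacity you actually use:
echo "scale=2; 0.90 / 0.60" | bc   # => 1.50
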
Q: With the hypothesis that all applications can be “containerized”, will the comparison outcomes remain the same?
A: We don’t have this yet, but a private cloud will turn out to have a much better TCO. The reason is that we believe private clouds can run containers on bare metal, while public clouds have to run containers in VMs for security reasons. So a private cloud will be a lot more efficient.
Q: This is interesting. Can you please add replication cost? This is what AWS does free of cost within an availability zone. In the case of OpenStack, we need to take care of replication.
A: I assume you mean for storage. Yes, we already include a 3x factor to convert from raw storage to usable storage, to account for (3-way) replication.
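In other words, the usable figure is multiplied by the replication factor to size the raw capacity. For example:

# 3-way replication: raw capacity = usable capacity x 3
USABLE_GB=1080   # the usable block storage figure assumed in this webinar
echo "Raw storage required: $((USABLE_GB * 3)) GB"   # => 3240 GB
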
Q: Just wondering, how secure is the solution you mentioned, for example for a credit card company? AWS is PCI DSS certified.
A: Yes, this solution is PCI certified.
Q: Has this TCO calculator been validated against a real customer workload?
A: Yes, 5-10 customers have validated this calculator.
Q: Do you think these costs apply to other countries, or are they US-based?
A: These calculations are US-based. Both AWS and private cloud costs could go up internationally.
Q: Hi, thank you for your time in this webinar. How many servers (computes, controllers, storage servers) are you using, and which model do you use for your calculations? Thanks.
A: The node count is variable. For this webinar, we assumed 54 compute nodes, 6 controllers, and 1080 GB of block storage. We assumed commodity Intel and SuperMicro hardware with a 3-year warranty.
Q: Can we compare different models, such as AWS vs VMware private cloud/public cloud with another vendor (not AWS)?
A: These require customizations. Please contact us.
Quelle: Mirantis

Full Stack Automation with Ansible and OpenStack

Ansible offers great flexibility. Because of this, the community has figured out many useful ways to leverage Ansible modules and playbook structures to automate frequent operations on multiple layers, including using it with OpenStack.
In this blog we’ll cover the many use cases of Ansible, the most popular automation software, with OpenStack, the most popular cloud infrastructure software. We’ll help you understand how and why you should use Ansible to make your life easier, in what we like to call Full-Stack Automation.

Let’s begin by analyzing the layers of Full-Stack Automation, shown in the diagram above. At the bottom, we have the hardware resources (servers, storage area networks, and networking gear). Above that is the operating system (Linux or Windows). On the Linux side, you can install OpenStack to abstract all of your datacenter resources and offer a software-defined version of your compute, network, and storage resources. On top of OpenStack are the tenant-defined services needed to create the virtual machines where the applications will reside. Finally, you have to manage the guest operating system (Linux or Windows) to deploy the actual applications and workloads that you really care about (databases, web servers, mobile application backends, etc.). If you use containers (like Docker or rkt), you’ll package those applications in images that will be deployed on top of your guest OS. In addition to that, some languages introduce the concept of application servers, which adds another layer (i.e. J2EE).
Ansible management possibilities
With Ansible, you have a module to manage every layer. This is true even for the networking hardware, although technically speaking it’s for the network operating system, like IOS or NXOS (see the full list of Ansible network modules here).

General interaction with the Operating System: install packages, change or enforce file content or permissions, manage services, create/remove users and groups, etc.

Linux and BSD via SSH (the first and most popular use-case)
Windows via PowerShell (since 1.7)

IaaS Software: install the IaaS software and its dependencies (databases, load balancers, configuration files, services, and other helper tools)

The OpenStack-Ansible installer (https://github.com/openstack/openstack-ansible), as used in some upstream-based OpenStack distributions from other vendors. Note that the Red Hat OpenStack Platform does not use Ansible, but Heat and Puppet; future releases will leverage Ansible to perform certain validations and to help operators perform updates and upgrades.
The CloudStack installer is also an Ansible-based project.

Virtual Resources: define the resource, like a virtual machine or instance, in terms of how big it is, who can access it, what content it should have, what security profile and network access it requires, etc.

OpenStack Ansible modules (since Ansible 2.0): for instance, Nova or Neutron. They’re based on the OpenStack “shade” library, a common tool for all CLI tools in OpenStack.
Ansible can also manage not-so-virtual network resources, via netconf (since 2.2): https://docs.ansible.com/ansible/netconf_config_module.html
VMware vSphere Ansible modules
RHV, oVirt, or libvirt for bare KVM
It also has modules for public cloud providers, like Amazon, Google Cloud, Azure and Digital Ocean

Guest OS: the same components as described for the Host OS. But how do you discover how many Guests you have?

Ansible Dynamic Inventory will dynamically interrogate the IaaS/VM layer and discover which instances are currently available. It detects their hostnames, IPs, and security settings, and replaces the static inventory concept. This is especially useful if you leverage Auto Scaling Groups in your cloud infrastructure, which makes your list of instances highly variable over time.

Containers Engine (optional)

Docker: Note that the old Docker module is deprecated in favor of a new, native version in Ansible 2.1.
Kubernetes
Atomic Host

Tenant Software: databases, web servers, load balancers, data processing engines, etc.

Ansible Galaxy is the repository of recipes (playbooks) to deploy the most popular software, and it’s the result of the contributions of thousands of community members.
You can also manage web Infrastructure such as JBoss, allowing Ansible to define how an app is deployed in the application server.

How to install the latest Ansible on a Python virtual environment
As you have seen, some features are only available with very recent Ansible versions, like 2.2. However, your OS may not ship it yet. For example, RHEL 7 and CentOS 7 only come with Ansible 1.9.
Given that Ansible is a command-line tool written in Python, and that a system can support multiple Python versions, you may not need the security-hardened Ansible that your distribution offers, and you may want to try the latest version instead.
However, as with any other Python software, there are many dependencies, and it’s very dangerous to mix untested upstream libraries with your system-provided ones. Those libraries may be shared and used in other parts of your system, and untested newer libraries can break other applications. The quick solution is to install the latest Ansible version, with all its dependencies, in an isolated folder under your non-privileged user account. This is called a Python virtual environment (virtualenv), and if done properly, it allows you to safely play with the latest Ansible modules for full-stack orchestration. Of course, we do not recommend this practice for any production use case; consider it a learning exercise to improve your DevOps skills.
1) Install prerequisites (pip, virtualenv)
The only system-wide Python library we need here is "virtualenvwrapper". Other than that, you should not do "sudo pip install", as it will replace system Python libraries with untested, newer ones; we only trust that one exception. The virtual environment method is a good mechanism for installing and testing newer Python modules in your non-privileged user account.
$ sudo yum install python-pip
$ sudo pip install virtualenvwrapper
$ sudo yum install python-heatclient python-openstackclient python2-shade
2) Setup a fresh virtualenv, where we’ll install the latest Ansible release
First, create a directory to hold the virtual environments.
$ mkdir $HOME/.virtualenvs
Then, add a line like "export WORKON_HOME=$HOME/.virtualenvs" to your .bashrc. Also, add a line like "source /usr/bin/virtualenvwrapper.sh" to your .bashrc. Now source it.
$ source ~/.bashrc
At this point, wrapper links are created, but only the first time you run it. To see the list of environments, just execute "workon". Next, we’ll create a new virtualenv named "ansible2", which will be automatically enabled, with access to the default RPM-installed packages.
$ workon
$ mkvirtualenv ansible2 --system-site-packages
To exit the virtualenv, type "deactivate", and to re-enter again, use "workon".
$ deactivate
$ workon ansible2
3) Enter the new virtualenv and install Ansible2 via PIP (as regular user, not root)
You will notice that your shell prompt has changed: it shows the virtualenv name in brackets.
(ansible2) $ pip install ansible
The above command will install just the Ansible 2 dependencies, leveraging your system-wide RPM-provided Python packages (thanks to the --system-site-packages flag we used earlier). Alternatively, if you want to try the development branch:
(ansible2) $ pip install git+git://github.com/ansible/ansible.git@devel
(ansible2) $ ansible --version
If you ever want to remove the virtualenv and all its dependencies, just use "rmvirtualenv ansible2".
4) Install OpenStack client dependencies
The first command below ensures you have the latest stable OpenStack clients, although you can also try a pip install to get the latest CLI. The second command provides the latest Python "shade" library, which Ansible uses to connect to the latest OpenStack API versions, regardless of the CLI tool.
(ansible2) $ sudo yum install python-openstackclient python-heatclient
(ansible2) $ pip install shade --upgrade
5) Test it
(ansible2) $ ansible -m ping localhost
localhost | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
NOTE: you cannot run this version of Ansible outside the virtualenv, so always remember to do "workon ansible2" before using it.

Using Ansible to orchestrate OpenStack
Our savvy readers will notice that using Ansible to orchestrate OpenStack seems to ignore the fact that Heat is the official orchestration module for OpenStack. Indeed, an Ansible playbook can do almost the same as a HOT template (HOT is the YAML-based syntax for Heat, an evolution of AWS CloudFormation). However, there are many DevOps professionals out there who don’t like to learn a new syntax, and who are already consolidating all their processes for their hybrid infrastructure.
The Ansible team recognized that and leveraged shade, the official library from the OpenStack project, to build interfaces to the OpenStack APIs. At the time of this writing, Ansible 2.2 includes modules to call the following APIs:

Keystone: users, groups, roles, projects
Nova: servers, keypairs, security-groups, flavors
Neutron: ports, network, subnets, routers, floating IPs
Ironic: nodes, introspection
Swift Objects
Cinder volumes
Glance images

From an Ansible perspective, it needs to interact with a host where it can load the OpenStack credentials and open an HTTP connection to the OpenStack APIs. If that host is your machine (localhost), then it will work locally, load the Keystone credentials, and start talking to OpenStack.
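In practice, that usually means sourcing your tenant’s RC file first (the filename below is just an example; you can download yours from Horizon) so the modules can find credentials in the OS_* environment variables:

# Load Keystone credentials into the environment, then verify:
source ~/keystonerc_admin
env | grep '^OS_'
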
Let’s see an example. We’ll use the Ansible OpenStack modules to connect to Nova and start a small instance with the Cirros image. But first we’ll upload the latest Cirros image, if it’s not present. We’ll use an existing SSH key from our current user. You can download this playbook from this github link.

# Setup according to the blog post "Full Stack Automation with Ansible and OpenStack".
# Execute with "ansible-playbook ansible-openstack-blogpost.yml -c local -vv"
- name: Execute the Blogpost demo tasks
  hosts: localhost
  tasks:
    - name: Download cirros image
      get_url:
        url: http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
        dest: /tmp/cirros-0.3.4-x86_64-disk.img

    - name: Upload cirros image to openstack
      os_image:
        name: cirros
        container_format: bare
        disk_format: qcow2
        state: present
        filename: /tmp/cirros-0.3.4-x86_64-disk.img

    - name: Create new keypair from current user's default SSH key
      os_keypair:
        state: present
        name: ansible_key
        public_key_file: "{{ '~' | expanduser }}/.ssh/id_rsa.pub"

    - name: Create the test network
      os_network:
        state: present
        name: testnet
        external: False
        shared: False
        provider_network_type: vlan
        provider_physical_network: datacentre
      register: testnet_network

    - name: Create the test subnet
      os_subnet:
        state: present
        network_name: "{{ testnet_network.id }}"
        name: testnet_sub
        ip_version: 4
        cidr: 192.168.0.0/24
        gateway_ip: 192.168.0.1
        enable_dhcp: yes
        dns_nameservers:
          - 8.8.8.8
      register: testnet_sub

    - name: Create the test router
      ignore_errors: yes # for some reason, re-running this task gives errors
      os_router:
        state: present
        name: testnet_router
        network: nova
        external_fixed_ips:
          - subnet: nova
        interfaces:
          - testnet_sub

    - name: Create a new security group
      os_security_group:
        state: present
        name: secgr

    - name: Create a new security group rule allowing any ICMP
      os_security_group_rule:
        security_group: secgr
        protocol: icmp
        remote_ip_prefix: 0.0.0.0/0

    - name: Create a new security group rule allowing any SSH connection
      os_security_group_rule:
        security_group: secgr
        protocol: tcp
        port_range_min: 22
        port_range_max: 22
        remote_ip_prefix: 0.0.0.0/0

    - name: Create server instance
      os_server:
        state: present
        name: testServer
        image: cirros
        flavor: m1.small
        security_groups: secgr
        key_name: ansible_key
        nics:
          - net-id: "{{ testnet_network.id }}"
      register: testServer

    - name: Show Server's IP
      debug: var=testServer.openstack.public_v4

After the execution, we see the IP of the instance. We write it down, and we can now use Ansible to connect to it via SSH. We assume Nova’s default network allows connections from our workstation; in our case, via a provider network.

Comparison with OpenStack Heat
Using Ansible instead of Heat has its advantages and disadvantages. For instance, with Ansible you must keep track of the resources you create, and manually delete them (in reverse order) once you are done with them. This is especially tricky with Neutron ports, floating IPs, and routers. With Heat, you just delete the stack, and all the created resources are properly deleted.
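As a hedged sketch of what that manual, reverse-order cleanup looks like for the resources created by the playbook above (using the standard python-openstackclient CLI):

# Tear down the demo resources in reverse order of creation:
openstack server delete testServer
openstack router remove subnet testnet_router testnet_sub   # detach the subnet first
openstack router delete testnet_router
openstack subnet delete testnet_sub
openstack network delete testnet
openstack security group delete secgr
openstack keypair delete ansible_key
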
Compare the above with a similar (but not equivalent) Heat template, which can be downloaded from this github gist:
heat_template_version: 2015-04-30

description: >
  Node template. Launch with "openstack stack create --parameter public_network=nova --parameter ctrl_network=default --parameter secgroups=default --parameter image=cirros --parameter key=ansible_key --parameter flavor=m1.small --parameter name=myserver -t openstack-blogpost-heat.yaml testStack"

parameters:
  name:
    type: string
    description: Name of node
  key:
    type: string
    description: Name of keypair to assign to server
  secgroups:
    type: comma_delimited_list
    description: List of security groups to assign to server
  image:
    type: string
    description: Name of image to use for servers
  flavor:
    type: string
    description: Flavor to use for server
  availability_zone:
    type: string
    description: Availability zone for server
    default: nova
  ctrl_network:
    type: string
    label: Private network name or ID
    description: Network to attach instance to.
  public_network:
    type: string
    label: Public network name or ID
    description: Network to attach instance to.

resources:

  ctrl_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: ctrl_network }
      security_groups: { get_param: secgroups }

  floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: { get_param: public_network }
      port_id: { get_resource: ctrl_port }

  instance:
    type: OS::Nova::Server
    properties:
      name: { get_param: name }
      image: { get_param: image }
      flavor: { get_param: flavor }
      availability_zone: { get_param: availability_zone }
      key_name: { get_param: key }
      networks:
        - port: { get_resource: ctrl_port }

Combining Dynamic Inventory with the OpenStack modules
Now let’s see what happens when we create many instances, but forget to write down their IPs. The perfect example of leveraging Dynamic Inventory for OpenStack is to learn the current state of our tenant’s virtualized resources and gather all the server IPs so we can check, for instance, their kernel versions. This is what Ansible Tower does transparently: it periodically runs the inventory and collects the updated list of OpenStack servers to manage.
Before you execute this, make sure you don’t have stale clouds.yaml files in ~/.config/openstack, /etc/openstack, or /etc/ansible. The Dynamic Inventory script will look for environment variables first (OS_*), and then it will search for those files.
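A quick way to check for leftovers that could shadow the credentials you intend to use:

# Look for clouds.yaml files and OS_* variables the inventory script would pick up:
ls -l ~/.config/openstack/clouds.yaml /etc/openstack/clouds.yaml /etc/ansible/clouds.yaml 2>/dev/null
env | grep '^OS_'
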
First, ensure you are using the latest Ansible version:

$ workon ansible2
$ wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/openstack.py
$ chmod +x openstack.py
$ ansible -i openstack.py all -m ping
bdef428a-10fe-4af7-ae70-c78a0aba7a42 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
343c6e76-b3f6-4e78-ae59-a7cf31f8cc44 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
You can have fun by looking at all the information that the inventory script above returns if you just execute it as follows:
$ ./openstack.py --list
{
  "": [
    "777a3e02-a7e1-4bec-86b7-47ae7679d214",
    "bdef428a-10fe-4af7-ae70-c78a0aba7a42",
    "0a0c2f0e-4ac6-422d-8d9b-12b7a87daa72",
    "9d4ee5c0-b53d-4cdb-be0f-c77fece0a8b9",
    "343c6e76-b3f6-4e78-ae59-a7cf31f8cc44"
  ],
  "_meta": {
    "hostvars": {
      "0a0c2f0e-4ac6-422d-8d9b-12b7a87daa72": {
        "ansible_ssh_host": "172.31.1.42",
        "openstack": {
          "HUMAN_ID": true,
          "NAME_ATTR": "name",
          "OS-DCF:diskConfig": "MANUAL",
          "OS-EXT-AZ:availability_zone": "nova",
          "OS-EXT-SRV-ATTR:host": "compute-0.localdomain",
          "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.localdomain",
          "OS-EXT-SRV-ATTR:instance_name": "instance-000003e7",
          "OS-EXT-STS:power_state": 1,
          "OS-EXT-STS:task_state": null,
          "OS-EXT-STS:vm_state": "active",
          "OS-SRV-USG:launched_at": "2016-10-10T21:13:24.000000",
          "OS-SRV-USG:terminated_at": null,
          "accessIPv4": "172.31.1.42",
          "accessIPv6": "",
(....)

Conclusion
Even though Heat is very useful, some people may prefer to use Ansible for their workload orchestration, as it offers a common language to define and automate the full stack of IT resources. I hope this article has provided you with a practical example of a very basic use case for launching OpenStack resources with Ansible. If you are interested in trying Ansible and Ansible Tower, please visit https://www.ansible.com/openstack. A good starting point would be connecting Heat with Ansible Tower callbacks, as described in this other blog post.
Also, if you want to learn more about Red Hat OpenStack Platform, you’ll find lots of valuable resources (including videos and whitepapers) on our website: https://www.redhat.com/en/technologies/linux-platforms/openstack-platform
Quelle: RedHat Stack

Why businesses shouldn’t settle on a storage solution

To date, the business community, including startups and entrepreneurs, has had only simple storage solutions to choose from on the cloud. Or they’ve had outdated, pricey software, hardware, and appliance solutions from legacy storage providers.
In today’s business world, this no longer works. Not with IDC’s predicted data growth of 44 zettabytes by 2020, fueled by the increased use of cloud, mobile, analytics, social and even cognitive to drive digital transformation. Additionally, unstructured content (images, video, audio, documents, and so on) outnumbers structured content by a factor of four.
In this world, simple storage solutions fall short. Governments and clients have increased pressure to ensure compliance and residency requirements for content and applications are met, and transparency and coverage are not always strengths of cloud solutions.
Businesses shouldn’t have to settle for a simple cloud storage option. That’s why IBM Cloud Object Storage offers flexibility, scalability and simplicity. Solutions can be deployed on premises and across the IBM Cloud with more than 45 data centers around the world. Users get full transparency and control.
That’s essential because business is intrinsically hybrid. Elements of hybrid business processes require that some applications and content run on premises for performance, compliance or simply colocation with compute resources. Other business processes are well supported with either a dedicated or shared object storage deployment on IBM Cloud. IBM Cloud Object Storage supports both Amazon S3 and OpenStack Swift across deployment models, so there’s a consistent technology platform to support your applications and initiatives.
Additionally, there’s a higher level of availability and security. IBM Cloud Object Storage takes data that lands on one region on the IBM Cloud, then slices, erasure-codes and disperses the slices across three regions using something called SecureSlice.
Why does that matter? Two reasons:

If security is compromised in a region, the full content will not be exposed.
If one region is offline, your applications continue to run without disruption and without you having to intervene.

The IBM Cloud Object Storage approach translates to significantly better economics: prices are over 25 percent less than those of other cloud storage providers*.
But the really exciting part goes beyond IBM Cloud Object Storage and layers on other IBM capabilities. Think of the exciting technology emanating from IBM Watson, IBM Bluemix and IBM Cloud Video Services. Cognitive will be essential as data grows from tera- to peta- to exa- to zettabytes, in the process taxing our ability to manage and utilize this growing mountain of content. There is even broader value if you look at the IBM Spectrum family, with transparent cloud tiering and beyond. It is truly an exciting tapestry that you can weave together.
Our ecosystem of partners delivers even more innovation and value. Our channel is broad, but to understand what’s possible, just look at what the likes of Panzura, Nasuni, Mark III, and CTERA are doing in bringing our portfolio, along with their expertise and IP, to deliver even greater value.
Learn more about IBM Cloud Object Storage and how it can be employed in your organization.
* Comparison is between IBM Cloud Object Storage Vault Cross-Region and an S3 Infrequent Access bucket in AWS US East with Cross-Region Replication to an S3 Infrequent Access bucket in US West (Oregon). Pricing is based on published IBM and Amazon US list prices as of October 13, 2016. Price includes storage capacity, API operations, internet data transfer charges, and cross-region data replication charges (S3 only). Pricing will vary depending on workload capacity, object size, data access patterns, and configuration. Pricing for this comparison is based on the following workload assumptions:

Mixed footprint of 50 percent “small” and 50 percent “large” objects (by capacity). Average object sizes: small = 1 GB, large = 5 GB.
Monthly access pattern for all “small” and “large” objects: 10 percent read, 50 percent written, 5 percent listed. All objects are assumed to be retained at least 30 days.
All object reads assumed outbound to internet (internet data transfer charges apply for all GETS).

Quelle: Thoughts on Cloud

Cloud-based Project DataWorks aims to make data accessible, organized

Increasingly, data is a form of currency in business. Not just the data itself, but the ability to find just the right piece of information at just the right time.
As organizations amass more and more data reaching into petabyte sizes, it can sometimes become diffuse, which can make it hard for someone to quickly find exactly the right key to unlock a barrier to progress.
To solve that challenge, IBM unveiled Project DataWorks last week, a cloud-based data organization catalog that puts all of a company’s data in one easy-to-access, intuitive dashboard. Here’s how TechCrunch describes Project DataWorks:
With natural language search, users can pull up specific data sets from those catalogs much more quickly than with traditional methods. DataWorks also touts data ingestion at speeds of 50 to 100s of Gbps.
The tool is available through the IBM Bluemix platform and uses Watson cognitive technology to raise its speed and usability.
In an interview with PCWorld, Derek Scholette, general manager of cloud data services for IBM Analytics, explained: “Analytics is no longer something in isolation for IT to solve. In the world we’re entering, it’s a team sport where data professionals all want to be able to operate on a platform that lets them collaborate securely in a governed manner.”
Project DataWorks is open to enterprise customers, but it’s also open to small businesses. It’s currently available as a pay-as-you-go service.
For more, read the full articles at TechCrunch and PCWorld.
Quelle: Thoughts on Cloud

Develop Cloud Applications for OpenStack on Murano, Part 3: The application, part 1: Understanding Plone deployment

OK, so far, in Part 1 we talked about what Murano is and why you need it, and in Part 2 we put together the development environment, which consists of a text editor and a small OpenStack cluster with Murano. Now let’s start building the actual Murano App.
What we’re trying to accomplish
In our case, we’re going to create a Murano App that enables the user to easily install the Plone CMS. We’ll call it PloneServerApp.
Plone is an enterprise-level CMS (think WordPress on steroids). It comes with its own installer, but it also needs a variety of libraries and other resources to be available to that installer.
Our task will be to create a Murano App that asks the user for the information the installer needs, then creates the necessary resources (such as a VM), configures them properly, and executes the installer.
To do that, we’ll start by looking at the installer itself, so we understand what’s going on behind the scenes. Once we’ve verified that we have a working script, we can go ahead and build a Murano package around it.
Plone Server Requirements
First of all, let’s clarify the resources needed to install the Plone server in terms of the host VM and preinstalled software and libraries. We can find this information in the official Plone Installation Requirements.
Host VM Requirements
Plone supports nearly all operating systems, but for the purposes of this tutorial, let’s suppose that our Plone Server needs to run on a VM under Ubuntu.
As far as hardware goes, the Plone server requires the following:
Minimum requirements:

A minimum of 256 MB RAM and 512 MB of swap space per Plone site
A minimum of 512 MB hard disk space

Recommended requirements:

2 GB or more of RAM per Plone site
40 GB or more of hard disk space

The Plone Server also requires the following to be preinstalled:

Python 2.7 (dev), built with support for expat (xml.parsers.expat), zlib and ssl.
Libraries:

libz (dev),
libjpeg (dev),
readline (dev),
libexpat (dev),
libssl or openssl (dev),
libxml2 >= 2.7.8 (dev),
libxslt >= 1.1.26 (dev).

The PloneServerApp will need to make sure that all of this is available.
Defining what the PloneServerApp does
Next we are going to define the deployment plan. The PloneServerApp executes all necessary steps in a completely automatic way to get the Plone Server working and to make it available outside of your OpenStack Cloud, so we need to know how to make that happen.
The PloneServerApp should follow these steps:

Ask the user to specify the host VM parameters, such as the number of CPUs, RAM, disk space, OS image file, etc. The app should then check that the requested VM meets all of the minimum hardware requirements for Plone.
Ask the user to provide values for the mandatory and optional Plone Server installation parameters.
Spawn a single host VM, according to the user’s chosen VM flavor.
Install the Plone Server and all of its required software and libraries on the spawned host VM. We’ll have PloneServerApp do this by launching an installation script (runPloneDeploy.sh).

Let’s start at the bottom and make sure we have a working runPloneDeploy.sh script; we can then look at incorporating that into the PloneServerApp.
Creating and debugging a script that fully deploys the Plone Server on a single VM
We’ll need to build and test our script on an Ubuntu machine; if you don’t have one handy, go ahead and deploy one in your new OpenStack cluster. (When we’re done debugging, you can then terminate it to clean up the mess.)
Our runPloneDeploy.sh will be based on the Universal Plone UNIX Installer. You can get more details about it in the official Plone Installation Documentation, but the easiest way is to follow these steps:

Download the latest version of Plone:
$ wget --no-check-certificate https://launchpad.net/plone/5.0/5.0.4/+download/Plone-5.0.4-UnifiedInstaller.tgz

Unzip the archive:
$ tar -xf Plone-5.0.4-UnifiedInstaller.tgz

Go to the folder containing the installation script…
$ cd Plone-5.0.4-UnifiedInstaller

…and see all installation options provided by the Universal UNIX Plone Installer:
$ ./install.sh --help

The Universal UNIX Installer lets you choose an installation mode:

a standalone mode, in which a single Zope web application server will be installed, or
a ZEO cluster mode, in which a ZEO server and Zope instances will be installed.

It also lets you set several optional installation parameters. If you don’t set these, default values will be used.
In this tutorial, let’s choose standalone installation mode and make it possible to configure the most significant parameters for a standalone installation, which are the:

administrative user password
top-level path on the host VM in which to install the Plone Server
TCP port on which the Plone site will be available from outside the VM and outside your OpenStack cloud

Now, if we were installing Plone manually, we would feed these values to the script on the command line, or set them in configuration files. To automate the process, we’re going to create a new script, runPloneDeploy.sh, which gets those values from the user, then feeds them to the installer programmatically.
So our script should be invoked as follows:
$ ./runPloneDeploy.sh <InstallationPath> <AdministrativePassword> <TCPPort>
For example:
$ ./runPloneDeploy.sh "/opt/plone/" "YetAnotherAdminPassword" "8080"
The runPloneDeploy.sh script
Let’s start by taking a look at the final version of the install script, and then we’ll pick it apart.
1. #!/bin/bash
2. #
3. #  Plone uses GPL version 2 as its license. As of summer 2009, there are
4. #  no active plans to upgrade to GPL version 3.
5. #  You may obtain a copy of the License at
6. #
7. #       http://www.gnu.org
8. #
9.
10. PL_PATH="$1"
11. PL_PASS="$2"
12. PL_PORT="$3"
13.
14. # Write log. Redirect stdout & stderr into log file:
15. exec &> /var/log/runPloneDeploy.log
16.
17. # echo "Installing all packages."
18. sudo apt-get update
19.
20. # Install the operating system software and libraries needed to run Plone:
21. sudo apt-get -y install python-setuptools python-dev build-essential libssl-dev libxml2-dev libxslt1-dev libbz2-dev libjpeg62-dev
22.
23. # Install optional system packages for the handling of PDF and Office files. Can be omitted:
24. sudo apt-get -y install libreadline-dev wv poppler-utils
25.
26. # Download the latest Plone unified installer:
27. wget --no-check-certificate https://launchpad.net/plone/5.0/5.0.4/+download/Plone-5.0.4-UnifiedInstaller.tgz
28.
29. # Unzip the latest Plone unified installer:
30. tar -xvf Plone-5.0.4-UnifiedInstaller.tgz
31. cd Plone-5.0.4-UnifiedInstaller
32.
33. # Set the port that Plone will listen to on available network interfaces. Editing "http-address" param in buildout.cfg file:
34. sed -i "s/^http-address = [0-9]*$/http-address = ${PL_PORT}/" buildout_templates/buildout.cfg
35.
36. # Run the Plone installer in standalone mode:
37. ./install.sh --password="${PL_PASS}" --target="${PL_PATH}" standalone
38.
39. # Start Plone:
40. cd "${PL_PATH}/zinstance"
41. bin/plonectl start
The first line states which shell should execute the script’s commands:
#!/bin/bash
Lines 2-8 are comments describing the license under which Plone is distributed:
#
#  Plone uses GPL version 2 as its license. As of summer 2009, there are
#  no active plans to upgrade to GPL version 3.
#  You may obtain a copy of the License at
#
#       http://www.gnu.org
#
The next three lines contain commands assigning input script arguments to their corresponding variables:
PL_PATH="$1"
PL_PASS="$2"
PL_PORT="$3"
It’s almost impossible to write a script with no errors, so line 15 sets up logging. It redirects both the stdout and stderr output of each command to a log file for later analysis:
exec &> /var/log/runPloneDeploy.log
Lines 18-31 (inclusive) are taken straight from the Plone Installation Guide:
sudo apt-get update

# Install the operating system software and libraries needed to run Plone:
sudo apt-get -y install python-setuptools python-dev build-essential libssl-dev libxml2-dev libxslt1-dev libbz2-dev libjpeg62-dev

# Install optional system packages for the handling of PDF and Office files. Can be omitted:
sudo apt-get -y install libreadline-dev wv poppler-utils

# Download the latest Plone unified installer:
wget --no-check-certificate https://launchpad.net/plone/5.0/5.0.4/+download/Plone-5.0.4-UnifiedInstaller.tgz

# Unzip the latest Plone unified installer:
tar -xvf Plone-5.0.4-UnifiedInstaller.tgz
cd Plone-5.0.4-UnifiedInstaller
Unfortunately, the Unified UNIX Installer doesn’t let us configure the TCP port as an argument to the install.sh script, so we need to edit it in buildout.cfg before running the main install.sh script.
At line 34, we set the desired port using a sed command:
sed -i "s/^http-address = [0-9]*$/http-address = ${PL_PORT}/" buildout_templates/buildout.cfg
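If you want to confirm the substitution took effect before running the installer (a small check of our own, not part of the Plone documentation), you can grep for the edited line:

grep '^http-address' buildout_templates/buildout.cfg   # should print the port you passed
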
Then, at line 37, we launch the Plone Server installation in standalone mode, passing in the other two parameters:
./install.sh --password="${PL_PASS}" --target="${PL_PATH}" standalone
After setup is done, on line 40, we change to the directory where Plone was installed:
cd "${PL_PATH}/zinstance"
And finally, the last action, on line 41, is to launch the Plone service:
bin/plonectl start
Also, please don’t forget to leave comments before every executed command in order to make your script easy to read and understand. (This is especially important if you’ll be distributing your app.)
Run the deployment script
Check your script, then spawn a standalone VM with an appropriate OS (in our case, Ubuntu 14.04) and execute the runPloneDeploy.sh script to test and debug it. (Make sure to set it as executable, and if necessary, to run it as root or using sudo!)
You’ll use the same format we discussed earlier:
$ ./runPloneDeploy.sh <InstallationPath> <AdministrativePassword> <TCPPort>
For example:
$ ./runPloneDeploy.sh "/opt/plone/" "YetAnotherAdminPassword" "8080"
Once the script is finished, check the outcome:

Find where Plone Server was installed on your VM using the find command, or by checking the directory you specified on the command line.
Try to visit the address http://127.0.0.1:[Port], where [Port] is the TCP port that you passed as an argument to the runPloneDeploy.sh script. (See the quick test below.)
Try to log in to Plone using the “admin” username and the [Password] that you passed as an argument to the runPloneDeploy.sh script.

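For a quick smoke test from inside the VM, you can also check the port directly (8080 here matches the example above):

# Expect an HTTP response header from the Plone/Zope server:
curl -I http://127.0.0.1:8080
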
If something doesn’t seem to be right, check the runPloneDeploy.log file for errors.
As you can see, our script is quite short, but it does the entire installation on a single VM. Undoubtedly, there are several ways in which you could improve it, like smarter error handling, more customization options, or enabling Plone autostart. It’s all up to you.
In part 4, we’ll turn this script into an actual Murano App.
Quelle: Mirantis

How to Develop Cloud Applications for OpenStack using Murano, Part 2: Creating the Development Environment

In part 1 of this series, we talked about what Murano is, and why you’d want to use it as a platform for developing end-user applications. Now, in part 2, we’ll help you get set up for doing the actual development.
All that you need to develop your Murano App is:

A text editor to edit source code. There is no special IDE required; a plain text editor will do.
OpenStack with Murano. You will, of course, want to test your Murano App, so you’ll need an environment in which to run it.

Since there’s no special setup for the text editor, let’s move on to getting a functional OpenStack cluster with Murano.
Where to find OpenStack Murano
If you don’t already have access to a cloud with Murano deployed, that’ll be your first task. (You’ll know Murano is available if you see an “Applications” tab in Horizon.)
There are two possible ways to deploy OpenStack and Murano:

You can install vanilla OpenStack (raw upstream code) using the DevStack scripts, but you’ll need to do some manual configuration for Murano. If you want to take this route, you can find out how to install DevStack with Murano here.
You can take the easy way out and install OpenStack using one of the ready-to-use commercial distros that come with Murano.

If this is your first time, I recommend that you start with one of the ready-to-use commercial OpenStack distros, for several reasons:

A distro is more stable and has fewer bugs, so you won’t waste your time on OpenStack deployment troubleshooting.
A distro will let you see how a correctly configured OpenStack cloud should look.
A distro doesn’t require a deep dive into OpenStack deployment, which means you can fully concentrate on developing your Murano App.

I recommend that you install the Mirantis OpenStack distro (MOS), because deploying Murano with it couldn’t be simpler; you just need to click one checkbox before deploying OpenStack, and that’s all. (You can choose any other commercial distro, but most of them are not able to install Murano automatically. You can find out how to install Murano manually on an already-deployed OpenStack cloud here.)
Deploying OpenStack with Murano
You can get all of the details about Mirantis OpenStack in the official Mirantis OpenStack documentation, but here are the basic steps. You can follow them on Windows, Mac, or Linux; in my case, I’m using a laptop running Mac OS X with 8 GB of RAM. We’ll create virtual machines rather than trying to cobble together multiple pieces of hardware:

If you don’t already have it installed, download and install Oracle VirtualBox. In this tutorial we’ll use VirtualBox 5.1.2 for OS X (VirtualBox-5.1.2-108956-OSX.dmg).
Download and install the Oracle VM VirtualBox Extension Pack. (Make sure you use the right download for your version of VirtualBox. In my case, that means Oracle_VM_VirtualBox_Extension_Pack-5.1.2-108956.vbox-extpack.)
Download the Mirantis OpenStack image.
Download the Mirantis OpenStack VirtualBox scripts.
Unzip the script archive and copy the Mirantis OpenStack .ISO image to the virtualbox/iso folder.
You can optionally edit config.sh if you want to set up a custom password or edit network settings. There are a lot of detailed comments, so it will not be a problem to configure your main parameters.
From the command line, launch the launch.sh script.
Unless you’ve changed your configuration, when the scripts finish you’ll have one Fuel Master Node VM and three slave VMs running in VirtualBox.

Next we’ll create the actual OpenStack cluster itself.
Creating the OpenStack cluster
At this point we’ve installed Fuel, but we haven’t actually deployed the OpenStack cluster itself. To do that, follow these steps:

Point your browser to http://10.20.0.2:8000/ and log in as an administrator using “admin” as your password (or the address and credentials you set in config.sh).

Once you’ve logged into the Fuel Master Node, you can deploy the OpenStack cloud and begin to explore it.

Click New OpenStack Environment.

Choose a name for your OpenStack Cloud and click Next:

Don’t change anything on the Compute tab, just click Next:

Don’t change anything on the Networking Setup tab, just click Next:

Don’t change anything on the Storage Backends tab, just click Next:

On the Additional Services tab tick the “Install Murano” checkbox and click Next:

On the Finish tab click Create:

From here you’ll see the cluster’s Dashboard. Click Add Nodes.

Here you can see that the launch script automatically created three VirtualBox VMs, and that Fuel has automatically discovered them:

The next step is to assign roles to your nodes. In this tutorial you need at least two nodes:

The Controller Node: This node manages all of the operations within an OpenStack environment and provides an external API.
The Compute Node: This node provides processing resources to accommodate virtual machine workloads; it creates, manages, and terminates VM instances. The VMs, or instances, that you create in Murano run on the compute nodes.
Assign a controller role to a node with 2GB RAM.

Click Apply Changes and follow the same steps to add a 1 GB compute node. The last node will not be needed in our case, so you can remove it and give its hardware resources to the other nodes later if you like.
Leave all of the other settings at their default values, but before you deploy, you will want to check your networking to make sure everything is configured properly. (Fuel configures networking automatically, but it’s always good to check.) Click the Networks tab, then Connectivity Check in the left-hand pane. Click Verify Networks and wait a few moments.

Go to the Dashboard tab and click Deploy Changes to deploy your OpenStack Cloud.

When Fuel has finished, you can log into the Horizon UI (http://172.16.0.3/horizon by default), or you can click the link on the Dashboard tab. (You can also go to the Health Check tab and run tests to ensure that your OpenStack cloud was deployed properly.)

Log into Horizon using the credentials admin/admin (unless you changed them in the Fuel Settings tab).

As you can see by the Applications tab at the bottom of the left-hand pane, the Murano Application Catalog has been installed.
Tomorrow we’ll talk about creating an application you can deploy with it.
Quelle: Mirantis

Develop Cloud Applications for OpenStack on Murano, Part 1: What is Murano, and why do I need it?

So many apps, so little time.
Developing applications for the cloud can be a complicated process; you need to think about resources, placement, scheduling, creating virtual machines, networking… or do you? The OpenStack Murano project makes it possible for you to create an application without having to worry about directly doing any of that. Instead, you can create your application, package it with instructions, and let Murano do the rest.
In other words, Murano lets you distribute your applications much more easily; users just have to click a few buttons to use them.
Every day this week we’re going to look at the process of creating OpenStack Murano apps so that you can make your life easier, and get your work out there for people to use without having to beg an administrator to install it for them.
We’ll cover the following topics:

Day 1: What is Murano, and why do I need it?
In this article, we’ll talk about what Murano is, who it helps, and how. We’ll also start with the basic concepts you need to understand and let you know what you’ll need for the rest of the series.
Day 2: Creating the development environment
In this article, we’ll look at deploying an OpenStack cluster with Murano so that you’ve got the framework to work with.
Day 3: The application, part 1: Understanding Plone deployment
In our example, we’ll show you how to use Murano to easily deploy the Plone enterprise CMS; in this article, we’ll go over what Murano will actually have to do to install it.
Day 4: The application, part 2: Creating the Murano App
Next we’ll go ahead and create the actual Murano App that will deploy Plone.
Day 5: Uploading and troubleshooting the app
Now that we’ve created the Plone Murano App, we’ll go ahead and add it to the application catalog so that users can deploy it. We’ll also look at some common issues and how to solve them.

Interested in seeing more? We’ll be showing you how to automate Plone deployments for OpenStack at the Boston Plone conference, October 17-23, 2016.
Before you start
Before you get started, let’s make sure you’re ready to go.
What you should know
Before we start, let’s get the lay of the land. There’s really not that much you need to know before building a Murano app, but it helps if you are familiar with the following concepts:

Virtualization: Wikipedia says that “hardware virtualization or platform virtualization refers to the creation of a virtual machine that acts like a real computer with an operating system.” Perhaps that’s an oversimplification, but it’ll work for us here. For this series, it helps to have an understanding of virtualization fundamentals, as well as experience in the creation, configuration, and deployment of virtual machines, and the creation and restoration of VM snapshots.
OpenStack: OpenStack is, of course, a platform that helps to orchestrate and manage these virtual resources for you; Murano is a project that runs on OpenStack.
UNIX-like OS fundamentals: It also helps to understand command line, basic commands and the structure of Unix-like systems. If you are not familiar with the UNIX command line you might want to study this Linux shell tutorial first.
SSH: It helps to know how to generate and manage multiple SSH keys, and how to connect to a remote host via SSH using SSH keys.
Networks: Finally, although you don’t need to be a networking expert, it is useful if you are familiar with these concepts: IP, CIDR, port, VPN, DNS, DHCP, and NAT.

If you are not familiar with these concepts, don’t worry; you will be able to learn more about them as we move forward.
What you should have
In order to run the software we’ll be talking about, your environment must meet certain prerequisites. You’ll need a 64-bit host operating system with:

At least 8 GB RAM
300 GB of free disk space. It doesn’t matter if you have less than 300 GB of real free disk space, as it will be consumed on demand. So, if you are going to deploy a lightweight application, then maybe even 128 GB will be enough; it’s up to your application’s requirements. In the case of Plone, the recommendation is 40 MB per site to be deployed.
Virtualization enabled in BIOS
Internet access

What is OpenStack Murano?
Imagine you’re a cloud user. You just want to get things done. You don’t care about all of the details, you just want the functionality that you need.
Murano is an OpenStack project that provides an application catalog, like the AppStore for iOS or GooglePlay for Android. Murano lets you easily browse for cloud applications you need by their name or category, and then enables you to rapidly deploy them to the cloud with just a few clicks.
For example, if you want a web server, rather than having to create a VM, find the software, deploy it, manage IP addresses and ports, and so on, Murano enables you to simply choose a web server application, name it, and go; Murano does the rest of the work.
Murano also makes it possible to easily deploy applications with multiple components. For example, what if you didn’t just want a web server, but a WordPress application, which includes a web server, database, and web application? A pre-existing WordPress Murano app would make it possible for you to simply choose the app, specify a few parameters, and go. (In fact, later in this series we’ll look at creating an app for an even more complex CMS, Plone.)
Because it’s so straightforward to deploy the applications, users can do it themselves, rather than relying on administrators.
Moreover, not only does Murano let users and administrators easily deploy complex cloud applications, it also completely manages application lifecycles: auto-scaling clusters up and down, providing self-healing, and more.
Murano’s main end users are:

Independent cloud users, who can use Murano to easily find and deploy applications themselves.
Cloud Service Owners, who can use Murano to save time when deploying and configuring applications to multiple instances or when deploying complex distributed applications with many dependent applications and services.
Developers, who can use Murano to easily deploy and redeploy on-demand applications, many times without cloud administrators, for their own purposes (for example for hosting a web-site, or for the development and testing of applications). They can also use Murano to make their applications available to other end users.

In short, Murano turns application deployment and management into a very simple process that can be performed by administrators and users of all levels. It does this by encapsulating all of the deployment logic and dependencies for the application into a Murano App, which is a single zip file with a specific structure. You just need to upload it to your cloud, and it’s ready.
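As a preview of what’s coming later in this series, the layout of a typical Murano package zip looks roughly like this (names and contents vary per app):

MyApp.zip
├── manifest.yaml   # package metadata: name, type, description
├── Classes/        # MuranoPL classes containing the deployment logic
├── Resources/      # deployment scripts and templates
├── UI/
│   └── ui.yaml     # form definition for gathering user input
└── logo.png        # optional icon shown in the catalog
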
Why should I create a Murano app?
OK, so now that we know what a Murano app is, why should we create one? Well, ask yourself these questions:

Do I want to spend less time deploying my applications?
Do I want my users to spend less time (and aggravation) deploying my applications?
Do I want my employees to spend more time actually getting work done and less time struggling with software deployment?

(Do you notice a theme here?)
There are also reasons for creating Murano Apps that aren’t necessarily related to saving time or being more efficient:

You can make it easier for users to find your application by publishing it to the OpenStack Community Application Catalog, which provides access to a whole ecosystem of people across fast-growing OpenStack markets around the world. (Take a look at how huge that ecosystem is by exploring OpenStack user stories.)
You can develop your app as a robust and re-usable solution in your private OpenStack cloud to avoid error-prone manual work.

All you need to do to make these things possible is to develop a Murano App for your own application.
Where we go from here
OK, so now we know what a Murano App is, and why you’d want to create one. Join us tomorrow to find out how to create the OpenStack and developer environment you’ll need to make it work.
And let us know in the comments what you’d like to see out of this series!
Quelle: Mirantis

Mirantis at EDGE 2016 – Unlocked Private Clouds on IBM Power8

On September 22, Mirantis’ Senior Technical Director, Greg Elkinbard, spoke at IBM’s Edge 2016 IT infrastructure conference in Las Vegas. His short talk described Mirantis’ mission: to create clouds using OpenStack and Kubernetes under a “Build, Operate, Transfer” model. He enumerated some of the benefits Mirantis customers like Volkswagen are gaining from their large-scale clouds, including more-engaged developers, faster release cycles, platform delivery times reduced from months to hours, and significantly lower costs.
Greg wrapped up the session with a progress report on IBM and Mirantis’ recent collaboration to produce a reference architecture for compute node placement on IBM Power8 systems: a solution aimed at lowering costs and raising performance for database and similar demanding workloads. Mirantis is also validating Murano applications and other methods for deploying a wide range of apps on IBM Power hardware, including important container orchestration frameworks, NFV apps, Big Data tools, webservers and proxies, popular databases, and developer toolchain elements.

Mirantis IBM Partner Page: https://www.mirantis.com/partners/ibm/
For more on IBM Power8 servers, please visit http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?htmlfid=POB03046USEN

Quelle: Mirantis