OpenStack Summit Austin: Day 4

 
Hello again from Austin, Texas where the fourth day of the main OpenStack Summit has come to a close. While there are quite a few working sessions and contributor meet-ups on Friday, Thursday marks the last official day of the main summit event. The exhibition hall closed its doors around lunch time, and the last of the vendor sessions occurred later in the afternoon. As the day concluded, many attendees were already discussing travel plans for OpenStack Summit Barcelona in October!
Before we get ahead of ourselves, however, day 4 was still jam-packed with a busy agenda. Like the first three days of the event, Red Hat speakers led quite a few interesting and well-attended sessions.

To start, Al Kari, Kambiz Aghaiepour, and Will Foster combined to give a talk entitled Deploying Microservices Architecture on OpenStack Using Kubernetes, Docker, Flannel and etcd. The hands-on lab provided a step by step demonstration of how to deploy these services in a variety of environments.
Lars Herrmann, General Manager of Red Hat’s Integrated Solutions Business Unit then led a talk called Orchestrated Containerization with OpenStack. In the session, Lars explored how to leverage container standards, like Kubernetes, in implementing hybrid containerization strategies. He also discussed a variety of architectural designs for hybrid containerization and revealed how to use Ansible in these scenarios.
Ihar Hrachyshka then teamed with Kevin Benton and Sean Collins from Mirantis, as well as Matthew Kassawara from IBM in a presentation entitled The Notorious M.T.U. (Maximum Transmission Unit). The presenters examined impacts of improper MTU parameters on both physical and virtual networks, neutron MTU problems, and how to properly configure neutron MTU in various environments.
Just before lunch, in his presentation on CephFS, Greg Farnum, a long-standing member of the core Ceph development group, detailed why CephFS is more stable and feature-rich than ever. He then summarized which key new functions were introduced in the recent Jewel release, and also provided a glimpse of what’s to come in future iterations.
Later, Sridhar Gaddam joined with Bin Hu from AT&T and Prakash Ramchandran from Huawei Technology to discuss IPv6 capabilities in Telco environments. Among other things, the trio examined scenarios enabled by the IPv6 platform, its current state, and future expectations.
And finally, Miguel Angel Ajo, a Red Hat developer focused on Neutron, collaborated with Victor Howard from Comcast and Sławek Kapłoński from OVH in a presentation called Neutron Quality of Service, New Features, and Future Roadmap. The presenters detailed the Quality of Service (QoS) framework introduced in the Liberty release, and how it serves to provide QoS settings on the Neutron networking API.  They also covered DSCP rules, role based access control (RBAC) for QoS policies, and much more.
As you probably can imagine, it was a busy final day at OpenStack Summit. Like all OpenStack Summits, it was an extremely informative event, and also lots of fun! If you missed our previous daily recaps, we encourage you to read our blog posts from Day 1, Day 2, and Day 3. And for those who were present, we hope you enjoyed the event and found time to visit the Red Hat booth, as well as network with friends and colleagues from around the world. Like you, we’re already counting down the days until the next OpenStack Summit in Barcelona!
For continued Red Hat and industry cloud news, we invite you to follow us on Twitter at @RedHatCloud or @RedHatNews.
Source: RedHat Stack

Who is Testing Your Cloud?

Co-Authored with Dan Sheppard, Product Manager, Rackspace
 
With test driven development, continuous integration/continuous deployment and devops practices now the norm, most organizations understand the importance of testing their applications.
But what about the cloud those applications are going to live on? Too many companies miss this critical step, leading to gaps in their operations, which can result in production issues, API outages, problems when trying to upgrade, and general instability of the cloud.
It all begs the question: “Do you even test?”
At Rackspace, our industry-leading support teams use a proactive approach to operations, and that begins with detailed and comprehensive testing, so that not only your applications but also your cloud is ready for your production workload.
Critical Collaboration
For Rackspace Private Cloud Powered by Red Hat, we collaborate closely with Red Hat; we test the upstream OpenStack code as well as the open sourced projects we leverage for our deployment, such as Ceph and Red Hat OpenStack Platform Director. This is done in a variety of ways, like sharing test cases upstream with the community via Tempest, creating and tracking bugs, and creating bug fixes upstream.

The Rackspace and Red Hat teams also work together on larger scale and performance tests at the OpenStack Innovation Center, which we launched last year in conjunction with Intel to advance the capabilities of OpenStack.
Recent tests have included performance improvements in relation to offloading VXLAN onto network cards, scaled upgrade testing from Red Hat OpenStack Platform version 7 to version 8, and testing of scaled out Ceph deployments. Data from this testing will be made available to the community as the detailed results are analyzed.
Building on the upstream testing, the Rackspace operations team leverages Rally and Tempest to execute 1,300 test cases prior to handing the cloud over to the customer. This testing serves as a “1,300 point inspection” of the cloud, giving you confidence that your cloud is production ready; a report of this testing is handed over to you, along with a guide to help you get started with your new cloud. These test cases serve to validate and demonstrate the functionality of the OpenStack APIs, with specific scripts testing things such as (to name just a few; a simplified sketch follows the list below):

administration functions
creating instances and cinder volumes
creating software defined networks
testing keystone functions and user management
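To make the idea concrete, here is a deliberately simplified sketch of what one such functional check looks like when written against the OpenStack APIs with the openstacksdk Python library. It is illustrative only: the real inspection is driven by Tempest and Rally, and the cloud name in clouds.yaml is a hypothetical placeholder.

```python
# Simplified smoke-test sketch using openstacksdk (illustrative only; the
# production inspection is driven by Tempest and Rally, not hand-rolled code).
import openstack

conn = openstack.connect(cloud="my-cloud")  # hypothetical clouds.yaml entry

# Keystone check: confirm we can authenticate and obtain a token.
assert conn.authorize(), "Keystone authentication failed"

# Software-defined networking check: create and delete a throwaway network.
net = conn.network.create_network(name="smoke-test-net")
subnet = conn.network.create_subnet(
    network_id=net.id, ip_version=4, cidr="192.0.2.0/24")
conn.network.delete_subnet(subnet)
conn.network.delete_network(net)

# Cinder check: create a small volume, wait for it, then clean up.
vol = conn.block_storage.create_volume(size=1, name="smoke-test-vol")
conn.block_storage.wait_for_status(vol, status="available", wait=120)
conn.block_storage.delete_volume(vol)

print("Basic API smoke checks passed")
```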

Upgrades Made Easy
One of the key requirements for enterprises is the ability to upgrade software without impacting the business.
These upgrades have been challenging in OpenStack in the past, but thanks to the Rackspace/Red Hat collaboration, we can now make those upgrades with limited downtime to your guests on the Rackspace Private Cloud Powered by Red Hat.
To deliver this, the Rackspace team runs the latest version of OpenStack code through our lab and executes the 1,300 point inspection. When we are satisfied with that, we test upgrading our lab to the latest version and execute our 1,300 point test again, thus confirming that the new version of OpenStack meets your requirements and that the code is safe for your environment.
Our team doesn’t stop there.
To confirm that the code deploys properly to your cloud, our operations team executes a 500-script regression test at the start of a scheduled upgrade window. Then our team upgrades your cloud and executes the regression test again. The final step in the scheduled upgrade window is to compare the pre- and post-upgrade regression results to validate that the upgrade was successful.
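The comparison step can be pictured with a short sketch like the one below. It assumes, purely for illustration, that each regression run writes a JSON file mapping test names to pass/fail results; the file names and format are hypothetical, not the actual Rackspace tooling.

```python
# Hypothetical sketch of the pre-/post-upgrade comparison step. Assumes each
# regression run produces a JSON file of {"test_name": "pass" | "fail"}.
import json

def load_results(path):
    with open(path) as f:
        return json.load(f)

before = load_results("regression-pre-upgrade.json")
after = load_results("regression-post-upgrade.json")

# Tests that passed before the upgrade but not after are the regressions.
regressions = sorted(
    name for name, result in before.items()
    if result == "pass" and after.get(name) != "pass"
)

if regressions:
    print("Upgrade validation FAILED; regressed tests:")
    for name in regressions:
        print(" -", name)
else:
    print("Upgrade validation passed: no regressions detected.")
```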
Since the launch of Rackspace Private Cloud Powered by Red Hat, the Red Hat and Rackspace team has been working to refine that process by incorporating Red Hat’s Distributed Continuous Integration project.

Distributed Continuous Integration User Interface
Extended Testing
With Distributed Continuous Integration, Red Hat extends the testing process related to building Red Hat OpenStack Platform to Rackspace’s in-house environment. Instead of waiting for a general availability release of Red Hat OpenStack Platform to start testing Rackspace scenarios, pre-release versions are delivered and tested following a CI process. Test results are automatically shared with Red Hat’s experts, and together with Rackspace, new features are debugged and improved, taking the new scenarios into consideration.
Using DCI to test pre-released versions of Red Hat OpenStack Platform helps ensure we’re ready for the new general release just after launch. Why? Because we have been running incremental changes of the software in preparation for general availability.
DCI also helps existing Rackspace private cloud customers by allowing the Rackspace operations team to test code changes from Red Hat while they’re being developed, allowing us to shorten the feedback loop back to Red Hat engineering, and giving us a supported CI/CD environment for your cloud at a scale not possible without a considerable investment in talent and resources.
So, if you are one of the 81 percent of senior IT professionals leveraging or planning to leverage OpenStack, ask your team, “How do we test our OpenStack?” — then give Rackspace a call to talk about a better way.
 
 
 
Source: RedHat Stack

How connection tracking in Open vSwitch helps OpenStack performance

Written by Jiri Benc,  Senior Software Engineer, Networking Services, Linux kernel, and Open vSwitch
 
 
By introducing a connection tracking feature in Open vSwitch, thanks to the latest Linux kernel, we greatly simplified the maze of virtual network interfaces on OpenStack compute nodes and improved its networking performance. This feature will appear soon in Red Hat OpenStack Platform.
Introduction
It goes without question that in the modern world, we need firewalling to protect machines from hostile environments. Any non-trivial firewalling requires you to keep track of the connections to and from the machine. This is called “stateful firewalling”. Indeed, even such a basic rule as “don’t allow machines from the Internet to connect to the machine while allowing the machine itself to connect to servers on the Internet” requires a stateful firewall. This also applies to virtual machines. And obviously, any serious cloud platform needs such protection.

Stateful Firewall in OpenStack
It’s no surprise that OpenStack implements stateful firewalling for guest VMs; it’s the core of its Security Groups feature. Security Groups allow the hypervisor to protect virtual machines from unwanted traffic.
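As a concrete illustration, the policy described above (let the VM call out, only let SSH in) can be expressed as a security group through the Neutron API. The sketch below uses the openstacksdk Python library; the cloud entry and names are placeholders, and it is the conntrack machinery on the hypervisor that lets replies to outbound connections back in without an extra rule.

```python
# Sketch: the stateful policy from the text expressed as a Neutron security
# group via openstacksdk. Cloud entry and names are illustrative placeholders.
import openstack

conn = openstack.connect(cloud="my-cloud")

# A newly created security group already carries default egress rules, so the
# VM can initiate outbound connections to the Internet out of the box.
sg = conn.network.create_security_group(
    name="web-vm",
    description="allow all outbound, allow inbound SSH only")

# Ingress is opened only for SSH; other unsolicited traffic from the Internet
# is dropped. Replies to the VM's own outbound connections still get through,
# because the hypervisor firewall tracks connection state (conntrack).
conn.network.create_security_group_rule(
    security_group_id=sg.id, direction="ingress", ethertype="IPv4",
    protocol="tcp", port_range_min=22, port_range_max=22,
    remote_ip_prefix="0.0.0.0/0")
```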
As mentioned, for stateful firewalling the host (OpenStack node) needs to keep track of individual connections and be able to match packets to those connections. This is called connection tracking, or “conntrack”. Note that connections are a different concept from flows: connections are bidirectional and need to be established, while flows are unidirectional and stateless.
Let’s add Open vSwitch to the picture. Open vSwitch is an advanced programmable software switch. Neutron uses it for OpenStack networking – to connect virtual machines together and to create the overlay network connecting the nodes. (For completeness, there are other backends than Open vSwitch available; however, Open vSwitch offers the most features and performance due to its flexibility, and it’s considered the “main” backend by many.)
However, packet switching in the Open vSwitch datapath is based on flows and solely on flows. It has traditionally been stateless. Not a good situation when we need a stateful firewall.
Bending iptables to Our Will
There’s a way out of this. The Linux kernel contains a connection tracking module, and it can be used to implement a stateful firewall. However, these features had been available only to the Linux kernel firewall at the IP protocol layer (called “iptables”). And that’s a problem: Open vSwitch does not operate at the IP protocol layer (also called L3), it’s one layer below (called L2). In other words, not all packets processed by the kernel are subject to iptables processing. In order for a packet to be processed by iptables, it needs either to be destined to an IP address local to the host or routed by the host. Packets which are switched (either by the Linux bridge or by Open vSwitch) are not processed by iptables.
OpenStack needs the VMs to be on the same L2 segment, i.e. packets between them are switched. In order to still make use of iptables to implement a stateful firewall, OpenStack used a trick.
The Linux bridge (the traditional software switch included in the Linux kernel) contains its own filtering mechanism called ebtables. While connection tracking cannot be used from within ebtables, by setting the appropriate system config parameters it’s possible to call iptables chains from ebtables. Using this technique, it’s possible to make use of connection tracking even when doing L2 packet switching.
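The “appropriate system config parameters” are the bridge-netfilter sysctls, most importantly net.bridge.bridge-nf-call-iptables. A tiny sketch for checking that setting on a compute node is shown below; the path is the standard kernel sysctl location, and changing the value requires root and the br_netfilter module.

```python
# Sketch: checking whether bridged (L2-switched) traffic is handed to iptables.
# Reading the sysctl works for any user; changing it requires root privileges.
from pathlib import Path

SYSCTL = Path("/proc/sys/net/bridge/bridge-nf-call-iptables")

def bridged_traffic_hits_iptables() -> bool:
    """Return True when packets crossing a Linux bridge are passed to iptables."""
    return SYSCTL.read_text().strip() == "1"

if __name__ == "__main__":
    if bridged_traffic_hits_iptables():
        print("bridge-nf-call-iptables=1: iptables security group rules apply")
    else:
        print("bridge-nf-call-iptables=0: bridged packets bypass iptables")
```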
Now, the obvious question is where to put this on the OpenStack packet traversal path.
The heart of every OpenStack node is the so-called “integration bridge”, br-int. In a typical deployment, br-int is implemented using Open vSwitch. It’s responsible for directing packets between VMs, tunneling them between nodes, and some other tasks. Thus, every VM is connected to an integration bridge.
The stateful firewall needs to be inserted between the VM and the integration bridge. We want to make use of iptables, which means inserting a Linux bridge between the VM and the integration bridge. That bridge needs to have the correct settings applied to call iptables and iptables rules need to be populated to utilize conntrack and do the necessary firewalling.
How It Looks
Looking at the picture below, let&8217;s examine how a packet from VM to VM traverses the network stack.
The first VM is connected to the host through the tap1 interface. A packet coming out of the VM is then directed to the Linux bridge qbr1. On that bridge, ebtables calls into iptables, where the incoming packet is matched according to the configured rules. If the packet is approved, it passes the bridge and is sent out to the second interface connected to the bridge. That’s qvb1, which is one side of a veth pair.
A veth pair is a pair of interfaces that are internally connected to each other. Whatever is sent to one of the interfaces is received by the other one, and vice versa. Why is the veth pair needed here? Because we need something that can interconnect the Linux bridge and the Open vSwitch integration bridge.
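If you want to experiment with this on a lab machine, a veth pair can be created in a few lines with the pyroute2 Python library. The interface names below mirror the qvb1/qvo1 naming from the diagram and are purely illustrative; root privileges are required.

```python
# Sketch: creating a veth pair like qvb1/qvo1 from the diagram with pyroute2.
# Requires root; interface names are illustrative.
from pyroute2 import IPRoute

ipr = IPRoute()

# A single netlink call creates both ends of the pair.
ipr.link("add", ifname="qvb1", peer="qvo1", kind="veth")

# Bring both ends up; whatever enters qvb1 comes out of qvo1 and vice versa,
# which is what lets a Linux bridge and br-int be stitched together.
for name in ("qvb1", "qvo1"):
    idx = ipr.link_lookup(ifname=name)[0]
    ipr.link("set", index=idx, state="up")

ipr.close()
```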
The packet has now reached br-int and is directed to the second VM. It goes out of br-int to qvo2, then through qvb2 it reaches the bridge qbr2. The packet goes through ebtables and iptables and finally reaches tap2, which is the target VM.
This is obviously very complex. All those bridges and interfaces add cost in extra CPU processing and extra latency. The performance suffers.
Connection Tracking in Open vSwitch to the Rescue
All of this can be dramatically simplified. If only we could include the connection tracking directly in Open vSwitch…
And that’s exactly what happened. Recently, the connection tracking code in the kernel was decoupled from iptables, and Open vSwitch got support for conntrack. Now it’s possible to match not only on flows but also on connections. Jakub Libosvar (Red Hat) made use of this new feature in Neutron.
Now, VMs can connect directly to the integration bridge, and the stateful firewall is implemented using Open vSwitch rules alone.
Let’s examine the new, improved situation in the second picture below.

A packet coming out of the first VM (tap1) is directed to br-int. It’s examined using the configured rules and either dropped or directly output to the second VM (tap2).
This substantially saves packet processing costs and thus increases performance. The following overhead was eliminated:

1. Packet enqueueing on the veth pair: A packet sent to a veth endpoint is put into a queue, then dequeued and processed later.
2. Bridge processing on the per-VM bridge: Each packet traversing the bridge is subject to FDB (forwarding database) processing.
3. ebtables overhead: We measured that just enabling ebtables, without any rules configured, has a performance cost on bridge throughput. Generally, ebtables is considered obsolete and doesn’t receive much work, especially not performance work.
4. iptables overhead: There is no concept of per-interface rules in iptables; iptables rules are global. This means that for every packet, the incoming interface needs to be checked and execution of the rules branched to the set of rules appropriate for that particular interface. This means a linear search using interface name matches, which is very costly, especially with a high number of VMs.

In contrast, by using Open vSwitch conntrack, items 1–3 are gone instantly. Open vSwitch has only global rules, so we still need to match on the incoming interface in Open vSwitch, but unlike iptables, the lookup is done using the port number (not the textual interface name) and, more importantly, using a hash table. The overhead in item 4 is thus completely eliminated, too.
The only remaining overhead is of the firewall rules themselves.
In Summary
Without Open vSwitch conntrack:

A Linux bridge needs to be inserted between a VM and the integration bridge.
This bridge is connected to the integration bridge by a veth pair.
Packets traversing the bridge are processed by ebtables and iptables, implementing the stateful firewall.
There’s a substantial performance penalty caused by veth, bridge, ebtables and iptables overhead.

With Open vSwitch conntrack:

VMs are connected directly to the integration bridge.
The stateful firewall is implemented directly at the integration bridge using hash tables.

Images were captured on a real system using plotnetcfg and simplified to better illustrate the points of this article.
Source: RedHat Stack

Introduction to Red Hat OpenStack Platform Director

Those familiar with OpenStack already know that deployment has historically been a bit challenging. That’s mainly because deployment includes a lot more than just getting the software installed – it’s about architecting your platform to use existing infrastructure as well as planning for future scalability and flexibility. OpenStack is designed to be a massively scalable platform, with distributed components on a shared message bus and database backend. For most deployments, this distributed architecture consists of Controller nodes for cluster management, resource orchestration, and networking services, Compute nodes where the virtual machines (the workloads) are executed, and Storage nodes where persistent storage is managed.
The Red Hat recommended architecture for fully operational OpenStack clouds includes predefined and configurable roles that are robust, resilient, ready to scale, and capable of integrating with a wide variety of existing 3rd party technologies. We do this by leveraging the logic embedded in Red Hat OpenStack Platform Director (based on the upstream TripleO project).
With Director, you’ll use OpenStack language to create a truly Software Defined Data Center. You’ll use Ironic drivers for your initial bootstrapping of servers, and Neutron networking to define management IPs and provisioning networks. You will use Heat to document the setup of your server room, and Nova to monitor the status of your control nodes. Because Director comes with pre-defined scenarios optimized from our 20 years of Linux know-how and best practices, you will also learn how OpenStack is configured out of the box for scalability, performance, and resilience.
Why do kids in primary school learn multiplication tables when we all have calculators? Why should you learn how to use OpenStack in order to install OpenStack? Mastering these pieces is a good thing for your IT department and your own career, because they provide a solid foundation for your organization’s path to a Software Defined Data Center. Eventually, you’ll have all your Data Center configuration in text files stored on a Git repository or on a USB drive that you can easily replicate within another data center.
In a series of coming blog posts, we’ll explain how Director has been built to accommodate the business requirements and the challenges of deploying OpenStack and its long-term management. If you are really impatient, remember that we publish all of our documentation in the Red Hat OpenStack Platform documentation portal (link to version 8).

Lifecycle of your OpenStack cloud
Director is defined as a lifecycle management platform for OpenStack. It has been designed from the ground up to bridge the gap between the planning and design (day-0), the installation tasks themselves (day-1), and the ongoing operation, administration and management of the environment (day-2).

Firstly, the pre-deployment planning stage (day-0). Director provides configuration files to define the target architecture, including networking and storage topologies, OpenStack service parameters, integrations with third party plugins, and so on – all the items required to suit the needs of an organisation. It also verifies that the target hardware nodes are ready to be deployed and that their performance is equivalent (we call that “black-sheep detection”).
Secondly, the deployment stage (day-1). This is where the bulk of the Director functionality is executed. One of the most important steps is verifying that the proposed configuration is sane: there’s no point in trying to deploy a configuration that pre-flight validation checks already tell us will fail. Assuming that the configuration is valid, Director takes care of the end-to-end orchestration of the deployment, including hardware preparation, software deployment and, once everything is up and running, configuring the OpenStack environment to perform as expected.
Lastly, the long-run operations stage (day-2). Red Hat has listened to our OpenStack customers and their operations teams, and designed Director accordingly. It can check the health of an environment and perform changes, such as adding or replacing OpenStack nodes, applying minor updates (security updates), and automatically upgrading between major versions, for example from Kilo to Liberty.

Despite being a relatively new offering from Red Hat, Director has strong technology foundations: a convergence of many years of upstream engineering work, established technology for Linux and Cloud administration, and newer DevOps automation tools. This has allowed us to create a powerful, best-of-breed deployment tool that’s in line with the overall direction of the OpenStack project (with TripleO), as well as the OPNFV installation projects (with Arno).
Feature Overview
When initially creating Red Hat OpenStack Platform Director, we improved all the major TripleO components and extended them to perform tasks that go beyond just the deployment. Currently, Director is able to perform the following tasks:

Deploy a management node (called the undercloud) as the bootstrap OpenStack cloud. From there, we define the organisation’s production-use overcloud, combining our reference configurations and user-provided customisations. Director provides command line utilities (and a graphical web interface) as a shortcut to access the undercloud OpenStack RESTful APIs.
The undercloud interacts with bare metal hardware via Ironic (to do PXE boot and power management), which relies on an extensive array of supported drivers. Red Hat collaborates with vendors so that their hardware will be compatible with Ironic, giving customers flexibility in the hardware platforms they choose to consume.
During overcloud deployment, Director can inspect the hardware and automatically assign roles to specific nodes, so nodes are chosen based on their system specification and performance profile. This vastly simplifies the administrative overhead, especially with large scale deployments.
Director ships with a number of validation tools to verify that any user-provided templates are correct (like the networking files), which are also useful when performing updates or upgrades. For that, we leverage Ansible in the upgrade sanity check scripts. Once deployed, you can automatically test the deployed overcloud using Director’s Tempest toolset. Tempest verifies, with hundreds of end-to-end tests, that the overcloud is working as expected and conforms to the upstream API specification. Red Hat is committed to shipping the standard API specification and to not breaking update and upgrade paths for customers, so providing an automated mechanism for checking compatibility is of paramount importance.
In terms of the deployment architecture itself, Red Hat has built a highly available reference architecture containing our recommended practices for availability, resiliency, and scalability. The default Heat templates as shipped within Director have been engineered with this reference architecture in mind, and therefore a customer deploying OpenStack with Director can leverage our extensive work with customers and partners to provide maximum stability, reliability, and security features for their platform. For instance, Director can deploy SSL/TLS based OpenStack endpoints for better security via encrypted communications.
The majority of our production customers are using Ceph with OpenStack. That’s why Ceph is the default storage backend within Director, and automatically deploys Ceph monitors on controller nodes, and Ceph OSDs on dedicated storage nodes. Alternatively, it can connect the OpenStack installation to an existing Ceph cluster. Director supports a wide variety of Ceph configurations, all based on our recommended best practices.
Last, but not least, the overcloud networks defined within Director can now be configured as either IPv4 or IPv6. Feel free to check our OpenStack IPv6 networking guide. The exceptions are the provisioning network (PXE) and the VXLAN/GRE tunnel endpoints, which can only be IPv4 at this stage. Dual stack IPv4 and IPv6 networking is available only for non-infrastructure networks, for example, tenant, provider, and external networks.
For 3rd-party plugin support, our partners are working with the upstream OpenStack TripleO community to add their components, like other SDN or SDS solutions. The Red Hat Partner Program of certified extensions allows our customers to enable and automatically install those plugins via Director (for more information, visit our documentation on Partner integrations with Director).

In our next post, we’ll explain the components of Director (TripleO) in further detail, how it helps you deploy and manage the Red Hat OpenStack Platform, and take a deep dive into how they work together. This will help you understand what is, in our opinion, the most important feature of all: Automated OpenStack Updates and Upgrades. Stay tuned!
 
 
Source: RedHat Stack

TripleO (Director) Components in Detail

In our previous post we introduced Red Hat OpenStack Platform Director. We showed how at the heart of Director is TripleO, short for “OpenStack on OpenStack”. TripleO is an OpenStack project that aims to utilise OpenStack itself as the foundation for deploying OpenStack. To clarify, TripleO advocates the use of native OpenStack components, and their respective APIs, to configure, deploy, and manage OpenStack environments.
The major benefit of utilising these existing APIs with Director is that they’re well documented, they go through extensive integration testing upstream, and they are the most mature components in OpenStack. For those already familiar with the way that OpenStack works, it’s a lot easier to understand how TripleO (and therefore, Director) works. Feature enhancements, security patches, and bug fixes are therefore automatically inherited into Director, without us having to play catch-up with the community.
With TripleO, we refer to two clouds. The first to consider is the undercloud: the command and control cloud, a smaller OpenStack environment whose sole purpose is to bootstrap a larger production cloud. That larger cloud is known as the overcloud, and it is where tenants and their respective workloads reside. Director is sometimes treated as synonymous with the undercloud; Director bootstraps the undercloud OpenStack deployment and provides the necessary tooling to deploy an overcloud.

Ironic+Nova+Glance: baremetal management of overcloud nodes
For proper baremetal management during a deployment, Nova and Ironic need to be in perfect coordination. Nova is responsible for the orchestration, deployment, and lifecycle management of compute resources, for example, virtual machines. Nova relies on a set of plugins and drivers to establish compute resources requested by a tenant, such as the utilisation of the KVM hypervisor.
Ironic started life as an alternative Nova “baremetal driver”. Now, Ironic is its own OpenStack project and complements Nova with its own respective API and command line utilities. Once the overcloud is deployed, Ironic can also be offered to customers that want to provide baremetal nodes to their tenants, using dedicated hardware outside of Nova’s compute pools. Here, in Director’s context, Ironic is a core component of the undercloud, controlling and deploying the physical nodes that are required for the overcloud deployment.
But first, Director has to register the nodes with Ironic. One has to catalog the IPMI (out-of-band management) details: its IP address, username and password. There are also vendor-specific drivers, for example HP iLO, Cisco UCS, and Dell DRAC. Ironic will manage the power state of the bare metal nodes used for the overcloud deployment, as well as the deployment of the operating system (via a PXE-bootable installer image).
The disk image used during hardware bootstrap is taken from the undercloud Glance image service. Red Hat provides the required images to be deployed in the overcloud nodes. These disk images typically contain Red Hat Enterprise Linux and all OpenStack components, which minimises any post-deployment software installation. They can, of course, be customised further prior to upload into Glance. For example, customers often want to integrate additional software or configurations as per their requirements.
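At the API level, registering one such node with the undercloud’s Ironic boils down to a handful of calls like the sketch below, written with the openstacksdk Python library. In practice Director drives this step for you from a node inventory file; the IPMI credentials, MAC address and names here are placeholders.

```python
# Sketch: registering a single baremetal node with the undercloud's Ironic via
# openstacksdk. Director normally does this from its node inventory; every
# value below is a placeholder.
import openstack

undercloud = openstack.connect(cloud="undercloud")  # hypothetical cloud entry

node = undercloud.baremetal.create_node(
    name="overcloud-compute-0",
    driver="ipmi",
    driver_info={
        "ipmi_address": "192.0.2.10",
        "ipmi_username": "admin",
        "ipmi_password": "secret",
    },
)

# Register the NIC that will PXE boot, so the provisioning DHCP can find it.
undercloud.baremetal.create_port(node_id=node.id, address="52:54:00:aa:bb:cc")

# Hand the node over to Ironic for management, then make it available
# ("provide") so it can be picked for the overcloud deployment.
undercloud.baremetal.set_node_provision_state(node, "manage")
undercloud.baremetal.wait_for_nodes_provision_state([node], "manageable")
undercloud.baremetal.set_node_provision_state(node, "provide")
```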
Neutron: network management of the overcloud
As you may already know, Neutron provides network access to tenants via a self-service interface to define networks, ports, and IP addresses that can be attached to instances. It also provides supporting services for booting instances such as DHCP, DNS, and routing. Within Director, we firstly use Neutron as an API for defining all overcloud networks, any required VLAN isolation, and associated IP addresses for the nodes (IP address management).
Secondly, we use Neutron in the undercloud as a mechanism for managing the network provisioning of the overcloud nodes during deployment. Neutron will detect booting nodes and instruct them to PXE boot via a special DHCP offer, and then Ironic takes over responsibility for image deployment. Once the image is deployed, the Ironic deployment image reboots the machine to boot from its hard drive – the first time the node boots by itself. The node then executes os-net-config (from the TripleO project) to statically configure the operating system with its IP address. Although that IP is managed by the undercloud’s Neutron DHCP server, it is actually set as a static IP in the overcloud node’s interface configuration. This allows for configuration of VLAN tagging, LACP or failover bonding, MTU settings and other advanced parameters from the Director network configuration. Visit this tutorial for more information on os-net-config.
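Under the hood, the provisioning network that the undercloud’s Neutron manages is an ordinary flat provider network with a DHCP allocation range. A hedged sketch of that definition through the API is below; “ctlplane” follows the TripleO naming convention, while the addressing is a placeholder.

```python
# Sketch: the flat provider network and DHCP range the undercloud's Neutron
# uses for provisioning overcloud nodes. Addresses are placeholders; the
# "ctlplane" name follows TripleO convention.
import openstack

undercloud = openstack.connect(cloud="undercloud")

net = undercloud.network.create_network(
    name="ctlplane",
    provider_network_type="flat",
    provider_physical_network="ctlplane")

undercloud.network.create_subnet(
    network_id=net.id,
    name="ctlplane-subnet",
    ip_version=4,
    cidr="192.0.2.0/24",
    gateway_ip="192.0.2.1",
    enable_dhcp=True,
    allocation_pools=[{"start": "192.0.2.100", "end": "192.0.2.200"}])
```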
Heat: orchestrating the overcloud deployment steps
The most important component in Director is Heat, which is OpenStack’s generic orchestration engine. Users define stack templates using plain YAML text documents, listing the required resources (for example, instances, networks, storage volumes) along with a set of parameters for configuration. Heat deploys the resources based on a given dependency chain, sorting out which resources need to be built before the others. Heat can then monitor such resources for availability, and scale them out where necessary. These templates enable application stacks to become portable and to achieve repeatability and predictability.
Heat is used extensively within Director as the core orchestration engine for overcloud deployment. Heat takes care of the provisioning and management of any required resources, including the physical servers and networks, and the deployment and configuration of the dozens of OpenStack software components. Director’s Heat stack templates describe the overcloud environment in intimate detail, including quantities and any necessary configuration parameters. This also makes the templates versionable and programmatically understood – a truly Software Defined Infrastructure.
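To make the declare-and-deploy model concrete, here is a toy example: a minimal Heat template that boots one server, pushed through the orchestration API with the openstacksdk Python library. It bears no resemblance to the full tripleo-heat-templates tree; the cloud entry, image and flavor names are placeholders.

```python
# Sketch: Heat's declare-resources-then-deploy model, driven via openstacksdk.
# The toy template boots a single server; real Director templates describe the
# whole overcloud. All names and parameter values are placeholders.
import openstack

TEMPLATE = """
heat_template_version: 2015-04-30
parameters:
  image:
    type: string
  flavor:
    type: string
resources:
  my_server:
    type: OS::Nova::Server
    properties:
      image: {get_param: image}
      flavor: {get_param: flavor}
"""

with open("demo-stack.yaml", "w") as f:
    f.write(TEMPLATE)

conn = openstack.connect(cloud="undercloud")  # hypothetical clouds.yaml entry

# Heat resolves the resource dependency order, creates everything, and reports
# status; wait=True blocks until the stack finishes (or fails).
stack = conn.create_stack(
    "demo-stack",
    template_file="demo-stack.yaml",
    wait=True,
    image="cirros",    # placeholder parameter values
    flavor="m1.tiny",
)
print("Stack created:", stack.id)
```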
Deployment templates: customizable reference architectures
Whilst not an OpenStack service, one of the most important components to look at is the actual set of templates that we use for deployment with Heat. The templates come from the upstream TripleO community in a sub-project known as tripleo-heat-templates (read an introduction here). The tripleo-heat-templates repository comprises a directory of Heat templates and the puppet manifests and scripts required to perform certain advanced tasks.
Red Hat relies on these templates with Director and works heavily to enhance them to provide additional features that customers request. This includes working with certified partners to confirm that their value-add technology can be automatically enabled via Director, thus minimising any post-deployment effort (for more information, visit our Partners’ instructions for integrating with Director). The default templates will stand up a vanilla Red Hat OpenStack Platform environment, with all default parameters and backends (KVM, OVS, LVM or Ceph if enabled, etc.).
Director offers customers the ability to easily set their own configuration by simply overriding the defaults in their own templates. It also provides hooks in the default templates to easily call additional code that organisations may want to run; this could include installing and configuring additional software, making non-standard configuration changes that the templates aren’t aware of, or enabling a plugin not supported by Director.
 
In our next blog post we’ll explain the Reference Architecture that Director provides out of the box, and how to plan for a successful deployment.
Source: RedHat Stack

Have You Voted for OpenStack Summit Barcelona Proposals Yet?

As you know, it’s that time again: voting is open for proposals for the OpenStack Summit in Barcelona this coming October 25-28, 2016. Although the schedule is ultimately determined by a group of subject matter experts known as the “track chairs”, it’s important to vote on sessions so they have an idea of what you want to see.
And not just the sessions that you get bombarded with over social media, either. We’d like to encourage you to take a look at as many of the sessions listed at the summit voting site as you can – or at least, the ones in the tracks that interest you.
Meanwhile, we’re proud to present you with a list of proposals submitted by our experts. If they interest you, please use the search and vote for them.
And even if they don’t interest you, please go to the site and look over the proposals that are there; your opinion matters! And do it before the deadline—August 8 at 11:59 p.m. PT.
Architectural Decisions

Big Data – Big Deal? (Christian Huebner, Mirantis; Thomas Lichtenstein, Mirantis)
The Final Word on Availability Zones (Ernest de Leon, Mirantis)
Identity Management at Scale (Florin Stingaciu, Mirantis; Katarina Valalikova, Evolveum)
OpenStack: You Can Take it to the Bank! (Ivan Krovyakov, Mirantis; Vsevolod Pluzhnikov, Sberbank)
When Not to Share – a Global Cloud Model (Ernest de Leon, Mirantis; Craig Anderson, Mirantis)
Using Monitoring to Guide Migration (Frank Karlsberger, Dynatrace; John Jainschigg, Mirantis)
Big Brother is Watching You! Or How to Audit the Cloud (Oleksii Kolodiazhnyi, Mirantis)

Case Studies

Case Study: Enabling CI/CD for a Future of Connected Cars at One of the World’s Largest Automakers (Adrian Steer, Mirantis; Praveen Yalagandula, Avi Networks)
How Four Superusers Measure the Business Value of their OpenStack Cloud (Kamesh Pemmaraju, Mirantis; Amar Kapadia, Mirantis)
Sharing Resources with OpenStack (Atze de Vries, Naturalis)
Using RUP, XP, and Kanban For OpenStack Development (Bruce Basil Mathews, Mirantis)
No Team? No problem! (How a Single Admin Manages 70 OpenStack Nodes) (Atze de Vries, Naturalis)

Cloud App Development

App Delivery to App Catalog: Deploying Direct to Murano for More Consumable Solutions (Nick Gulrajani, Mirantis)
Building Bridges to Fill Gaps between AWS and OpenStack (Bruce Basil Mathews, Mirantis; Jun Park, Adobe)
Application Catalogs: Understanding Glare, Murano, and Community App Catalog (Alexander Tivelkov, Murano; Kirill Zaitsev, Murano)
Applications for OpenStack, Developing, and Consuming Using Community Application Catalog (Igor Marnat, Mirantis; Christopher Aedo, IBM)
Openstack Murano and Puppet: An easy way to bring your manifests into the cloud (Alexey Khivin, Mirantis; Sergey Kraynev, Mirantis)

Cloud Models & Economics

Hybrid Private Cloud – The Only Way to Cloud (Ernest de Leon, Mirantis; Craig Anderson, Mirantis)
Do OpenStack Private Clouds Provide Cost Savings Over Public Clouds (Nicolas Brousse, TubeMogul; Christian Carrasco, Cloud Advisor; Peter Lopez, Technicolor; Amar Kapadia, Mirantis)

Community Building

How We Built Fuel Community: Challenges of Big Tent Projects (Evgeniya Schuhmacher, Mirantis)
100% Organic Talkshow Tips (John Jainschigg, Mirantis; Nick Chase, Mirantis)

Developers/Containers

Container Orchestration Tapas: Kubernetes, Magnum, Swarm on OpenStack (Ayrat Khayretdinov, CloudOps; Ihor Dvoretskyi, Mirantis; Stacy Véronneau, CloudOps)

Evaluating OpenStack

Using OpenStack Personas to Build an Effective Cloud Strategy (Svetlana Karslioglu, Mirantis; Dmitriy Novakovskiy, Mirantis)

Field Experiences

VW Car Configurator: Case Study in Running Cloud-Friendly Applications on OpenStack (Ricardo Ameixca, Volkswagen AG; Craig Peters, Mirantis)

How To & Best Practices

Day 2 Operations: How to Constantly Improve the Upgrade Process (Gabriel Capisizu, Symatec; Mykyta Gubenko, Mirantis; Alexander Sakhnov, Mirantis)
Building a Fortress: The Easiest Way to Get Full Role-based Access Control in Openstack Keystone (Kseniya Tychkova, Mirantis)
Keystone and WebSSO: A Unified Login System for OpenStack and other web services (Kseniya Tychkova, Mirantis)
m1.Boaty.McBoatface: The joys of flavor planning by popular vote (Craig Anderson, Mirantis; Ben Silverman, OnX Enterprise Solutions)
Great Cloud Migrations: Do we need them? What options do we have? (Evgeniya Shuhmacher, Mirantis; Ayrat Khayretdinov, CloudOps; Roman Verchikov, Mirantis; Octavian Ciuhandu, Cloudbase Solutions; Hashir Abdi, Linux Integration Services)
How to be an OpenStack Operator and still sleep (Raul Flores, Mirantis)
Let’s Make Live Migrations Great Again (Timofey Durakov, Mirantis; Volodymyr Nykytiuk, Mirantis)
Navigating in OpenStack: sources of truth about everything (Evgeniya Schuhmacher, Mirantis; Ilya Stechkin, Mirantis)
How to calculate the transition between two states of your OpenStack cluster (Alexey Shtokolov, Mirantis; Vladimir Kuklin, Mirantis)

IT Strategy

Playing with the Slinky: Elastic Capacity Planning for OpenStack Clouds (Ben Silverman, OnX Enterprise Solutions)
Is your cloud scaling forecast a bit foggy? (Christian Huebner, Mirantis; Colin Burns, Mirantis)
Enterprise IT in the Land of the Ephemeral Cow (Chris Bingham, Mirantis)

Networking

Neutron Power Vacuum (Todd Bowman, Mirantis)
Is OpenStack Neutron production ready for large scale deployments? (Satish Salagame, Mirantis; Oleg Bondarev, Mirantis; Elena Ezhova, Mirantis)
The race conditions of Neutron L3 HA’s scheduler under scale performance (Kevin Benton, Mirantis; John Schwarz, Red Hat; Ann Taraday)

Operations War Stories

One control plane to rule them all – Managing physical and virtual infrastructure (Alexander Sakhnov, Mirantis; Mykyta Gubenko, Mirantis)
Making OpenStack your own (Atze de Vries, Naturalis)

Ops Tools

A Monitoring Architecture for OpenStack on Kubernetes (Eric Lemoine, Mirantis; Patrick Petit, Mirantis; Olivier Bourdon, Mirantis)
Sleep better at night: Openstack Cloud Auto-Healing (Mykyta Gubenko, Mirantis; Alexander Sakhnov, Mirantis)
Augmented Reality for OpenStack (John Jainschigg, Mirantis)

Products & Services

Horizon UI Modifications in the Context of Short-lived, Complex, Interactive, Network centric Stacks (Jeff Johnson, Riverbed)

Project Updates

What’s new in OpenStack File Share Services (Manila) (Akshai Parthasarathy, NetApp; Gregory Elkinbard, Mirantis)
Glare – unified binary repository for OpenStack (Mike Fedosin, Mirantis; Kairat Kushaev, Mirantis)

Security

Maintaining Privacy and Security on Your OpenStack Cloud (Bruce Basil Mathews, Mirantis; Peter Lopez, Technicolor)
Digital Forensics vs. OpenStack (Panel) (Alexander Adamov, Mirantis; Johan Christenson, City Network; Anders Carlsson, Blekinge Institute of Technology; Mariano Cunietti, Enter.it)

Storage

OpenStack’s Storage Performance: Can you handle the truth? (Paul Roberts, Mirantis; Ryan Day, Mirantis)
Converge and Conquer: OpenStack Converged Compute with Ceph SDS (Jacob Caspi, AT&T; Kiko Reis, Canonical; Christian Huebner, Mirantis)

Telecom/NFV Operations

Successful rapid NFVi deployment – take 2 (Vincent Jardin, 6Wind; Irina Povolotskaya, Mirantis)
Accelerating Enterprise Cloud Adoption with Murano: AT&T Telco Use Cases and Lessons Learned (Gnanavelkandan Kathirvel, AT&T; Craig Peters, Mirantis)

Upstream Development

YAQL, The heart behind MuranoPL, Mistral Workflows, Fuel custom graph and Heat 2016-10-14 format (Kirill Zaitsev, Mirantis; Dmitrii Dovbii, Mirantis; Stan Lagun, Mirantis)
Oslo.Messaging ZeroMQ driver update and messaging drivers benchmarking (Oleksii Zamiatin, Mirantis; Dmitry Mescheryakov, Mirantis)
Shipping OpenStack Fast and Furious (Thomas Goirand, Mirantis; Haikel Guèmar, Red Hat)
Switching to oslo.db EngineFacade in OpenStack projects (Pavel Kholkin, Mirantis; Sergey Nikitin, Mirantis)
Neutron L3 Flavors (Kevin Benton, Mirantis; Armando Migliaccio)

The post Have You Voted for OpenStack Summit Barcelona Proposals Yet? appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis

OpenStack:Unlocked podcast Ep 19: Sumeet Singh, AppFormix


Those of you who have been watching these podcasts from the start may recognise this podcast’s guest – today we are talking to Sumeet Singh, the CEO of AppFormix, about cloud management and optimization, the use of AI for these applications, and the OpenStack Days Silicon Valley event next week.
AppFormix’s software
We started off by getting Sumeet to quickly describe the software that AppFormix has developed. Traditionally, most people operate their cloud through a process of break and fix. AppFormix, however, has developed software that optimizes your cloud use by layering on top of your cloud and constantly monitoring and analyzing it. While the product presents itself as a self-service experience, it can be switched to an automated mode once developers have learned to trust it.
AppFormix and Rackspace
When we last spoke to Sumeet, AppFormix was just launching their products and they had picked up some great clients. Sumeet said that one particularly great client they had worked with liked them enough to recommend them to Rackspace. This resulted in Rackspace calling them and ultimately resulted in a partnership between the two companies, whereby all of Rackspace’s customers have their accounts optimized through the AppFormix software.
Points of optimization
Next up, Sumeet discussed how applications are constantly changing, and how the demands being placed on infrastructure are changing with them. As new applications are developed, infrastructure needs to become more and more automated and needs to work in real time.
Sumeet described two types of optimization. One is top-down: if you want to deploy an application, you use an algorithm or load balancing to figure out how many pods that application needs. What AppFormix does, he stated, is bottom-up: optimizing and orchestrating the resource layer – figuring out how to deploy pods to nodes, where to place them, and then monitoring them to see how they use resources and to ensure nodes aren’t physically damaged.
The Nirvana Outcome
Sumeet described to us what he considered to be a Nirvana outcome – the best DevOps experience. He said that fundamentally, IT teams want to get out of the way of the developers. For this, you want an infrastructure that is easy to use, on which developers can easily deploy their apps, and that runs reliably. In addition, you want to enable IT teams to run in a more efficient way and give them a self-service way to understand the performance of their application and take action on it themselves.
Sumeet also said that AppFormix allows developers to do just this: they can consume the analysis in real time, seeing how loaded a pod is and how many transactions are being run on it, in order to make better decisions.
Two Separate Futures
John suggested we were heading in two separate directions: one where applications don’t care about infrastructure at all, and, another where applications very much do care about infrastructure and make decisions based on software like AppFormix.
Sumeet agreed with this but said he felt the two could co-exist. For people who use entirely public clouds it’s a binary decision, but if you operate private infrastructure there’s an extra layer you can optimize.
The customer experience
We also quickly discussed how AppFormix is an example of just how important a great product and great customer experience are, as it was through a client of theirs who was a Rackspace customer that they got off the ground. Sumeet discussed how he hadn’t known that the original customer was a Rackspace customer and didn’t realize for a long time how the situation had evolved, but agreed that your product is really important; OpenStack is driven by the community. If the community doesn’t love your product, you’re sunk.
The upcoming OpenStack Days Silicon Valley conference
All of us discussed the OpenStack Days Silicon Valley taking place next week at the Computer History Museum in Mountain View, CA. It’s an exciting event and tickets are still available, but the event does sell out every year – so get in quick.
Sumeet suggested that as an incentive to come (in addition to great food from the food trucks), he’ll be leading a discussion with Craig McLuckie from Google and Brandon Phillips from CoreOS about OpenStack and Kubernetes. If you have questions you’d like Sumeet to ask, tweet them @mirantisIT or @NickChase.
Thank you for listening and reading!
To find out more about Appformix you can visit their website http://www.appformix.com or email Sumeet directly at sumeet at appformix dot com.
If you have any comments about today’s podcast or have suggestions about things you’d like to hear about on future podcast episodes, send your suggestions through – we’d love to hear from you. You can contact us via news at openstacknow dot com,  or tweet us via @mirantisIT or @NickChase.
The post OpenStack:Unlocked podcast Ep 19: Sumeet Singh, AppFormix appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis

Mirantis joins Google, Intel to flip OpenStack container landscape


While OpenStack has been working on orchestrating containers for the past year or so, this past week Mirantis announced a joint initiative with Google and Intel to flip the script, and run OpenStack as containers, orchestrating its control plane with the Kubernetes container orchestration tool. The announcement heralded a major change in the way OpenStack would be handled in both Mirantis’ version of OpenStack, and in the OpenStack Fuel deployment tool.
The idea is to provide a way for enterprises to take advantage of a robust infrastructure management pattern that has been proven by Google at a massive scale. The resulting software will give users fine-grained control over the placement of services used for the OpenStack control plane and the ability to do rolling updates of OpenStack, make the OpenStack control plane self-healing and more resilient, and smooth the path for creating microservices-based applications on OpenStack. “If OpenStack users ever wanted an easy on-ramp to working in cloud the Google way,” TechWeekEurope wrote, “they just got it.”
“A poorly-kept secret in the industry is that OpenStack is not itself cloud-native, so users cannot roll out real-time updates and patches,” Chris Clason, CTO, Cloud Platform Engineering Group, told VMBlog. “The changes we plan to make in Fuel will turn OpenStack into a true microservices application, bridging the gap between legacy infrastructure software and the next generation of application development.”
The changes will do more than bridge that gap, however. Running OpenStack on a standard container fabric makes it possible to scale much more easily, and not just in an individual deployment. Because enterprises no longer have to worry about deployment snowflakes, adoption can move more quickly through the use of standard logistics tools such as Kubernetes. This way, the applications and workloads running on OpenStack become the snowflakes – they’re all different, but running in a standard way. Solving the logistical problem will expand the market for OpenStack and cloud to include every workload that can run this way.
Mirantis, Google, and Intel all have a vested interest in making this project successful. Mirantis has always been a “pure play” OpenStack vendor, and while it joined the Cloud Native Computing Foundation and has plans to become a top contributor to Kubernetes within a year, those contributions are aimed at enhancing the success of OpenStack in order to enable its core business.
That said, while the new code will be available for other companies to incorporate into their own distributions, “we’re not going to do anything in the community that will effectively preclude anybody who doesn’t want to use Kubernetes, from not using it,” Renski told DatacenterKnowledge. “That’s simply not possible to do.”
Which isn’t to say that everyone in the community will be happy with Mirantis’ new direction. “We’ll probably piss off a few people in the community because it’s not aligned with some projects, such as Magnum,” Boris Renski, Mirantis co-founder and CMO, told Light Reading. “But we nonetheless feel this is something that needs to be done.”
“OpenStack was built by people with marginal experience running large, distributed systems. It’s a good tool, and the day one problem of installing OpenStack has been solved,” Renski told Light Reading. “But when it comes to running it, installing incremental patches, upgrades and restarting at scale, there is no one way to do it. The operations problem is acute, and if it’s not solved it can throw OpenStack into oblivion.”
The publication also noted that “Intel gets to sell chips no matter where the cloud lands, but thinks that any organization that has between 1,200 and 1,500 servers should be building their own private clouds, and says that at that scale they can operate efficiently enough to justify the investment in systems and datacenters. But as Renski put it, and we would concur, the real issue here might be that Intel doesn’t want to end up with only a handful of customers who have all the buying power. It would rather have a dozen customers that command 20 percent of Xeon chip revenues and another 50, that make up the other 80 percent. This will minimize computing and economic efficiencies, perhaps, but it will preserve Intel’s revenue and profit growth.”
“Combined open source leadership of Intel and Mirantis will be instrumental in bridging the OpenStack and Kubernetes communities,” said Jonathan Donaldson, vice president and general manager for Intel’s Software Defined Infrastructure Group. “Our joint efforts will marry two complementary and powerful open source communities, making it simpler for enterprises to manage private clouds.”
According to DatacenterKnowledge, “The CPU maker is expected to grant Mirantis early access to its rack scale architecture projects, along with Intel’s next-generation monitoring libraries and tools, which involve new on-chip technologies being built into Xeon processors.” The publication also notes that, “As Renski understands things, some Intel engineers who work on OpenStack, along with others who contribute to Kubernetes, will be delegated responsibilities for driving the merged architecture going forward.”
And Google?  Gina Longoria, analyst at Moor Insights & Strategy, told TechWeekEurope that “OpenStack is becoming the defacto standard for the deployment of open source private clouds and Google’s partnership with this community is a way to increase their relevance in the private cloud.”
The business side isn’t the only reason, however, as Kubernetes product manager Martin Buhr pointed out after detailing the advantages OpenStack users will get from using Kubernetes. “Conversely, incorporating Kubernetes into OpenStack will give Kubernetes users access to a robust framework for deploying and managing applications built on virtual machines. As users move to the cloud-native model, they will be faced with the challenge of managing hybrid application architectures that contain some mix of virtual machines and Linux containers. The combination of Kubernetes and OpenStack means that they can do so on the same platform using a common set of tools.”
Renski also talked to Techrepublic about additional motives Google may have for getting involved with a huge open source project such as OpenStack. “Google is going after [the private cloud] market by taking technologies they’ve innovated on, like Kubernetes, and giving them to cloud developers and operators who will say, ‘this is the coolest thing ever!’ They want to make sure there’s mindshare around their stuff—dominating the private cloud. They did this with Android. They didn’t want everyone to run Apple iOS and have Apple be the gateway to all mobile. By open sourcing Android, Google moved to front and center in mobile.
“With Kubernetes, they’re doing the same thing with containers and cloud infrastructure. OpenStack is primarily on-premises computing. It’s a dominant open source fabric for on-premises infrastructure. That’s the whole point to why Google is supporting this. The next frontier to winning the public cloud wars between Google, Microsoft, and Amazon is capturing the on-premises cloud mindshare.”
Once Mirantis OpenStack 10 is released in the first quarter of 2017, the company will no longer be doing 6-month releases, as it does now. Instead, with OpenStack containerized, the company will implement a CI/CD pipeline that enables them to send out updated containers incrementally. These will be sent to clients’ staging sites, where they can then be pushed to production.
“We’re enabling the customers to now have a single platform that is based on APIs for both containers and VMs, which is this Nirvana state our customers have been asking for,” Renski told SDXCentral.
Mirantis has created a Docker and Kubernetes boot camp to help educate developers and IT operations team on best practices around running containers at scale. The initial class will be given to select students for free at the OpenStack Days Silicon Valley conference next week in Mountain View, California. That class will include basic Linux container concepts; installing, integrating and running Docker; and orchestrating containers using Kubernetes.  OpenStack Days Silicon Valley, which will be held August 9-10, 2016,  will include discussions of these topics and the container ecosystem as it relates to OpenStack. Tickets are still available.
Resources

Google Fosters Another OpenStack Kubernetes Mashup
Google’s Made Its Biggest Move Yet For OpenStack Users
How Google and OpenStack Vendor Mirantis Are Trying to Bridge Clouds – Fortune
With Kubernetes, Mirantis Containerizes OpenStack to Ease Operational Challenges – The New Stack
Kubernetes to Fuel OpenStack orchestration – Enterprise Times
Kubernetes: Why OpenStack’s embrace of Kubernetes is great for both communities
Mirantis Pegs OpenStack’s Future to Kubernetes
Mirantis to Fuse Kubernetes, CI/CD with Commercial OpenStack | Data Center Knowledge
Mirantis: ‘We’ll Probably Piss Off’ OpenStack | Light Reading
Capitulation? Mirantis refactors OpenStack on top of Kubernetes | Computerworld
OpenStack embraces Kubernetes to become a whole lot more like Google – TechRepublic
OpenStack will soon be able to run in containers on top of Kubernetes | TechCrunch
Q&A: Mirantis Fuels Collaboration with Intel and Google via OpenStack and Kubernetes : @VMblog
Unlocked: OpenStack + ScaleIO – DZone Cloud
UPDATE – Mirantis Collaborates With Intel and Google to Enable OpenStack on Kubernetes
Kubernetes and OpenStack to collide in Silicon Valley | CIO

The post Mirantis joins Google, Intel to flip OpenStack container landscape appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis