In our previous post we introduced Red Hat OpenStack Platform Director. We showed how at the heart of Director is TripleO, short for “OpenStack on OpenStack”. TripleO is an OpenStack project that aims to utilise OpenStack itself as the foundation for deploying OpenStack. In other words, TripleO advocates the use of native OpenStack components, and their respective APIs, to configure, deploy, and manage OpenStack environments.
The major benefit of utilising these existing APIs with Director is that they're well documented, they go through extensive integration testing upstream, and they are among the most mature components in OpenStack. For those already familiar with how OpenStack works, it's much easier to understand how TripleO (and therefore Director) works. Feature enhancements, security patches, and bug fixes are automatically inherited by Director, without us having to play catch-up with the community.
With TripleO, we refer to two clouds. The first is the undercloud: a command and control cloud hosting a smaller OpenStack environment whose sole purpose is to bootstrap a larger production cloud. That larger cloud is the overcloud, where tenants and their respective workloads reside. Director is sometimes treated as synonymous with the undercloud; Director bootstraps the undercloud OpenStack deployment and provides the necessary tooling to deploy an overcloud.
Ironic+Nova+Glance: baremetal management of overcloud nodes
For proper baremetal management during a deployment, Nova and Ironic need to work in close coordination. Nova is responsible for the orchestration, deployment, and lifecycle management of compute resources, for example virtual machines. Nova relies on a set of plugins and drivers to provision the compute resources requested by a tenant, such as the utilisation of the KVM hypervisor.
Ironic started life as an alternative Nova “baremetal driver”. Ironic is now an OpenStack project in its own right and complements Nova with its own respective API and command-line utilities. Once the overcloud is deployed, Ironic can also be offered to customers who want to provide baremetal nodes to their tenants, using dedicated hardware outside of Nova's compute pools. In Director's context, however, Ironic is a core component of the undercloud, controlling and deploying the physical nodes required for the overcloud deployment.
But first, Director has to register the nodes with Ironic. One has to catalog each node's out-of-band management interface (typically IPMI): its IP address, username, and password, although there are also vendor-specific drivers, for example HP iLO, Cisco UCS, and Dell DRAC. Ironic then manages the power state of the baremetal nodes used for the overcloud deployment, as well as the deployment of the operating system (via a PXE-bootable installer image).
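To make this concrete, here is a minimal sketch of a node inventory (commonly named instackenv.json) followed by the tripleoclient command that enrolls it into the undercloud's Ironic. All names, addresses, and credentials below are placeholders, and the driver name (pxe_ipmitool here) varies between Director releases:

    $ cat instackenv.json   # placeholder IPMI details, for illustration only
    {
      "nodes": [
        {
          "name": "control-0",
          "pm_type": "pxe_ipmitool",
          "pm_addr": "10.0.0.101",
          "pm_user": "admin",
          "pm_password": "changeme",
          "mac": ["52:54:00:aa:bb:01"]
        }
      ]
    }
    $ openstack overcloud node import instackenv.json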
The disk image used during hardware bootstrap is taken from the undercloud's Glance image service. Red Hat provides the required images to be deployed on the overcloud nodes. These disk images typically contain Red Hat Enterprise Linux and all OpenStack components, which minimises any post-deployment software installation. They can, of course, be customised further prior to upload into Glance; for example, customers often want to integrate additional software or configuration as per their requirements.
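As a sketch, assuming the images have been downloaded to a directory on the undercloud (the path below is illustrative), a single tripleoclient command uploads them all into Glance:

    $ openstack overcloud image upload --image-path /home/stack/images/
    $ openstack image list   # verify the images are now registered in Glance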
Neutron: network management of the overcloud
As you may already know, Neutron provides network access to tenants via a self-service interface for defining networks, ports, and IP addresses that can be attached to instances. It also provides supporting services for booting instances, such as DHCP, DNS, and routing. Within Director, we firstly use Neutron as an API for defining all overcloud networks, any required VLAN isolation, and the associated IP addresses for the nodes (IP address management).
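Purely as an illustration of the Neutron API that Director drives, the undercloud's provisioning network can be inspected with the standard CLI. The names below (ctlplane and ctlplane-subnet) are the usual Director defaults, but may differ in your environment:

    $ openstack network list
    $ openstack subnet show ctlplane-subnet   # CIDR, allocation pool, and DHCP settings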
Secondly, we use Neutron in the undercloud as a mechanism for managing the network provisioning of the overcloud nodes during deployment. Neutron detects booting nodes and instructs them to PXE boot via a special DHCP offer, and then Ironic takes over responsibility for image deployment. Once the image is written, the Ironic deployment image reboots the machine to boot from its hard drive, the first time the node boots by itself. The node then executes os-net-config (from the TripleO project) to statically configure the operating system with its IP address. Although that IP is managed by the undercloud's Neutron DHCP server, it is actually set as a static IP in the overcloud node's interface configuration. This allows for configuration of VLAN tagging, LACP or failover bonding, MTU settings, and other advanced parameters from the Director network configuration. Visit this tutorial for more information on os-net-config.
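For a flavour of what os-net-config consumes, here is a hedged sketch of a configuration (in the style of /etc/os-net-config/config.yaml) combining an LACP bond with a tagged VLAN; the interface names, VLAN ID, and address are placeholders. In a Director deployment this data is generated from the network configuration templates rather than written by hand:

    network_config:
      - type: linux_bond
        name: bond1
        bonding_options: "mode=802.3ad"   # LACP, as mentioned above
        members:
          - type: interface
            name: nic2
          - type: interface
            name: nic3
      - type: vlan
        device: bond1
        vlan_id: 201                      # example VLAN for an isolated network
        addresses:
          - ip_netmask: 172.16.2.20/24    # the static IP written into the node's config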
Heat: orchestrating the overcloud deployment steps
The most important component in Director is Heat, OpenStack's generic orchestration engine. Users define stack templates as plain YAML text documents, listing the required resources (for example, instances, networks, storage volumes) along with a set of configuration parameters. Heat deploys the resources based on a given dependency chain, determining which resources must be built before others. Heat can then monitor those resources for availability and scale them out where necessary. These templates make application stacks portable and help achieve repeatability and predictability.
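To illustrate the format, here is a minimal, self-contained HOT template; the image name is a placeholder, and Heat infers from get_resource that the network must exist before the server is built:

    heat_template_version: 2015-04-30

    parameters:
      flavor:
        type: string
        default: m1.small

    resources:
      app_net:
        type: OS::Neutron::Net

      app_subnet:
        type: OS::Neutron::Subnet
        properties:
          network: { get_resource: app_net }
          cidr: 192.0.2.0/24

      app_server:
        type: OS::Nova::Server
        depends_on: app_subnet           # ensure the subnet exists before the port is allocated
        properties:
          image: rhel-guest              # placeholder image name
          flavor: { get_param: flavor }
          networks:
            - network: { get_resource: app_net }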
Heat is used extensively within Director as the core orchestration engine for overcloud deployment. Heat takes care of the provisioning and management of any required resources, including the physical servers and networks, and the deployment and configuration of the dozens of OpenStack software components. Director's Heat stack templates describe the overcloud environment in intimate detail, including quantities and any necessary configuration parameters. This also makes the templates versionable and programmatically understood – a truly Software Defined Infrastructure.
Deployment templates: customizable reference architectures
Whilst not an OpenStack service, one of the most important components to look at is the actual set of templates used for deployment with Heat. The templates come from the upstream TripleO community, in a sub-project known as tripleo-heat-templates (read an introduction here). The tripleo-heat-templates repository comprises a directory of Heat templates, plus the Puppet manifests and scripts required to perform certain advanced tasks.
Red Hat relies on these templates with Director and works heavily to enhance them with additional features that customers request. This includes working with certified partners to confirm that their value-add technology can be automatically enabled via Director, thus minimising any post-deployment effort (for more information, visit our Partner's instructions to integrate with Director). The default templates will stand up a vanilla Red Hat OpenStack Platform environment, with all default parameters and backends (KVM, OVS, LVM or Ceph if enabled, etc.).
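As a sketch, such a vanilla deployment drives the packaged templates directly; on the undercloud they live under /usr/share/openstack-tripleo-heat-templates, which the --templates flag points to by default (the node counts here are illustrative):

    $ openstack overcloud deploy --templates \
        --control-scale 3 --compute-scale 2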
Director offers customers the ability to easily set their own configuration by simply overriding the defaults in their own templates. It also provides hooks in the default templates to call additional code that organisations may want to run; this could include installing and configuring additional software, making non-standard configuration changes that the templates aren't aware of, or enabling a plugin not supported by Director, as sketched below.
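A hedged sketch of such an override: a small environment file that sets two well-known parameters and registers an extra post-deployment hook (the parameter and hook names come from tripleo-heat-templates, but the values and script path are hypothetical), passed to the deploy command with -e. Because overrides live in separate environment files, the packaged templates themselves stay untouched:

    # my-overrides.yaml -- illustrative values only
    parameter_defaults:
      NtpServer: clock.example.com
      NeutronNetworkVLANRanges: datacentre:100:200

    resource_registry:
      # run a custom template on every node after the main deployment steps
      OS::TripleO::NodeExtraConfigPost: /home/stack/templates/my-post-config.yaml

    $ openstack overcloud deploy --templates -e /home/stack/templates/my-overrides.yaml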
In our next blog post we’ll explain the Reference Architecture that Director provides out of the box, and how to plan for a successful deployment.