Mirantis Launches First Vendor-Agnostic Kubernetes and Docker Certification

Company also adds self-paced training course to Kubernetes and Docker training offerings

SUNNYVALE, CA – Dec. 1, 2016 – Mirantis today launched the first vendor-agnostic Kubernetes and Docker certification, giving enterprises a way to identify container skills in a competitive cloud market. Professionals preparing for the certification are encouraged to take the Kubernetes and Docker bootcamp. The company also announced a new online, self-paced KD100 training for self-learners looking for economy pricing and additional flexibility.

Cloud skills have progressed from niche to mainstream, becoming the world’s most in-demand skill set. LinkedIn named cloud computing as the hottest skill in demand in France, India, and the United States in 2015. Within cloud computing, Kubernetes and containers have grown in popularity. The OpenStack User Survey shows Kubernetes taking the lead as the top Platform-as-a-Service (PaaS) tool, while 451 Research has called containers the “future of virtualization,” predicting strong container growth across on-premises, hosted and public clouds.

“As interest in Kubernetes and containers gains momentum across the industry, Mirantis felt it vital to add a true vendor-agnostic certification for Kubernetes and Docker,” said Lee Xie, Sr. Director, Educational Services, Mirantis. “Mirantis offers several formats to train professionals on the automated deployment, scaling, management, and running of container applications. This provides maximum flexibility to prepare for the KDC100 certification exam.”

Pricing and Availability

The proctored Kubernetes and Docker certification (KDC100) is a hands-on, 30-task exam priced at $600. This includes a certificate, a listing on Mirantis’ verification portal for prospective employers, and certification signature logos for those who pass the exam. The first session is scheduled for December 29 in Sunnyvale, California, with an attached virtual session. For those interested in a packaged offering, the KD110 bundle includes the KD100 bootcamp and the KDC100 exam for $2,395. The KD100 bootcamp, available in classroom and live virtual formats, is the official recommended training for the KDC100 certification exam.

Mirantis Online Training

The company also announced a new online, self-paced KD100 training. The online course will include one-year access to the KD100 course content and videos, 72 hours of online hands-on labs, as well as a completion certificate that will be provided upon finishing the class. The new class is coming in January 2017. For a limited time, it will be available for preregistration at the discounted price of $195 (regularly $395).

“This [KD100] class has given me the confidence to say I understand the technology behind Docker and Kubernetes. It also provided me with a lot of use cases that I will be able to use from my perspective as a CIO of a large web hosting company,” said Nickola Naous, chief information officer, TMDHosting, Inc.

For more information on these and other Mirantis training courses, visit: https://training.mirantis.com/.

About Mirantis

Mirantis helps top enterprises build and manage private cloud infrastructure using OpenStack and related open source technologies. The company is the top contributor of open source code to the OpenStack project and follows a build-operate-transfer model to deliver its OpenStack distribution and cloud management services, empowering customers to take advantage of open source innovation with no vendor lock-in. To date, Mirantis has helped over 200 enterprises build and operate some of the largest OpenStack clouds in the world. Its customers include iconic brands like AT&T, Comcast, Shenzhen Stock Exchange, eBay, Wells Fargo Bank and Volkswagen. Learn more at www.mirantis.com.

 

Contact information:

Sarah Bennett

Mirantis PR Manager

sbennett@mirantis.com
The post Mirantis Launches First Vendor-Agnostic Kubernetes and Docker Certification appeared first on Mirantis | The Pure Play OpenStack Company.
Quelle: Mirantis

Automate bare metal server provisioning using Ironic (bifrost) and the ansible deploy driver

On our team, we mostly conduct research on OpenStack, so we use bare metal machines extensively. To make our lives somewhat easier, we’ve developed a set of simple scripts that enables us to back up and restore the current state of the file system on a server. It also enables us to switch between different backups very easily. The set of scripts is called multi-root (https://github.com/vnogin/multi-root).
Unfortunately, we had a problem; in order to use this tool, we had to have our servers configured in a particular way, and we faced different issues with manual provisioning:

It is not possible to set up more than one bare metal server at a time using a Java-based IPMI application
The Java-based IPMI application does not properly handle disconnection from the remote host due to connectivity problems (you have to start installation from the very beginning)
The bare metal server provisioning procedure was really time consuming
For our particular case, in order to use multi-root functionality we needed to create software RAID and make required LVM configurations prior to operating system installation

To solve these problems, we decided to automate bare metal node setup, and since we are part of the OpenStack community, we decided to use bifrost instead of other provisioning tools. Bifrost was a good choice for us as it does not require other OpenStack components.
Lab structure
This is how we manage disk partitions and how we use software RAID on our machines:

As you can see here, we have an example of a bare metal server that includes two physical disks. Those disks are combined using RAID1, then partitioned by the operating system. The LVM partition then gets further partitioned, with each copy of an operating system image assigned to its own partition.
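To make that layout a bit more concrete, here is a minimal sketch of how such a setup could be created by hand: build the RAID1 mirror, turn it into an LVM physical volume and volume group, and carve out one logical volume per operating system copy. The device names and volume sizes are purely illustrative assumptions, not values taken from our lab:
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# pvcreate /dev/md0
# vgcreate vg_root /dev/md0
# lvcreate -L 50G -n root1 vg_root
# lvcreate -L 50G -n root2 vg_root
Each logical volume can then hold its own copy of the operating system, which is what multi-root switches between.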
This is our network diagram:

In this case we have one network to which our bare metal nodes are attached. Also attached to that network is the Ironic server. A DHCP server assigns IP addresses for the various instances as they’re provisioned on the bare metal nodes, or prior to the deployment procedure (so that we can bootstrap the destination server).
Now let’s look at how to make this work.
How to set up bifrost with ironic-ansible-driver
So let’s get started.

First, add the following line to the /root/.bashrc file:
# export LC_ALL="en_US.UTF-8"

Ensure the operating system is up to date:
# apt-get -y update && apt-get -y upgrade

To avoid issues related to MySQL, we decided to install it prior to bifrost and set the MySQL password to “secret”:
# apt-get install git python-setuptools mysql-server -y

Using the following guideline, install and configure bifrost:
# mkdir -p /opt/stack
# cd /opt/stack
# git clone https://git.openstack.org/openstack/bifrost.git
# cd bifrost

We need to configure a few parameters related to localhost prior to the bifrost installation. Below, you can find an example of an /opt/stack/bifrost/playbooks/inventory/group_vars/localhost file:
# cat << EOF > /opt/stack/bifrost/playbooks/inventory/group_vars/localhost
---
ironic_url: "http://localhost:6385/"
network_interface: "p1p1"
ironic_db_password: aSecretPassword473z
mysql_username: root
mysql_password: secret
ssh_public_key_path: "/root/.ssh/id_rsa.pub"
deploy_image_filename: "user_image.qcow2"
create_image_via_dib: false
transform_boot_image: false
create_ipa_image: false
dnsmasq_dns_servers: 8.8.8.8,8.8.4.4
dnsmasq_router: 172.16.166.14
dhcp_pool_start: 172.16.166.20
dhcp_pool_end: 172.16.166.50
dhcp_lease_time: 12h
dhcp_static_mask: 255.255.255.0
EOF
As you can see, we’re telling Ansible where to find Ironic and how to access it, as well as the authentication information for the database so state information can be retrieved and saved. We’re specifying the image to use, and the networking information.
Notice that there’s no default gateway for DHCP in the configuration above, so I’m going to fix it manually after the install.yaml playbook execution.
Install ansible and all of bifrost’s dependencies:
# bash ./scripts/env-setup.sh
# source /opt/stack/bifrost/env-vars
# source /opt/stack/ansible/hacking/env-setup
# cd playbooks

After that, let’s install all packages that we need for bifrost (Ironic, MySQL, rabbitmq, and so on) …
# ansible-playbook -v -i inventory/localhost install.yaml

… and the Ironic staging drivers with already merged patches for enabling Ironic ansible driver functionality:
# cd /opt/stack/
# git clone git://git.openstack.org/openstack/ironic-staging-drivers
# cd ironic-staging-drivers/

Now you’re ready to do the actual installation.
# pip install -e .
# pip install "ansible>=2.1.0"
You should see typical “installation” output.
In the /etc/ironic/ironic.conf configuration file, add the “pxe_ipmitool_ansible” value to the list of enabled drivers. In our case, it’s the only driver we need, so let’s remove the other drivers:
# sed -i '/enabled_drivers =*/cenabled_drivers = pxe_ipmitool_ansible' /etc/ironic/ironic.conf

If you want to enable cleaning and disable disk shredding during the cleaning procedure, add these options to /etc/ironic/ironic.conf:
automated_clean = true
erase_devices_priority = 0

Finally, restart the Ironic conductor service:
# service ironic-conductor restart

To check that everything was installed properly, execute the following command:
# ironic driver-list | grep ansible
| pxe_ipmitool_ansible | test |
You should see the pxe_ipmitool_ansible driver in the output.
Finally, add the default gateway to /etc/dnsmasq.conf (be sure to use the IP address for your own gateway).
# sed -i '/dhcp-option=3,*/cdhcp-option=3,172.16.166.1' /etc/dnsmasq.conf

Now that everything’s set up, let’s look at actually doing the provisioning.
How to use ironic-ansible-driver to provision bare-metal servers with custom configurations
Now let’s look at actually provisioning the servers. Normally, we’d use a custom ansible deployment role that satisfies Ansible’s requirements regarding idempotency to prevent issues that can arise if a role is executed more than once, but because this is essentially a spike solution for us to use in the lab, we’ve relaxed that requirement.  (We’ve also hard-coded a number of values that you certainly wouldn’t in production.)  Still, by walking through the process you can see how it works.

Download the custom ansible deployment role:
curl -Lk https://github.com/vnogin/Ansible-role-for-baremetal-node-provision/archive/master.tar.gz | tar xz -C /opt/stack/ironic-staging-drivers/ironic_staging_drivers/ansible/playbooks/ --strip-components 1

Next, create an inventory file for the bare metal server(s) that need to be provisioned:
# cat << EOF > /opt/stack/bifrost/playbooks/inventory/baremetal.yml
---
server1:
  ipa_kernel_url: "http://172.16.166.14:8080/ansible_ubuntu.vmlinuz"
  ipa_ramdisk_url: "http://172.16.166.14:8080/ansible_ubuntu.initramfs"
  uuid: 00000000-0000-0000-0000-000000000001
  driver_info:
    power:
      ipmi_username: IPMI_USERNAME
      ipmi_address: IPMI_IP_ADDRESS
      ipmi_password: IPMI_PASSWORD
      ansible_deploy_playbook: deploy_custom.yaml
  nics:
    - mac: 00:25:90:a6:13:ea
  driver: pxe_ipmitool_ansible
  ipv4_address: 172.16.166.22
  properties:
    cpu_arch: x86_64
    ram: 16000
    disk_size: 60
    cpus: 8
  name: server1
  instance_info:
    image_source: "http://172.16.166.14:8080/user_image.qcow2"
EOF

# export BIFROST_INVENTORY_SOURCE=/opt/stack/bifrost/playbooks/inventory/baremetal.yml
As you can see above, we have added all of the required information for bare metal node provisioning using IPMI. If needed, you can add any number of bare metal servers here, and all of them will be enrolled and deployed later.
Finally, you’ll need to build a ramdisk for the Ironic ansible deploy driver and create a deploy image using DIB (disk image builder). Start by creating an RSA key that will be used for connectivity from the Ironic ansible driver to the bare metal host being provisioned:
# su - ironic
# ssh-keygen
# exit

Next set environment variables for DIB:
# export ELEMENTS_PATH=/opt/stack/ironic-staging-drivers/imagebuild
# export DIB_DEV_USER_USERNAME=ansible
# export DIB_DEV_USER_AUTHORIZED_KEYS=/home/ironic/.ssh/id_rsa.pub
# export DIB_DEV_USER_PASSWORD=secret
# export DIB_DEV_USER_PWDLESS_SUDO=yes

Install DIB:
# cd /opt/stack/diskimage-builder/
# pip install .

Create the bootstrap and deployment images using DIB, and move them to the web folder:
# disk-image-create -a amd64 -t qcow2 ubuntu baremetal grub2 ironic-ansible -o ansible_ubuntu
# mv ansible_ubuntu.vmlinuz ansible_ubuntu.initramfs /httpboot/
# disk-image-create -a amd64 -t qcow2 ubuntu baremetal grub2 devuser cloud-init-nocloud -o user_image
# mv user_image.qcow2 /httpboot/

Fix file permissions:
# cd /httpboot/
# chown ironic:ironic *

Now we can enroll and deploy our bare metal node using ansible:
# cd /opt/stack/bifrost/playbooks/
# ansible-playbook -vvvv -i inventory/bifrost_inventory.py enroll-dynamic.yaml
Wait for the provisioning state to read “available”, as a bare metal server needs to cycle through a few states and may be cleaned, if needed. During the enrollment procedure, the node can be cleaned by the shred command. This process takes a significant amount of time, so you can disable or fine-tune it in the Ironic configuration (as you saw above where we enabled it).
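One simple way to keep an eye on the node while it moves through those states (assuming the standalone ironic CLI that bifrost sets up is available on the host) is to poll the node list periodically:
# watch -n 30 "ironic node-list"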
Now we can start the actual deployment procedure:
# ansible-playbook -vvvv -i inventory/bifrost_inventory.py deploy-dynamic.yaml
If deployment completes properly, you will see the provisioning state for your server as “active” in the Ironic node-list.
+--------------------------------------+---------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name    | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+---------+---------------+-------------+--------------------+-------------+
| 00000000-0000-0000-0000-000000000001 | server1 | None          | power on    | active             | False       |
+--------------------------------------+---------+---------------+-------------+--------------------+-------------+

Now you can log in to the deployed server via SSH using the login and password we defined above during image creation (ansible/secret), and then, because the infrastructure to use it is now in place, clone the multi-root tool from GitHub.
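For example, using the ipv4_address from the inventory above and the devuser credentials baked into the image (ansible/secret), the last steps could look roughly like this:
$ ssh ansible@172.16.166.22
$ git clone https://github.com/vnogin/multi-root.git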
Conclusion
As you can see, bare metal server provisioning isn’t such a complicated procedure. Using the Ironic standalone server (bifrost) with the Ironic ansible driver, you can easily develop a custom ansible role for your specific deployment case and simultaneously deploy any number of bare metal servers in an automated fashion.
I want to say thank you to Pavlo Shchelokovskyy and Ihor Pukha for your help and support throughout the entire process. I am very grateful to you guys.
The post Automate bare metal server provisioning using Ironic (bifrost) and the ansible deploy driver appeared first on Mirantis | The Pure Play OpenStack Company.
Quelle: Mirantis

Why IBM is tripling its cloud data center capacity in the UK

The need for cloud data centers in Europe continues to grow.
UK cloud adoption rates have increased to 84 percent over the last five years, according to Cloud Industry Forum.
That’s why I am thrilled to announce a major expansion of IBM UK cloud data centers, tripling the capacity in the United Kingdom to meet this growing customer demand. The investment expands the number of IBM cloud data centers in the country from two to six.

It is the largest commitment IBM Cloud has made to one country at one time. Expanding a cloud data center footprint in the UK that was established over five years ago, IBM will have more UK data centers than any other vendor.
Meeting demand in highly regulated industries
Highly regulated industries, such as the public sector and financial services, have nuanced and sensitive infrastructure and security needs.
The UK government’s Digital Transformation Plan to boost productivity has put digital technologies at the heart of the UK’s economic future.
The Government Digital Service (GDS), which is leading the digital transformation of government, runs GOV.UK, helping millions of people find the government services and information they need every day. To make public services simpler, better and safer, the UK’s national infrastructure and digital services require innovative solutions, strong cyber security defenses and high-availability platforms. It is thus essential to embrace the digital intelligence that will deliver outstanding services to UK citizens.
In response, IBM is further building out its capabilities through its partnership with Ark Data Centres, the majority owner in a joint venture with the UK government. Together, we’re delivering public data center services that are already being used at scale by high-profile, public-sector agencies.
It is all about choice
The IBM point of view is to design a cloud that brings greater flexibility, transparency and control over how clients manage data, run businesses and deploy IT operations.
Hybrid is the reality of cloud migration. Clients don’t want to move everything to the public cloud or keep everything in the private cloud. They want to have a choice.
For example, IBM offers the opportunity to keep data local in client locations to those enterprises with fears about data residency and compliance with regulations for migration of sensitive workloads. Data locality is certainly a factor for European businesses, but even more businesses want the ability to move existing workloads to the cloud and provide cognitive tools and services that allow them to fuel new cloud innovations.
From cost savings to innovation platform
Data is the game changer in cloud.
IBM is optimizing its cloud for data and analytics, infused with services including Watson, blockchain and Internet of Things (IoT) so that clients can take advantage of higher-value services in the cloud. This is not just about storage and compute. If clients can’t analyze and gain deeper insights from the data they have in the cloud, they are not using cloud technology to its full potential.
Besides, our customers are focusing more and more on value creation and innovation. That’s why travel innovators are adopting IBM Cloud, fueled by Watson’s cognitive intelligence, to transform interactions with customers and speed the delivery of new services.
Thomson, part of TUI UK & Ireland, one of the UK’s largest travel operators, taps into one of IBM’s UK cloud data centers to run its new tool developed in IBM’s London Bluemix Garage. The app uses Watson APIs such as Conversation, Natural Language Classifier and Elasticsearch on Bluemix to enable customers to receive holiday destination matches based on natural language requests like “I want to visit local markets” or “I want to see exotic animals.”
Other major brands, including Dixons Carphone, National Express, National Grid, Shop Direct, Travis Perkins PLC, Wimbledon, Finnair, EVRY and Lufthansa, are entrusting IBM Cloud to transform their business to create more seamless, personalized experiences for customers and accelerate their digital transformation.
By the end of 2017, IBM will have 16 fully operational cloud data centers across Europe, representing the largest and most comprehensive European cloud data center network. Overall, IBM now has the largest cloud data center footprint globally, with more than 50 data centers.
These new IBM Cloud data centers will help businesses in industries such as retail, banking, government and healthcare meet customer needs.
The post Why IBM is tripling its cloud data center capacity in the UK appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Three Considerations for Planning your Docker Datacenter Deployment

Congratulations! You’ve decided to make the change to your application environment with Docker Datacenter. You’re now on your way to greater agility, portability and control within your environment. But what do you need to get started? In this blog, we will cover things you need to consider (strategy, infrastructure, migration) to ensure a smooth POC and migration to production.
1. Strategy
Strategy involves doing a little work up-front to get everyone on the same page. This stage is critical to align expectations and set clear success criteria for exiting the project. The key focus areas are determining your objective, planning how to achieve it, and knowing who should be involved.
Set the objective – This is a critical step as it helps to set clear expectations, define a use case and outline the success criteria for exiting a POC. A common objective is to enable developer productivity by implementing a Continuous Integration environment with Docker Datacenter.
Plan how to achieve it – With a clear use case and outcome identified, the next step is to look at what is required to complete this project. For a CI pipeline, Docker is able to standardize the development environment, provide isolation of the applications and their dependencies and eliminate any “works on my machine” issues to facilitate the CI automation. When outlining the plan, make sure to select the pilot application. The work involved will vary depending on whether it is a legacy application refactoring or new application development.
Integration between source control and CI allows Docker image builds to be automatically triggered from a standard Git workflow. This drives the automated building of Docker images. After Docker images are built, they are shipped to the secure Docker registry (Docker Trusted Registry) for storage, and role-based access controls enable secure collaboration. Images can then be pulled and deployed across a secure cluster as running applications via the management layer of Docker Datacenter (Universal Control Plane). A rough sketch of this flow appears after this list.
Know who should be involved – The solution will involve multiple teams and it is important to include the correct people early to avoid any potential barriers later on. These can include the following teams, depending on the initial project: development, middleware, security, architects, networking, database, and operations. Understand their requirements and address them early and gain consensus through collaboration.
PRO TIP – Most first successes tend to be web applications with some sort of data tier that can either utilize traditional databases or be containerized with persistent data being stored in volumes.
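To make the CI flow described above a bit more concrete, here is a minimal sketch of the commands such a pipeline might run after a Git push triggers a build; the registry hostname (dtr.example.com) and the repository name and tag are placeholders rather than anything Docker Datacenter prescribes:
$ docker build -t dtr.example.com/engineering/webapp:1.0 .
$ docker login dtr.example.com
$ docker push dtr.example.com/engineering/webapp:1.0
The image can then be pulled and run on a node managed by Universal Control Plane (for example, via the UCP client bundle):
$ docker pull dtr.example.com/engineering/webapp:1.0
$ docker run -d -p 8080:80 dtr.example.com/engineering/webapp:1.0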
 
2. Infrastructure
Now that you understand the basics of building a strategy for your deployment, it’s time to think about infrastructure.  In order to install Docker Datacenter (DDC) in a highly available (HA) deployment, the minimum base infrastructure is six nodes.  This will allow for the installation of three UCP managers and three DTR replicas on worker nodes in addition to the worker nodes where the workloads will be deployed. An HA set up is not required for an evaluation but we recommend a minimum of 3 replicas and managers for production deployments so your system can handle failures.
PRO TIP – A best practice is to not deploy and run any container workloads on the UCP managers and DTR replicas. These nodes perform critical functions within DDC and are best if they only run the UCP or DTR services.
Nodes are defined as cloud, virtual or physical servers with Commercially Supported (CS) Docker Engine installed as a base configuration.
Each node should consist of a minimum of:

4GB of RAM
16GB storage space
For RHEL/CentOS with devicemapper: separate block device OR additional free space on the root volume group should be available for Docker storage.
Unrestricted network connectivity between nodes
OPTIONAL Internet access to Docker Hub to ease the initial downloads of the UCP/DTR and base content images
Installed with a Docker-supported operating system
Sudo access credentials to each node
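Once a candidate node has the CS Docker Engine installed, a quick sanity check with the standard Docker CLI (shown here purely as an illustration) confirms the engine version and the storage driver in use:
$ docker version
$ docker info | grep -i 'storage driver'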

Other nodes may be required for related CI tooling. For a POC built around DDC in an HA deployment with CI/CD, ten nodes are recommended. For a POC built around DDC in a non-HA deployment with CI/CD, five nodes are recommended.
Below are specific requirements for the individual components of the DDC platform:
Universal Control Plane

Commercially Supported (CS) Docker Engine must be used in conjunction with DDC.
TCP Load balancer should be available for UCP in an HA configuration.
A valid DNS entry should be created for the load balancer VIP.
SSL certificate from a trusted root CA should be created (a self-signed certificate is created for UCP and may be used but additional configuration is required).
DDC License for 30 day trial or annual subscription must be obtained or purchased for the POC.

Docker Trusted Registry

Commercially Supported (CS) Docker Engine must be used in conjunction with DDC.
TCP Load balancer should be available for DTR in an HA configuration.
A valid DNS entry should be created for the load balancer VIP.
Image Storage options include a clustered filesystem for HA or blob storage (AWS S3, Azure, S3 compatible storage, or OpenStack Swift)
SSL certificate from a trusted root CA should be created (a self-signed certificate is created for DTR and may be used but additional configuration is required).
LDAP/AD is available for authentication; managed built-in authentication can also be used but requires additional configuration
DDC License for 30 day trial or annual subscription must be obtained or purchased for the POC.

The POC design phase is the ideal time to assess how Docker Datacenter will integrate into your existing IT infrastructure, including CI/CD, networking/load balancing, volumes for persistent data, configuration management, monitoring, and logging systems. During this phase, understand how the existing tools fit and discover any gaps in your tooling. With the strategy and infrastructure prepared, begin the POC installation and testing. Installation docs can be found here.
 
3. Moving from POC Into Production
Once you have built out your POC environment, how do you know if it’s ready for production use? Here are some suggested methods to handle the migration.

Perform the switchover from the non-Dockerized apps to Docker Datacenter in pre-production environments. If you have Dev, Test, and Prod environments, switch over Dev and/or Test and run through a set burn-in cycle to allow for proper testing of the environment and to look for any unexpected or missing functionality. Once non-production environments are stable, perform the switchover in the production environment.

Start integrating Docker Datacenter alongside your existing application deployments. This method requires that the application can run with multiple instances at the same time. For example, if your application is fronted by a load balancer, add the Dockerized application to the existing load balancer pool and begin sending traffic to the application running in Docker Datacenter. Should issues arise, remove the Dockerized application from the load balancer pool until they can be resolved.

Completely cut over to a Dockerized environment all in one go. As additional applications begin to utilize Docker Datacenter, continue to use a tested pattern that works best for you to provide a standard path to production for your applications.

We hope these tips, learned from firsthand experience with our customers, help you in planning your deployment. By standardizing your application environment and simultaneously adding more flexibility for your application teams, Docker Datacenter gives you a foundation to build, ship and run containerized applications anywhere.


Enjoy your Docker Datacenter POC

Get started with your Docker Datacenter POC
See What’s New in Docker Datacenter
Learn more by visiting the Docker Datacenter webpage
Sign up for a free 30 day trial

The post Three Considerations for Planning your Docker Datacenter Deployment appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

8 steps to help your organization enter the cloud

Say you’re a CIO or CTO who wants to make a fundamental shift in how digital technology can drive your enterprise to innovate and produce transformational business outcomes. Say you know how it can change not just the operations of your business, but its culture as well.
In essence, you’re ready to enter the cloud.
As I talk to clients who are at this stage of their cloud journey, the big question then becomes, “How?”
Certainly cloud architecture, process and functionality are important ingredients for success, but consider stepping back and looking at the big picture. After all, you’re making a fundamental shift in your enterprise. You want to ensure that cloud can support your business mission and one way to ensure that is to develop a cloud implementation strategy.
How do you form that strategy? At IBM, we’re fond of the word “think,” and through our work with the research analysis firm Frost and Sullivan, we’ve come up with some ways to help think through and plan your cloud journey:
1. Educate your IT team.
Make sure your team understands that moving to cloud technology is not outsourcing or a way to cut jobs, but rather an opportunity. Shifting the “grunt work” of infrastructure deployment and maintenance to a cloud provider will free up IT professionals to participate in more strategic work.
2. Make it “cloud first” for any new projects.
This simply means that when your business needs a new application, start by considering cloud-based solutions. With a “cloud first” policy, corporate developers become champions of strategy and heroes to their line of business colleagues.
3. Move test and development to the public cloud.
On-demand access to scalable resources and pay-as-you-go pricing enable developers to test, replicate, tweak, and test again in an environment that replicates the production environment. This simple move will free up hundreds of hours of IT operational resources to work on the cloud or other strategic projects.
4.  Review your IT maintenance schedule.
Check for planned hardware and software upgrades and refreshes. Major upgrades can be disruptive to users, as well as costly and time-consuming to implement. Where possible, you should synchronize planned upgrades with your cloud project. In some cases, you may decide that certain workloads should remain in your on-premises data center for the time being.
5. Organize a cross-functional project planning team.
Identify workloads to migrate. This is your opportunity to gain the trust of line-of-business managers who, in many companies, consider IT a roadblock. The term “fast solutions” will play very well to this audience.
6. Hire an expert provider to spearhead the project.
In setting out to build their cloud strategies, most businesses face two handicaps: a lack of expertise and few resources to spare. An outside expert can assist with tasks from risk assessment, to strategy development, to project planning, to management of the migration project. But remember, your provider should focus on a successful business outcome, not just a “tech flash-cut.”
7. Plan your ongoing cloud support needs.
The time to consider how you will manage your cloud is now, before you start moving strategic workloads. While you may be at the beginning of your cloud journey, you should look ahead to the inevitable time when the majority of workloads will be cloud-delivered. You may want to consider one of the few cloud service providers to offer a managed-service option.
8. Build your migration and integration project plan.
This is the essential on-ramp to your company’s cloud journey. Work with your experts and cross-functional team to identify two or three simple, low-risk workloads to move to the cloud. For most enterprises, the best bets are web-enabled workloads that are neither critical, nor strategic to the running of the business, and that require limited interaction with external data sources.
Those are the essentials. Use them to achieve your “digital revolution.”
To learn more, read “Stepping into the Cloud: A Practical Guide to Creating and Implementing a Successful Cloud Strategy.”
Image via FreeImages.com/Stephen Calsbeek
The post 8 steps to help your organization enter the cloud appeared first on news.
Quelle: Thoughts on Cloud

Creating and accessing a Kubernetes cluster on OpenStack, part 3: Run the application

Finally, you’re ready to actually interact with the Kubernetes API that you installed. The general process goes like this:

Define the security credentials for accessing your applications.
Deploy a containerized app to the cluster.
Expose the app to the outside world so you can access it.

Let’s see how that works.
Define security parameters for your Kubernetes app
The first thing that you need to understand is that while we have a cluster of machines that are tied together with the Kubernetes API, it can support multiple environments, or contexts, each with its own security credentials.
For example, if you were to create an application with a context that relies on a specific certificate authority, I could then create a second one that relies on another certificate authority. In this way, we both control our own destiny, but neither of us gets to see the other’s application.
The process goes like this:

First, we need to create a new certificate authority which will be used to sign the rest of our certificates. Create it with these commands:
$ sudo openssl genrsa -out ca-key.pem 2048
$ sudo openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"

At this point you should have two files: ca-key.pem and ca.pem. You’ll use them to create the cluster administrator keypair. To do that, you’ll create a private key (admin-key.pem), then create a certificate signing request (admin.csr), then sign it to create the public key (admin.pem).
$ sudo openssl genrsa -out admin-key.pem 2048
$ sudo openssl req -new -key admin-key.pem -out admin.csr -subj "/CN=kube-admin"
$ sudo openssl x509 -req -in admin.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out admin.pem -days 365

Now that you have these files, you can use them to configure the Kubernetes client.
Download and configure the Kubernetes client

Start by downloading the kubectl client on your machine. In this case, we’re using Linux; adjust appropriately for your OS.
$ curl -O https://storage.googleapis.com/kubernetes-release/release/v1.4.3/bin/linux/amd64/kubectl

Make kubectl executable:
$ chmod +x kubectl

Move it to your path:
$ sudo mv kubectl /usr/local/bin/kubectl

Now it’s time to set the default cluster. To do that, you’ll want to use the URL that you got from the environment deployment log. Also, make sure you provide the full location of the ca.pem file, as in:
$ kubectl config set-cluster default-cluster --server=[KUBERNETES_API_URL] --certificate-authority=[FULL-PATH-TO]/ca.pem
In my case, this works out to:
$ kubectl config set-cluster default-cluster --server=http://172.18.237.137:8080 --certificate-authority=/home/ubuntu/ca.pem

Next you need to tell kubectl where to find the credentials, as in:
$ kubectl config set-credentials default-admin --certificate-authority=[FULL-PATH-TO]/ca.pem --client-key=[FULL-PATH-TO]/admin-key.pem --client-certificate=[FULL-PATH-TO]/admin.pem
Again, in my case this works out to:
$ kubectl config set-credentials default-admin --certificate-authority=/home/ubuntu/ca.pem --client-key=/home/ubuntu/admin-key.pem --client-certificate=/home/ubuntu/admin.pem

Now you need to set the context so kubectl knows to use those credentials:
$ kubectl config set-context default-system --cluster=default-cluster --user=default-admin
$ kubectl config use-context default-system

Now you should be able to see the cluster:
$ kubectl cluster-info

Kubernetes master is running at http://172.18.237.137:8080
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Terrific!  Now we just need to go ahead and run something on it.
Running an app on Kubernetes
Running an app on Kubernetes is pretty simple and is related to firing up a container. We’ll go into the details of what everything means later, but for now, just follow along.

Start by creating a deployment that runs the nginx web server:
$ kubectl run my-nginx --image=nginx --replicas=2 --port=80

deployment "my-nginx" created
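Before exposing the deployment, you can optionally confirm that both replicas came up:
$ kubectl get deployments
$ kubectl get pods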

By default, containers are only visible to other members of the cluster. To expose your service to the public internet, run:
$ kubectl expose deployment my-nginx --target-port=80 --type=NodePort

service "my-nginx" exposed

OK, so now it’s exposed, but where?  We used the NodePort type, which means that the external IP is just the IP of the node that it’s running on, as you can see if you get a list of services:
$ kubectl get services

NAME         CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   11.1.0.1      <none>        443/TCP   3d
my-nginx     11.1.116.61   <nodes>       80/TCP    18s

So we know that the “nodes” referenced here are kube-2 and kube-3 (remember, kube-1 is the API server), and we can get their IP addresses from the Instances page…

… but that doesn’t tell us what the actual port number is.  To get that, we can describe the actual service itself:
$ kubectl describe services my-nginx

Name:                   my-nginx
Namespace:              default
Labels:                 run=my-nginx
Selector:               run=my-nginx
Type:                   NodePort
IP:                     11.1.116.61
Port:                   <unset> 80/TCP
NodePort:               <unset> 32386/TCP
Endpoints:              10.200.41.2:80,10.200.9.2:80
Session Affinity:       None
No events.

So the service is available on port 32386 of whatever machine you hit.  But if you try to access it, something’s still not right:
$ curl http://172.18.237.138:32386

curl: (7) Failed to connect to 172.18.237.138 port 32386: Connection timed out

The problem here is that by default, this port is closed, blocked by the default security group.  To solve this problem, create a new security group you can apply to the Kubernetes nodes.  Start by choosing Project->Compute->Access & Security->+Create Security Group.
Specify a name for the group and click Create Security Group.
Click Manage Rules for the new group.

By default, there’s no access in; we need to change that.  Click +Add Rule.

In this case, we want a Custom TCP Rule that allows Ingress on port 32386 (or whatever port Kubernetes assigned the NodePort). You can specify access only from certain IP addresses, but we’ll leave that open in this case. Click Add to finish adding the rule.

Now that you have a functioning security group you need to add it to the instances Kubernetes is using as worker nodes – in this case, the kube-2 and kube-3 nodes.  Start by clicking the small triangle on the button at the end of the line for each instance and choosing Edit Security Groups.
You should see the new security group in the left-hand panel; click the plus sign (+) to add it to the instance:

Click Save to save the changes.

Add the security group to all worker nodes in the cluster.
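If you prefer the command line to the dashboard, the same result can be achieved with the OpenStack client (this assumes python-openstackclient is installed and your credentials are loaded; the group name is arbitrary, and the port must match the NodePort Kubernetes assigned):
$ openstack security group create kube-nodeport
$ openstack security group rule create --protocol tcp --dst-port 32386 --ingress kube-nodeport
$ openstack server add security group kube-2 kube-nodeport
$ openstack server add security group kube-3 kube-nodeport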
Now you can try again:
$ curl http://172.18.237.138:32386

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
   body {
       width: 35em;
       margin: 0 auto;
       font-family: Tahoma, Verdana, Arial, sans-serif;
   }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
As you can see, you can now access the Nginx container you deployed on the Kubernetes cluster.

Coming up, we’ll look at some of the more useful things you can do with containers and with Kubernetes. Got something you’d like to see?  Let us know in the comments below.
The post Creating and accessing a Kubernetes cluster on OpenStack, part 3: Run the application appeared first on Mirantis | The Pure Play OpenStack Company.
Quelle: Mirantis

CloudNativeCon and KubeCon: What we learned

Imagine yourself on a surfboard. You’re alone. You’re paddling out farther into the sea and you’re ready to catch a giant wave. Only you look to your left, to your right and behind you, and you suddenly realize you’re not alone at all. There are countless other surfers who share your aim.
That’s how developers are feeling about cloud native application development and Kubernetes as excitement builds for the impending wave.
The excitement was apparent during the recent CloudNativeCon and KubeCon joint event in Seattle. More than 1,000 developers gathered to share ideas around the growing number of projects under the Cloud Native Computing Foundation (via the Linux Foundation) banner. That includes Kubernetes, one of the foundation’s most significant and broadly adopted projects.

Despite the fact that it’s still relatively early days for Kube and cloud native computing, CNCF executive director Dan Kohn said there are plenty of reasons to be excited about cloud native.
In his opening keynote, Kohn highlighted these top advantages that cloud native offers:

Isolation. Containerizing applications ensures that you get the same version in development and production. Operations are simplified.
No lock-in. When you choose a vendor that relies on open technology, you’re not locked in to using that vendor.
Improved scalability. Cloud native provides the ability to scale your application to meet customer demand in real time.
Agility and maintainability. These factors are improved when applications are split into microservices.

It was apparent by the sessions alone that Kubernetes is already seeing enterprise adoption. Numerous big-name companies were presented as use cases.
Chris Aniszczyk, VP of developer programs for The Linux Foundation, shared some of the impressive growth numbers around the CNCF and Kube communities:

Now @cra wrapping up a busy 2 days with some impressive numbers! CloudNativeCon the hard way! @CloudNativeFdn @kelseyhightower pic.twitter.com/ySe5pNokjM
— Jeffrey Borek (@jeffborek) November 10, 2016

And if conference attendance is any indication, the community is poised to grow even more over the next few months. Next year’s CloudNativeCon events in Berlin and Austin are expected to double or triple the Seattle attendance number.
The IBM contribution to Kubernetes
The work IBM is doing with Kubernetes is twofold. First and foremost, IBM is helping the community understand its pain points and contribute its resources, as it does with dozens of open source projects. Second, IBM developers and technical leaders are working with internal product teams to fold in Kubernetes into the larger cloud ecosystem.
“Because Kubernetes is going to be such an important part of our infrastructure going forward, we want to make sure we contribute as much as we get out of it,” IBM Senior Technical Staff Member Doug Davis said at the CloudNativeCon conference. “We’re going to see more people coming to our team, and you’re going to see a bigger IBM presence within the community.”
IBM is also committed to helping the Kubernetes community interact and cooperate with other open source communities. Kubernetes technology provides plug points and extensibility points that allow it to be run on OpenStack, for example.
Brad Topol, a Distinguished Engineer who leads IBM work in OpenStack, explained how the communities are working together:

At CloudNativeCon in Seattle @BradTopol discusses the relationship between OpenStack and CNCF. pic.twitter.com/o2wj8swTBo
— IBM Cloud (@IBMcloud) November 8, 2016

Serverless momentum continues
Serverless remained a hot topic at CloudNativeCon. IBMer Daniel Krook presented a keynote on the topic, including an overview of OpenWhisk, the IBM open source serverless offering that is available on Bluemix:

LIVE on : @DanielKrook talks OpenWhisk at CloudNativeCon. Slides: https://t.co/P51xrjVqFP https://t.co/dRJmHKiXcy
— IBM Cloud (@IBMcloud) November 9, 2016

Krook also joined in to provide a solid definition of “serverless,” something that tends to spark debate whenever the topic is broached:

The buzz around serverless continues at CloudNativeCon. @DanielKrook gives his definition of this emerging technology. pic.twitter.com/UzFhqtBnD0
— IBM Cloud (@IBMcloud) November 9, 2016

An update on the Open Container Initiative
In a lightning talk, Jeff Borek, Worldwide Program Director of Open Cloud Business Development, joined Microsoft Senior Program Manager Rob Dolin for an update on the OCI. The organization started in 2015 as a Linux Foundation project with the goal of creating open, industry standards around container formats and runtimes.
Watch their session here:

LIVE on Periscope: From CloudNativeCon, @JeffBorek & @RobDolin discuss the Open Container Initiative. https://t.co/rKpa4UpRcn
— IBM Cloud (@IBMcloud) November 9, 2016

Learn more: “Why choose a serverless architecture?”
The post CloudNativeCon and KubeCon: What we learned appeared first on news.
Quelle: Thoughts on Cloud

Dynamic advertising gets the cognitive treatment

Brands are spending more on native advertising than ever before — a lot more — to create targeted, minimally invasive online advertising experiences for consumers.
Business Insider Intelligence reports that native advertising, which assumes the look and feel of content that surrounds it, is the fastest-growing digital advertising category. The same report also projects that spending on native advertising will grow to $21 billion in 2018, up from $4.7 billion in 2013.
The real game-changer for brands that want to make a meaningful connection with audiences in digital channels will be the future marriage of artificial intelligence and native advertising in video content. In this future — likely only two to three years away — dynamic and highly personalized advertising takes on an entirely new meaning.
Advertising’s giant leap toward science
IBM Watson’s cognitive capabilities, once incorporated into advertisers’ video platforms, will enable advertisers to personalize marketing messages across channels, even within the video stream. The key is the ability to accumulate data about a specific viewer’s preferences and integrate that data from external sources, such as social media and advertisers’ other marketing tools.
If Watson knows a consumer recently bought a refrigerator, for instance, then it wouldn’t show that consumer advertising for refrigerators. Instead, Watson might serve up an ad for a product to put in the new fridge, such as soda. And because Watson could determine — based on purchase history — loyalty to a certain soda brand, the consumer won’t see any rivals’ ads. Watson will be able to dynamically swap a product they love — such as Coke for Pepsi — into the video the consumer is watching to create a powerful, personalized brand experience.
For brands, the value of such a scenario is clear: they can be seamlessly front-and-center in a consumer’s entertainment experience, facilitating a positive and lasting association between brand and content. Media and entertainment companies will benefit, too, because consumers will feel more personally connected to the video content they create.
360-degree user profiles
The ability to deliver highly-targeted online video advertising is here. Brands can already use Watson analytics tools and intelligence to enable this for any business and campaign, creating direct advertising that will resonate with customers. Watson intelligence can also be integrated with other digital marketing tools, such as email or text, to deliver personalized advertising and marketing messages.
Many brands are already experimenting with Watson’s cognitive capabilities — facial recognition, audio recognition, tone analytics, personality insights and more — to better understand the needs and perceptions of consumers. Chevrolet recently tapped Watson for a “global positivity system” campaign to analyze people’s social media feeds, for example. The North Face is among a growing list of retailers using Watson AI capabilities to make product recommendations. Video providers are now exploring ways to use Watson’s intelligence to deliver more relevant content to viewers.
Through these efforts, brands are starting to develop 360-degree profiles of users that will help them better understand what their customers say, how they feel and how they interact with the company and its products. These comprehensive profiles are essential to making the dynamic and highly personalized advertising of the future a reality in all digital channels, including video.
Learn more about IBM Cloud Video.
The post Dynamic advertising gets the cognitive treatment appeared first on news.
Quelle: Thoughts on Cloud

Conquering impossible goals with real-time analytics

“Past data only said, ‘go faster’ or ‘ride better,’” Kelly Catlin, Olympic Cyclist and Silver Medalist, shared with the audience at the IBM World of Watson event on 24 October. In other words, the feedback generated from all her analytics data sources — the speed, cadence, and power meters on her bicycle — was generally useless to this former mountain bike racer who wanted to improve her track cycling performance by 4.5 percent to capture a medal at the 2016 Rio Olympic Games.

USA Cycling Women’s Team Pursuit

While I am by no means an Olympic-level athlete, I knew exactly what Kelly meant. I’ve logged over 300 miles in running races over 8 years, and just in this past year started to see some small improvements in my 5Ks and half-marathons. Suddenly, I started asking, “How much faster could I run a half marathon? Could I translate these improvements to longer distances?” I downloaded all my historical race information into an Excel chart. I looked at my Runkeeper and Strava training runs. Despite all this data, I was stuck. “What should I do to improve?” I asked a coach. He said, “Run more during the week.”
But I wanted to know more. How much capacity do I really have? How much does my asthma limit me? Should I only run in certain climates? During which segments of a race should I speed up or slow down? Just like Kelly, who spent four hours per session reviewing data, I understood how historical data had limited impact on improving current performance.
According to Derek Bouchard-Hall, CEO of USA Cycling, “At the elite level, a 3 percent performance improvement in 12 months is attainable but very difficult. For the USA Women’s Team Pursuit Team, they had only 11 months and needed 4.5 percent improvement which would require them to perform at a new world record time (4.12/15.4 Lap Average). The coach could account for the 3 percent in physiological improvement but needed technology to bring the other 1.5 percent. He focused in two areas: equipment (bike/tire, wind tunnel training) and real-time analytic training insights.”

How exactly could real-time analytics insight change performance?
According to Kelly, “Now, we can make executable changes.” She and her teammates now know when to make a transition of who is leading the group, how best to make that transition, and which times of the race to pick up cadence.
The result: the USA Women’s Team Pursuit finished the race in 4:12:454 to secure the silver medal behind Great Britain, which finished in 4:10:236.
The introduction of data sets and technology did not alone lead to Team USA’s incredible improvement. Instead, it was the combination of well-defined goals, strategic implementation of technology, and actionable, timely recommendations that led to their strong performance and results.
As you consider how to improve an area of your business, keep in mind these three things from the USA Cycling project with IBM Bluemix:

Set well-defined goals. Or, as business expert Stephen Covey would say, “always begin with the end in mind.” USA Cycling clearly articulated they needed to increase performance by 4.5 percent, and that would take more than a coach.
Choice and implementation of technology matters. Choose the tools that will not only deliver analytics data and insights, but do so in a timely and relevant manner for your business. Explore how to get started with IBM Bluemix.
Data alone doesn’t equal guidance. You must review the data, and with your colleagues, your coach, your running buddy, set clear, executable actions.

The IBM Bluemix Garage Method can help you define your ideas and bring a culture of innovation agility to your cloud development.
A version of this post originally appeared on the IBM Bluemix blog.
The post Conquering impossible goals with real-time analytics appeared first on news.
Quelle: Thoughts on Cloud

Service Catalogs and the User Self-Service Portal

One of the most interesting features of CloudForms is the ability to define services that can include one or more virtual machines (VMs) or instances and can be deployed across hybrid environments. Services can be made available to users through a self-service portal that allows users to order predefined IT services without IT operations getting involved, thereby delivering on one of the major promises of cloud computing.
The intention of this post is to provide you with step-by-step instructions to get you started with a simple service catalog. After you have gone through the basic concepts, you should have the skills to dive deeper into more complex setups.

Getting started with Service Catalogs
Let’s set the stage for this post: You added your Amazon Web Services (AWS) account to CloudForms as a cloud provider. Your AWS account includes a Red Hat Enterprise Linux (RHEL) image ready to use. Now you want to give your users the ability to deploy RHEL instances on AWS but you want to limit or predefine most of the options they could choose when deploying these instances.
Service Basics
Four items are required to make a service available to users from the CloudForms self-service portal:

A Provisioning Dialog which presents the basic configuration options for a VM or instance.
A Service Dialog where you allow users to configure VM or instance options.
A Service Catalog which is used to group Services Dialogs together.
A Service Catalog Item (i.e., the actual Service) which joins a Service Dialog with a Provisioning Dialog.

Provisioning Dialogs
To work with services in CloudForms it is important to understand the concept of Provisioning Dialogs. When you begin the process of provisioning a VM or instance via CloudForms, you are presented with a Provisioning Dialog where you set certain options for the VM or instance. The options presented are dependent on the provider you are using. For instance, a cloud provider might have “flavors” of instances, whereas an infrastructure provider might allow you to set the Memory size or number of CPUs on a VM.
Every provider in CloudForms comes with a sample provisioning dialog covering the options specific to that provider. To have a look at some sample Provisioning Dialogs, go to Automate > Customization > Provisioning Dialogs > VM Provision and select “Sample Provisioning Dialogs”. This is a textual representation of the dialog you will get when you provision a VM or instance.
For this post, we need to make sure instance provisioning to AWS is working, so go to Compute > Clouds > Instances and create a new AWS instance by choosing “Provision Instances” from the “Lifecycle” drop-down. Select the image you are going to use, click “Continue” and walk through the Provisioning Dialog.

Service Dialogs
A Service Dialog determines which options the users get to change. The choice of options that are presented to the user is up to you. You could just give them the option to set the service name, or you could have them change all of the Provisioning Dialog options. You have to create a Service Dialog to define the options users are allowed to see and set. To help with creating a Service Dialog, CloudForms includes a simple form designer.
Anatomy of a Service Dialog
A Service Dialog contains three components:

One or more “Tabs”
Inside the “Tabs”, one or more “Boxes”
Inside the “Boxes”, one or more “Elements”

The “Elements” contain input controls, such as check boxes, drop-down lists or text fields, used to fill in the options of the Provisioning Dialog. Here is the most important part: the names of the Elements have to correspond to the options used in the Provisioning Dialog!
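
To make the hierarchy concrete, here is a rough, illustrative sketch of a Service Dialog definition, written as a Python dictionary. The key names mirror what a dialog export typically looks like (dialog_tabs, dialog_groups, dialog_fields), but they are shown for orientation only and may differ between CloudForms/ManageIQ versions. We will build exactly this dialog through the UI later in this post:

# Illustrative sketch only -- not an exact CloudForms export.
# "Boxes" appear as dialog_groups and "Elements" as dialog_fields.
service_dialog = {
    "label": "aws_single_rhel7_instance",
    "dialog_tabs": [
        {
            "label": "instance_settings",
            "dialog_groups": [
                {
                    "label": "Instance and Service Name",
                    "dialog_fields": [
                        # The field names must match option names from the Provisioning Dialog.
                        {"name": "service_name", "label": "Service Name",
                         "type": "DialogFieldTextBox", "data_type": "string"},
                        {"name": "vm_name", "label": "Instance Name",
                         "type": "DialogFieldTextBox", "data_type": "string"},
                    ],
                },
            ],
        },
    ],
}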

What are the Element Names?
Very good question. As mentioned, the options and values we provide in the Service Dialog must match those used in the Provisioning Dialog. Some names are rather generic, like “vm_name” or “service_name”, while others might be specific to the provider in question.
So how do you find the options and values you can pass in a Service Dialog? The easiest way is to look at the Provisioning Dialog. In this case, for our Amazon EC2 instance:

As an administrator, go to Automate > Customization
Open the “Provisioning Dialogs” accordion and locate the “VM Provision” folder
Find the appropriate dialog, “Sample Amazon Instance Provisioning Dialog”
Now you can use your browser’s search capabilities to find options and their potential values. For practice, just look for e.g. “vm_name”. (A scripted way to do the same lookup is sketched after this list.)
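
If you prefer to script the lookup instead of searching in the browser, a minimal sketch along the following lines can dump the provisioning dialog content via the CloudForms REST API and check for an option name. The appliance URL and credentials are placeholders, and the /api/provision_dialogs path is an assumption that may need adjusting for your CloudForms/ManageIQ version:

import json
import requests

APPLIANCE = "https://cloudforms.example.com"   # placeholder appliance URL
AUTH = ("admin", "smartvm")                    # placeholder credentials

# Fetch the provisioning dialogs with their content expanded.
# NOTE: the /api/provision_dialogs path is an assumption; adjust for your version.
resp = requests.get(
    f"{APPLIANCE}/api/provision_dialogs",
    params={"expand": "resources", "attributes": "content"},
    auth=AUTH,
    verify=False,   # appliances commonly use self-signed certificates
)
resp.raise_for_status()

for dialog in resp.json().get("resources", []):
    if "amazon" in dialog.get("description", "").lower():
        content = json.dumps(dialog.get("content", {}))
        # Check whether the option we want to expose in a Service Dialog exists.
        print(dialog["description"], "contains vm_name:", "vm_name" in content)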

Creating a Service Dialog
Enough theory, let’s dive in and create our first simple Service Dialog. The Service Dialog should let users choose a service name and an instance name for an AWS instance.

As an administrator, go to Automate > Customization
Open the “Service Dialogs” accordion. You will find two example Service Dialogs.
Add a new Service Dialog: Configuration > Add a new Dialog
Type “aws_single_rhel7_instance” into the Label field; this will be the name of the Service Dialog in CloudForms. Add a description if you want; this is not mandatory but good practice.
For Buttons, check “Submit” and “Cancel”.

From this starting point, you can now add content to the Dialog:

From the drop-down with the “+” sign choose “Add a new Tab to this Dialog”.

For Label use “instance_settings”; for Description use “Instance Settings”.
With the “instance_settings” Tab selected, choose “Add a new Box to this Tab” from the “+” drop-down.
Give the new Box a Label and Description of “Instance and Service Name”.
From the “+” drop-down choose “Add a new Element to this Box”.
Fill in Label and Description with “Service Name” and Name with “service_name”.
For the Type, choose “Text Box” with Value Type “String”.

Following the same procedure, add a second Element to the Box. The Name field should be “vm_name” and the Label and Description fields should be “Instance Name”. Similarly, Type should be “Text Box” with Value Type “String”.

That’s it! Now you can finally hit the “Add” button in the lower right corner.
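
If you want to double-check the result outside the UI, a small sketch like this lists the Service Dialogs through the REST API and confirms the new one is present. The appliance URL and credentials are placeholders; verify that your version exposes the /api/service_dialogs collection:

import requests

APPLIANCE = "https://cloudforms.example.com"   # placeholder appliance URL
AUTH = ("admin", "smartvm")                    # placeholder credentials

# List all Service Dialogs and check that the dialog created above shows up.
resp = requests.get(
    f"{APPLIANCE}/api/service_dialogs",
    params={"expand": "resources", "attributes": "label,description"},
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()

labels = [d.get("label") for d in resp.json().get("resources", [])]
print("aws_single_rhel7_instance" in labels)   # expected: True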
Create a Catalog
Now that you have created your Service Dialog, we can add it to a Service Catalog by creating its associated Catalog Item.
First, we will create a Catalog:

Go to Services > Catalogs and expand the “Catalogs” accordion.
Select the “All Catalogs” folder and click Configuration > Add a new Catalog.
For Name and Description, fill in “Amazon EC2”.
We will assign Catalog Items to this Catalog later. (A scripted alternative for creating the Catalog is sketched below.)
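
For completeness, the same Catalog can also be created through the REST API. The sketch below assumes the service_catalogs collection supports the create action, as it does in recent ManageIQ/CloudForms releases; the appliance URL and credentials are placeholders:

import requests

APPLIANCE = "https://cloudforms.example.com"   # placeholder appliance URL
AUTH = ("admin", "smartvm")                    # placeholder credentials

payload = {
    "action": "create",
    "resource": {
        "name": "Amazon EC2",
        "description": "Amazon EC2",
    },
}

# Create the Service Catalog; the response echoes back the new resource.
resp = requests.post(
    f"{APPLIANCE}/api/service_catalogs",
    json=payload,
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()
print(resp.json())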

Create a Catalog Item
Now we have the Catalog without any content, the Service Dialog, and the Provisioning Dialog. To allow users to order the service from the self-service catalog, we have to create a Catalog Item. Let’s create a Catalog Item to order a RHEL instance using our Service Dialog:

Go to Services > Catalogs and expand the “Catalog Items” accordion.
Select the “Amazon EC2” catalog and click Configuration > Add a new Catalog Item.
From the “Catalog Item Type” drop-down select “Amazon”.
For Name and Description use “RHEL Instance” and check the box labelled “Display in Catalog”.
From the “Catalog” drop-down choose “Amazon EC2”.
From the “Dialog” drop-down choose “aws_single_rhel7_instance”. This is the Service Dialog you created earlier.
The three fields below point to methods used when provisioning/reconfiguring or retiring the service. For now, just configure these to use built-in methods as follows:

Click into the “Provisioning Entry Point State Machine” field; you will be taken to the Datastore Explorer.
Under the “ManageIQ” subtree, navigate to the following method and hit “Apply”: “/Service/Provisioning/StateMachines/ServiceProvision_Template/CatalogItemInitialization”
Click into the “Retirement Entry Point State Machine” field, navigate to this method, and hit “Apply”: “/Service/Retirement/StateMachines/ServiceRetirement/Default”

Switch to the “Details” tab. In real life you would put a detailed description of your Service here. You could use HTML for better formatting, but for the purpose of this post “Single Amazon EC2 instance” will do.
Switch to the “Request Info” tab. Here you preset all of the options from the Provisioning Dialog. (Remember that the user is only allowed to set the Service Name and Instance Name options via the Service Dialog):

On the “Catalog” tab, set the image Name to your AWS image name (“rhel7” in this case) and the Instance Name to “changeme”.

On the “Properties” tab, set the Instance Type to “T2 Micro”. If you ever plan to access the instance, you should of course select a “Guest Access Key Pair”, too.

On the “Customize” tab, set the Root Password, and under Customize Template choose the “Basic root pass template” as the cloud-init script.

Click Add at the bottom right.

As you can see, your new Catalog Item is listed with a generic icon. Let’s change this by uploading an icon in the “Custom Image” section. You can pick any image you like.
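
As a quick sanity check, Catalog Items are exposed through the service_templates collection of the REST API, so a sketch like the following can confirm that the new item exists and is flagged for display in the catalog. The URL, credentials and the display attribute name are assumptions to verify against your version:

import requests

APPLIANCE = "https://cloudforms.example.com"   # placeholder appliance URL
AUTH = ("admin", "smartvm")                    # placeholder credentials

resp = requests.get(
    f"{APPLIANCE}/api/service_templates",
    params={"expand": "resources", "attributes": "name,display"},
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()

for item in resp.json().get("resources", []):
    # "display" should correspond to the "Display in Catalog" checkbox.
    print(item.get("name"), "- display in catalog:", item.get("display"))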
Recap, or “What have we done so far?”
We looked at the Provisioning Dialog, which defines the options that can be set on a VM or instance. We created a Service Dialog, which allows us to expose selected options to the user; in our example, only the instance name and service name are configurable. Then we created a Service Catalog and finally a Catalog Item. The Catalog Item joins the Service Dialog with all of the preset options in the Provisioning Dialog. Now users should be able to order a RHEL instance from the self-service catalog.
Let’s Order a RHEL Instance
To order your new service:

Access the self-service portal at https://<your_cf_appliance>/self_service. You will be greeted by the self-service dashboard.
Select “Service Catalog” on the menu bar.

You should now see your service. Select it and you will be taken to the form you have defined in your Service Dialog:

Fill in the “Service Name” and “Instance Name” fields. Recall that these are the only two options that you made available to users in your Service Dialog.
Click “Add to Shopping Cart” and access the “Shopping Cart” by clicking the icon at the top right (there should now be a number on it).
Click “Order”. You have created a new provisioning request. You can follow the request by selecting “My Requests” from the menu bar and selecting the specific request to see its progression and details.

Once the “Request State” is shown as “finished”, your AWS instance is provisioned.
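
The same order can also be placed programmatically. Below is a minimal sketch, assuming the order action on service catalogs that the ManageIQ REST API documents; the appliance URL, credentials and ids are placeholders you would look up first:

import requests

APPLIANCE = "https://cloudforms.example.com"   # placeholder appliance URL
AUTH = ("admin", "smartvm")                    # placeholder credentials
CATALOG_ID = "1"                                # placeholder: id of the "Amazon EC2" catalog
TEMPLATE_ID = "2"                               # placeholder: id of the "RHEL Instance" item

order = {
    "action": "order",
    "resource": {
        "href": f"{APPLIANCE}/api/service_templates/{TEMPLATE_ID}",
        # The keys below must match the element names defined in the Service Dialog.
        "service_name": "my-first-service",
        "vm_name": "rhel7demo",
    },
}

resp = requests.post(
    f"{APPLIANCE}/api/service_catalogs/{CATALOG_ID}/service_templates",
    json=order,
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()

# The response contains the new service request (possibly wrapped in "results").
data = resp.json()
request = data["results"][0] if "results" in data else data
print(request.get("id"), request.get("request_state"))

Either way, the outcome is the same provisioning request that you can watch under “My Requests”.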
Conclusion
As you can see, creating a basic service catalog and using the self-service portal in CloudForms is not rocket science. Of course, there is a lot more to learn, but there are also a lot of good resources to help you on your journey: articles on this blog, the official documentation, and of course the excellent “Mastering CloudForms Automation” book by Peter McGowan, which I cannot recommend highly enough.
Source: CloudForms