American Airlines soars into cloud with IBM

American Airlines, the largest passenger air carrier in North America, announced this week that it has chosen IBM as a cloud provider.
Specifically, the airline intends to move some of its applications to the cloud and make use of IBM infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS) tools. The agreement also means American will have access to the 50 IBM data centers around the world, the Bluemix development platform and analytics capabilities.
Patrick Grubbs, vice president of travel and transportation at IBM Cloud, said the work between IBM and American will include "customer-facing systems as well as back office."
American has been looking to integrate and streamline its systems since merging with US Airways in 2013.
Robert LeBlanc, Senior Vice President of IBM Cloud, said, "This partnership is about delivering the flexibility to American Airlines' business, allowing them to enhance their customer relationships and further gain a competitive advantage."
The new IBM agreement with American Airlines extends a longstanding relationship. In the 1950s, the two companies collaborated to create the first-ever electronic reservation and ticketing system in the air travel industry.
For more, check out ZDNet's full story.
(Image via Wikimedia Commons)
The post American Airlines soars into cloud with IBM appeared first on news.
Quelle: Thoughts on Cloud

Mirantis Launches First Vendor-Agnostic Kubernetes and Docker Certification

Company also adds self-paced training course to Kubernetes and Docker training offerings

SUNNYVALE, CA – Dec. 1, 2016 – Mirantis today launched the first vendor-agnostic Kubernetes and Docker certification, giving enterprises a way to identify container skills in a competitive cloud market. Professionals preparing for the certification are recommended to take the Kubernetes and Docker bootcamp. The company also announced a new online, self-paced KD100 training for self learners looking for economy pricing and additional flexibility.

Cloud computing skills have progressed from niche to mainstream to become the world's most in-demand skill set. LinkedIn named cloud computing the hottest skill in demand in France, India, and the United States in 2015. Within cloud computing, Kubernetes and containers have grown in popularity. The OpenStack User Survey shows Kubernetes taking the lead as the top Platform-as-a-Service (PaaS) tool, while 451 Research has called containers the “future of virtualization,” predicting strong container growth across on-premises, hosted and public clouds.

“As interest in Kubernetes and containers gains momentum across the industry, Mirantis felt it vital to add a true vendor-agnostic certification for Kubernetes and Docker,” said Lee Xie, Sr. Director, Educational Services, Mirantis. “Mirantis offers several formats to train professionals on the automated deployment, scaling, management, and running of container applications. This provides maximum flexibility to prepare for the KDC100 certification exam.”

Pricing and Availability

The proctored Kubernetes and Docker certification (KDC100), is a hands-on, 30-task exam, priced at $600. This includes a certificate, listing on Mirantis’ verification portal for prospective employers, and certification signature logos for those that pass the exam. The first session is scheduled for December 29 in Sunnyvale, California, with an attached virtual session. For those interested in a packaged offering, the KD110 bundle includes the KD100 bootcamp and the KDC100 exam for $2,395. The KD100 bootcamp, available in classroom and live virtual formats, is the official recommended training for the KDC100 certification exam.

Mirantis Online Training

The company also announced a new online, self-paced KD100 training. The online course will include one-year access to the KD100 course content and videos, 72 hours of online hands-on labs, as well as a completion certificate that will be provided upon finishing the class. The new class is coming in January 2017. For a limited time, it will be available for preregistration at the discounted price of $195 (regularly $395).

"This [KD100] class has given me the confidence to say I understand the technology behind Docker and Kubernetes. It also provided me with a lot of use cases that I will be able to use from my perspective as a CIO of a large web hosting company," said Nickola Naous, chief information officer, TMDHosting, Inc.

For more information on these and other Mirantis training courses, visit: https://training.mirantis.com/.

About Mirantis

Mirantis helps top enterprises build and manage private cloud infrastructure using OpenStack and related open source technologies. The company is the top contributor of open source code to the OpenStack project and follows a build-operate-transfer model to deliver its OpenStack distribution and cloud management services, empowering customers to take advantage of open source innovation with no vendor lock-in. To date, Mirantis has helped over 200 enterprises build and operate some of the largest OpenStack clouds in the world. Its customers include iconic brands like AT&T, Comcast, Shenzhen Stock Exchange, eBay, Wells Fargo Bank and Volkswagen. Learn more at www.mirantis.com.

 

Contact information:

Sarah Bennett

Mirantis PR Manager

sbennett@mirantis.com
The post Mirantis Launches First Vendor-Agnostic Kubernetes and Docker Certification appeared first on Mirantis | The Pure Play OpenStack Company.
Quelle: Mirantis

Automate bare metal server provisioning using Ironic (bifrost) and the ansible deploy driver

On our team, we mostly conduct various research in OpenStack, so we use bare metal machines extensively. To make our lives somewhat easier, we've developed a set of simple scripts that enables us to back up and restore the current state of the file system on the server. It also enables us to switch between different backups very easily. The set of scripts is called multi-root (https://github.com/vnogin/multi-root).
Unfortunately, we had a problem; in order to use this tool, we had to have our servers configured in a particular way, and we faced different issues with manual provisioning:

It is not possible to set up more than one bare metal server at a time using a Java-based IPMI application
The Java-based IPMI application does not properly handle disconnection from the remote host due to connectivity problems (you have to start installation from the very beginning)
The bare metal server provisioning procedure was really time consuming
For our particular case, in order to use multi-root functionality we needed to create software RAID and make required LVM configurations prior to operating system installation

To solve these problems, we decided to automate bare metal node setup, and since we are part of the OpenStack community, we decided to use bifrost instead of other provisioning tools. Bifrost was a good choice for us as it does not require other OpenStack components.
Lab structure
This is how we manage disk partitions and how we use software RAID on our machines:

As you can see here, we have the example of a bare metal server, which includes two physical disks.  Those disks are combined using RAID1, then partitioned by the operating system.  The LVM partition then gets further partitioned, with each copy of an operating system image assigned to its own partition.
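For reference, a hand-built version of such a layout might look like the following minimal sketch; the device names, volume group name and sizes here are placeholder assumptions rather than values taken from our scripts:
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb   # mirror the two physical disks
# pvcreate /dev/md0                                                      # put LVM on top of the RAID1 device
# vgcreate vg_root /dev/md0
# lvcreate -L 50G -n root1 vg_root                                       # one logical volume per OS image copy
# lvcreate -L 50G -n root2 vg_root
Each logical volume then holds its own copy of the operating system, which is what allows the multi-root scripts to switch between backups.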
This is our network diagram:

In this case we have one network to which our bare metal nodes are attached. Also attached to that network is the IRONIC server. A DHCP server assigns IP addresses for the various instances as they're provisioned on the bare metal nodes, or prior to the deployment procedure (so that we can bootstrap the destination server).
Now let's look at how to make this work.
How to set up bifrost with ironic-ansible-driver
So let's get started.

First, add the following line to the /root/.bashrc file:
# export LC_ALL="en_US.UTF-8"

Ensure the operating system is up to date:
# apt-get -y update && apt-get -y upgrade

To avoid issues related to MySQL, we decided to install it prior to bifrost and set the MySQL password to "secret":
# apt-get install git python-setuptools mysql-server -y

Using the following guideline, install and configure bifrost:
# mkdir -p /opt/stack
# cd /opt/stack
# git clone https://git.openstack.org/openstack/bifrost.git
# cd bifrost

We need to configure a few parameters related to localhost prior to the bifrost installation. Below, you can find an example of an /opt/stack/bifrost/playbooks/inventory/group_vars/localhost file:
echo "---
ironic_url: "http://localhost:6385/"
network_interface: "p1p1"
ironic_db_password: aSecretPassword473z
mysql_username: root
mysql_password: secret
ssh_public_key_path: "/root/.ssh/id_rsa.pub"
deploy_image_filename: "user_image.qcow2"
create_image_via_dib: false
transform_boot_image: false
create_ipa_image: false
dnsmasq_dns_servers: 8.8.8.8,8.8.4.4
dnsmasq_router: 172.16.166.14
dhcp_pool_start: 172.16.166.20
dhcp_pool_end: 172.16.166.50
dhcp_lease_time: 12h
dhcp_static_mask: 255.255.255.0" > /opt/stack/bifrost/playbooks/inventory/group_vars/localhost
As you can see, we're telling Ansible where to find Ironic and how to access it, as well as the authentication information for the database so state information can be retrieved and saved. We're specifying the image to use, and the networking information.
Notice that there's no default gateway for DHCP in the configuration above, so I'm going to fix it manually after the install.yaml playbook execution.
Install ansible and all of bifrost's dependencies:
# bash ./scripts/env-setup.sh
# source /opt/stack/bifrost/env-vars
# source /opt/stack/ansible/hacking/env-setup
# cd playbooks

After that, let's install all packages that we need for bifrost (Ironic, MySQL, rabbitmq, and so on) …
# ansible-playbook -v -i inventory/localhost install.yaml

… and the Ironic staging drivers with already merged patches for enabling Ironic ansible driver functionality:
# cd /opt/stack/
# git clone git://git.openstack.org/openstack/ironic-staging-drivers
# cd ironic-staging-drivers/

Now you're ready to do the actual installation.
# pip install -e .
# pip install "ansible>=2.1.0"
You should see typical "installation" output.
In the /etc/ironic/ironic.conf configuration file, add the "pxe_ipmitool_ansible" value to the list of enabled drivers. In our case, it's the only driver we need, so let's remove the other drivers:
# sed -i '/enabled_drivers =*/cenabled_drivers = pxe_ipmitool_ansible' /etc/ironic/ironic.conf

If you want to enable cleaning and disable disk shredding during the cleaning procedure, add these options to /etc/ironic/ironic.conf:
automated_clean = true
erase_devices_priority = 0

Finally, restart the Ironic conductor service:
# service ironic-conductor restart

To check that everything was installed properly, execute the following command:
# ironic driver-list | grep ansible
| pxe_ipmitool_ansible | test |
You should see the pxe_ipmitool_ansible driver in the output.
Finally, add the default gateway to /etc/dnsmasq.conf (be sure to use the IP address for your own gateway).
# sed -i '/dhcp-option=3,*/cdhcp-option=3,172.16.166.1' /etc/dnsmasq.conf

Now that everything's set up, let's look at actually doing the provisioning.
How to use ironic-ansible-driver to provision bare-metal servers with custom configurations
Now let's look at actually provisioning the servers. Normally, we'd use a custom ansible deployment role that satisfies Ansible's requirements regarding idempotency to prevent issues that can arise if a role is executed more than once, but because this is essentially a spike solution for us to use in the lab, we've relaxed that requirement. (We've also hard-coded a number of values that you certainly wouldn't in production.) Still, by walking through the process you can see how it works.
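To illustrate the idempotency point, here is a hypothetical pair of Ansible tasks (they are illustrative only and not taken from the role used below). The first version re-runs a destructive command on every execution; the second guards it so that a repeated run becomes a no-op:

# Not idempotent: re-partitions the disk every time the role runs
- name: create data partition
  command: parted -s /dev/sda mkpart primary 0% 100%

# Idempotent: skipped if the partition device already exists
- name: create data partition
  command: parted -s /dev/sda mkpart primary 0% 100%
  args:
    creates: /dev/sda1

A production-quality role applies the same discipline to the RAID, LVM and image-writing steps, so re-running the playbook against an already provisioned node does not wipe it.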

Download the custom ansible deployment role:
curl -Lk https://github.com/vnogin/Ansible-role-for-baremetal-node-provision/archive/master.tar.gz | tar xz -C /opt/stack/ironic-staging-drivers/ironic_staging_drivers/ansible/playbooks/ --strip-components 1

Next, create an inventory file for the bare metal server(s) that need to be provisioned:
# echo "---
 server1:
   ipa_kernel_url: "http://172.16.166.14:8080/ansible_ubuntu.vmlinuz"
   ipa_ramdisk_url: "http://172.16.166.14:8080/ansible_ubuntu.initramfs"
   uuid: 00000000-0000-0000-0000-000000000001
   driver_info:
     power:
       ipmi_username: IPMI_USERNAME
       ipmi_address: IPMI_IP_ADDRESS
       ipmi_password: IPMI_PASSWORD
       ansible_deploy_playbook: deploy_custom.yaml
   nics:
     -
       mac: 00:25:90:a6:13:ea
   driver: pxe_ipmitool_ansible
   ipv4_address: 172.16.166.22
   properties:
     cpu_arch: x86_64
     ram: 16000
     disk_size: 60
     cpus: 8
   name: server1
   instance_info:
     image_source: "http://172.16.166.14:8080/user_image.qcow2"" > /opt/stack/bifrost/playbooks/inventory/baremetal.yml

# export BIFROST_INVENTORY_SOURCE=/opt/stack/bifrost/playbooks/inventory/baremetal.yml
As you can see above, we have added all of the required information for bare-metal node provisioning using IPMI. If needed, you can add information about any number of bare-metal servers here, and all of them will be enrolled and deployed later, as in the hypothetical second entry sketched below.
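For instance, a second entry could be appended to the same baremetal.yml, following exactly the same structure; only the UUID, MAC address, IPMI address, IP address and name need to differ. The values shown here are placeholders, not real hosts:

 server2:
   ipa_kernel_url: "http://172.16.166.14:8080/ansible_ubuntu.vmlinuz"
   ipa_ramdisk_url: "http://172.16.166.14:8080/ansible_ubuntu.initramfs"
   uuid: 00000000-0000-0000-0000-000000000002
   driver_info:
     power:
       ipmi_username: IPMI_USERNAME
       ipmi_address: IPMI_IP_ADDRESS_2
       ipmi_password: IPMI_PASSWORD
       ansible_deploy_playbook: deploy_custom.yaml
   nics:
     -
       mac: 00:25:90:a6:13:eb
   driver: pxe_ipmitool_ansible
   ipv4_address: 172.16.166.23
   properties:
     cpu_arch: x86_64
     ram: 16000
     disk_size: 60
     cpus: 8
   name: server2
   instance_info:
     image_source: "http://172.16.166.14:8080/user_image.qcow2"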
Finally, you'll need to build a ramdisk for the Ironic ansible deploy driver and create a deploy image using DIB (disk image builder). Start by creating an RSA key that will be used for connectivity from the Ironic ansible driver to the provisioning bare metal host:
# su - ironic
# ssh-keygen
# exit

Next set environment variables for DIB:
# export ELEMENTS_PATH=/opt/stack/ironic-staging-drivers/imagebuild
# export DIB_DEV_USER_USERNAME=ansible
# export DIB_DEV_USER_AUTHORIZED_KEYS=/home/ironic/.ssh/id_rsa.pub
# export DIB_DEV_USER_PASSWORD=secret
# export DIB_DEV_USER_PWDLESS_SUDO=yes

Install DIB:
# cd /opt/stack/diskimage-builder/
# pip install .

Create the bootstrap and deployment images using DIB, and move them to the web folder:
# disk-image-create -a amd64 -t qcow2 ubuntu baremetal grub2 ironic-ansible -o ansible_ubuntu
# mv ansible_ubuntu.vmlinuz ansible_ubuntu.initramfs /httpboot/
# disk-image-create -a amd64 -t qcow2 ubuntu baremetal grub2 devuser cloud-init-nocloud -o user_image
# mv user_image.qcow2 /httpboot/

Fix file permissions:
# cd /httpboot/
# chown ironic:ironic *

Now we can enroll and deploy our bare metal node using ansible:
# cd /opt/stack/bifrost/playbooks/
# ansible-playbook -vvvv -i inventory/bifrost_inventory.py enroll-dynamic.yaml
Wait for the provisioning state to read "available", as a bare metal server needs to cycle through a few states and can be cleaned along the way, if needed. During the enrollment procedure, the node can be cleaned by the shred command. This process takes a significant amount of time, so you can disable or fine-tune it in the Ironic configuration (as you saw above, where we enabled it).
Now we can start the actual deployment procedure:
# ansible-playbook -vvvv -i inventory/bifrost_inventory.py deploy-dynamic.yaml
If deployment completes properly, you will see the provisioning state for your server as "active" in the Ironic node-list.
+--------------------------------------+---------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name    | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+---------+---------------+-------------+--------------------+-------------+
| 00000000-0000-0000-0000-000000000001 | server1 | None          | power on    | active             | False       |
+--------------------------------------+---------+---------------+-------------+--------------------+-------------+

Now you can log in to the deployed server via ssh using the login and password that we defined above during image creation (ansible/secret) and then, because the infrastructure to use it has now been created, clone the multi-root tool from Github.
Conclusion
As you can see, bare metal server provisioning isn't such a complicated procedure. Using the Ironic standalone server (bifrost) with the Ironic ansible driver, you can easily develop a custom ansible role for your specific deployment case and simultaneously deploy any number of bare metal servers in automation mode.
I want to say thank you to Pavlo Shchelokovskyy and Ihor Pukha for your help and support throughout the entire process. I am very grateful to you guys.
The post Automate bare metal server provisioning using Ironic (bifrost) and the ansible deploy driver appeared first on Mirantis | The Pure Play OpenStack Company.
Quelle: Mirantis

Why IBM is tripling its cloud data center capacity in the UK

The need for cloud data centers in Europe continues to grow.
UK cloud adoption rates have increased to 84 percent over the last five years, according to Cloud Industry Forum.
That’s why I am thrilled to announce a major expansion of IBM UK cloud data centers, tripling the capacity in the United Kingdom to meet this growing customer demand. The investment expands the number of IBM cloud data centers in the country from two to six.

It is the largest commitment IBM Cloud has made to one country at one time. Expanding a cloud data center footprint in the UK that began more than five years ago, IBM will have more UK data centers than any other vendor.
Meeting demand in highly regulated industries
Highly regulated industries, such as the public sector and financial services, have nuanced and sensitive infrastructure and security needs.
The UK government's Digital Transformation Plan to boost productivity has put digital technologies at the heart of the UK's economic future.
The Government Digital Service (GDS), which leads the digital transformation of government, runs GOV.UK, helping millions of people find the government services and information they need every day. To make public services simpler, better and safer, the UK's national infrastructure and digital services require innovative solutions, strong cyber security defenses and high availability platforms. It is thus essential to embrace the digital intelligence that will deliver outstanding services to UK citizens.
In response, IBM is further building out its capabilities through its partnership with Ark Data Centres, the majority owner in a joint venture with the UK government. Together, we’re delivering public data center services that are already being used at scale by high-profile, public-sector agencies.
It is all about choice
The IBM point of view is to design a cloud that brings greater flexibility, transparency and control over how clients manage data, run businesses and deploy IT operations.
Hybrid is the reality of cloud migration. Clients don’t want to move everything to the public cloud or keep everything in the private cloud. They want to have a choice.
For example, IBM offers the opportunity to keep data local in client locations to those enterprises with fears about data residency and compliance with regulations for migration of sensitive workloads. Data locality is certainly a factor for European businesses, but even more businesses want the ability to move existing workloads to the cloud and provide cognitive tools and services that allow them to fuel new cloud innovations.
From cost savings to innovation platform
Data is the game changer in cloud.
IBM is optimizing its cloud for data and analytics, infused with services including Watson, blockchain and Internet of Things (IoT) so that clients can take advantage of higher-value services in the cloud. This is not just about storage and compute. If clients can’t analyze and gain deeper insights from the data they have in the cloud, they are not using cloud technology to its full potential.
Besides, our customers are focusing more and more on value creation and innovation. That's why travel innovators are adopting IBM Cloud, fueled by Watson's cognitive intelligence, to transform interactions with customers and speed the delivery of new services.
Thomson, part of TUI UK & Ireland, one of the UK’s largest travel operators, taps into one of IBM’s UK cloud data centers to run its new tool developed in IBM’s London Bluemix Garage. The app uses Watson APIs such as Conversation, Natural Language Classifier and Elasticsearch on Bluemix to enable customers to receive holiday destination matches based on natural language requests like "I want to visit local markets" or "I want to see exotic animals."
Other major brands, including Dixons Carphone, National Express, National Grid, Shop Direct, Travis Perkins PLC, Wimbledon, Finnair, EVRY and Lufthansa, are entrusting IBM Cloud to transform their business to create more seamless, personalized experiences for customers and accelerate their digital transformation.
By the end of 2017, IBM will have 16 fully operational cloud data centers across Europe, representing the largest and most comprehensive European cloud data center network. Overall, IBM now has the largest cloud data center footprint globally, with more than 50.
These new IBM Cloud data centers will help businesses in industries such as retail, banking, government and healthcare meet customer needs.
The post Why IBM is tripling its cloud data center capacity in the UK appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Three Considerations for Planning your Docker Datacenter Deployment

Congratulations! You've decided to change your application environment with Docker Datacenter. You're now on your way to greater agility, portability and control within your environment. But what do you need to get started? In this blog, we will cover things you need to consider (strategy, infrastructure, migration) to ensure a smooth POC and migration to production.
1. Strategy
Strategy involves doing a little work up front to get everyone on the same page. This stage is critical to align expectations and set clear success criteria for exiting the project. The key focus areas are determining your objective, planning how to achieve it, and knowing who should be involved.
Set the objective – This is a critical step as it helps to set clear expectations, define a use case and outline the success criteria for exiting a POC. A common objective is to enable developer productivity by implementing a Continuous Integration environment with Docker Datacenter.
Plan how to achieve it – With a clear use case and outcome identified, the next step is to look at what is required to complete this project. For a CI pipeline, Docker is able to standardize the development environment, provide isolation of the applications and their dependencies and eliminate any "works on my machine" issues to facilitate the CI automation. When outlining the plan, make sure to select the pilot application. The work involved will vary depending on whether it is a legacy application refactoring or new application development.
Integration between source control and CI allows Docker image builds to be triggered automatically from a standard Git workflow. After Docker images are built, they are shipped to the secure Docker registry that stores them (Docker Trusted Registry), where role-based access controls enable secure collaboration. Images can then be pulled and deployed across a secure cluster as running applications via the management layer of Docker Datacenter (Universal Control Plane), as sketched below.
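As a rough sketch of what that pipeline can reduce to, the CI job triggered by a commit might run something like the commands below. The registry address, repository, and service name are placeholders, and the exact deploy step depends on your CI tool and on how your UCP client bundle is configured:

$ docker build -t dtr.example.com/dev/my-app:$GIT_COMMIT .
$ docker login dtr.example.com
$ docker push dtr.example.com/dev/my-app:$GIT_COMMIT
$ docker service update --image dtr.example.com/dev/my-app:$GIT_COMMIT my-app

The first three commands build and publish the image to Docker Trusted Registry; the last, run with a UCP client bundle sourced, rolls the new image out to the running service.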
Know who should be involved – The solution will involve multiple teams, and it is important to include the correct people early to avoid any potential barriers later on. Depending on the initial project, these can include the development, middleware, security, architecture, networking, database, and operations teams. Understand their requirements, address them early, and gain consensus through collaboration.
PRO TIP – Most first successes tend to be web applications with some sort of data tier that can either utilize traditional databases or be containerized with persistent data being stored in volumes.
 
2. Infrastructure
Now that you understand the basics of building a strategy for your deployment, it’s time to think about infrastructure.  In order to install Docker Datacenter (DDC) in a highly available (HA) deployment, the minimum base infrastructure is six nodes.  This will allow for the installation of three UCP managers and three DTR replicas on worker nodes in addition to the worker nodes where the workloads will be deployed. An HA set up is not required for an evaluation but we recommend a minimum of 3 replicas and managers for production deployments so your system can handle failures.
PRO TIP – A best practice is to not deploy and run any container workloads on the UCP managers and DTR replicas. These nodes perform critical functions within DDC and are best if they only run the UCP or DTR services.
Nodes are defined as cloud, virtual or physical servers with Commercially Supported (CS) Docker Engine installed as a base configuration.
Each node should consist of a minimum of:

4GB of RAM
16GB storage space
For RHEL/CentOS with devicemapper: separate block device OR additional free space on the root volume group should be available for Docker storage.
Unrestricted network connectivity between nodes
OPTIONAL Internet access to Docker Hub to ease the initial downloads of the UCP/DTR and base content images
Installed with a Docker-supported operating system
Sudo access credentials to each node

Other nodes may be required for related CI tooling. For a POC built around DDC in a HA deployment with CI/CD, ten nodes are recommended. For a POC built around DDC in a non-HA deployment with CI/CD, five nodes are recommended.
Below are specific requirements for the individual components of the DDC platform:
Universal Control Plane

Commercially Supported (CS) Docker Engine must be used in conjunction with DDC.
TCP Load balancer should be available for UCP in an HA configuration.
A valid DNS entry should be created for the load balancer VIP.
SSL certificate from a trusted root CA should be created (a self-signed certificate is created for UCP and may be used but additional configuration is required).
DDC License for 30 day trial or annual subscription must be obtained or purchased for the POC.

Docker Trusted Registry

Commercially Supported (CS) Docker Engine must be used in conjunction with DDC.
TCP Load balancer should be available for DTR in an HA configuration.
A valid DNS entry should be created for the load balancer VIP.
Image Storage options include a clustered filesystem for HA or blob storage (AWS S3, Azure, S3 compatible storage, or OpenStack Swift)
SSL certificate from a trusted root CA should be created (a self-signed certificate is created for DTR and may be used but additional configuration is required).
LDAP/AD is available for authentication; managed built-in authentication can also be used but requires additional configuration
DDC License for 30 day trial or annual subscription must be obtained or purchased for the POC.

The POC design phase is the ideal time to assess how Docker Datacenter will integrate into your existing IT infrastructure, from CI/CD, networking/load balancing, volumes for persistent data, configuration management, monitoring, and logging systems. During this phase, understand how the existing tools fit and discover any gaps in your tooling. With the strategy and infrastructure prepared, begin the POC installation and testing. Installation docs can be found here.
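For orientation, installing UCP on the first manager node boils down to running the installer image against the local Docker engine. The sketch below uses a placeholder host address, and installer flags vary by version, so treat the linked installation docs as authoritative:

$ docker run --rm -it --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp install --host-address 192.168.1.10 --interactive

Additional manager nodes and DTR replicas are then joined to this first node to reach the HA topology described above.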
 
3. Moving from POC Into Production
Once you have built out your POC environment, how do you know if it's ready for production use? Here are some suggested methods to handle the migration.

Perform the switchover from the non-Dockerized apps to Docker Datacenter in pre-production environments. If you have Dev, Test, and Prod environments, switch over Dev and/or Test and run through a set burn-in cycle that allows for proper testing of the environment, looking for any unexpected or missing functionality. Once non-production environments are stable, switch over the production environment.

Start integrating Docker Datacenter alongside your existing application deployments. This method requires that the application can run with multiple instances running at the same time. For example, if your application is fronted by a load balancer, add the Dockerized application to the existing load balancer pool and begin sending traffic to the application running in Docker Datacenter. Should issues arise, remove the Dockerized application from the load balancer pool until they can be resolved.

Completely cut over to a Dockerized environment all in one go. As additional applications begin to utilize Docker Datacenter, continue to use a tested pattern that works best for you to provide a standard path to production for your applications.

We hope these tips, learned from firsthand experience with our customers, help you in planning your deployment. By standardizing your application environment and simultaneously adding more flexibility for your application teams, Docker Datacenter gives you a foundation to build, ship and run containerized applications anywhere.


Enjoy your Docker Datacenter POC

Get started with your Docker Datacenter POC
See What’s New in Docker Datacenter
Learn more by visiting the Docker Datacenter webpage
Sign up for a free 30 day trial

The post Three Considerations for Planning your Docker Datacenter Deployment appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

8 steps to help your organization enter the cloud

Say you're a CIO or CTO who wants to make a fundamental shift in how digital technology can drive your enterprise to innovate and produce transformational business outcomes. Say you know how it can change not just the operations of your business, but its culture as well.
In essence, you're ready to enter the cloud.
As I talk to clients who are at this stage of their cloud journey, the big question then becomes, "How?"
Certainly cloud architecture, process and functionality are important ingredients for success, but consider stepping back and looking at the big picture. After all, you're making a fundamental shift in your enterprise. You want to ensure that cloud can support your business mission, and one way to ensure that is to develop a cloud implementation strategy.
How do you form that strategy? At IBM, we're fond of the word “think,” and through our work with the research analysis firm Frost and Sullivan, we've come up with some ways to help think through and plan your cloud journey:
1. Educate your IT team.
Make sure your team understands that moving to cloud technology is not outsourcing or a way to cut jobs, but rather an opportunity. By shifting the "grunt work" of infrastructure deployment and maintenance to a cloud provider, it will free up IT professionals to participate in more strategic work.
2. Make it “cloud first” for any new projects.
This simply means that when your business needs a new application, start by considering cloud-based solutions. With a "cloud first" policy, corporate developers become champions of strategy and heroes to their line of business colleagues.
3. Move test and development to the public cloud.
On-demand access to scalable resources and pay-as-you-go pricing enable developers to test, replicate, tweak, and test again in an environment that replicates the production environment. This simple move will free up hundreds of hours of IT operational resources to work on the cloud or other strategic projects.
4.  Review your IT maintenance schedule.
Check for planned hardware and software upgrades and refreshes. Major upgrades can be disruptive to users, as well as costly and time-consuming to implement. Where possible, you should synchronize planned upgrades with your cloud project. In some cases, you may decide that certain workloads should remain in your on-premises data center for the time being.
5. Organize a cross-functional project planning team.
Identify workloads to migrate. This is your opportunity to gain the trust of line-of-business managers who, in many companies, consider IT a roadblock. The term "fast solutions" will play very well to this audience.
6. Hire an expert provider to spearhead the project.
In setting out to build their cloud strategies, most businesses face two handicaps: a lack of expertise and few resources to spare. An outside expert can assist with tasks from risk assessment, to strategy development, to project planning, to management of the migration project. But remember, your provider should focus on a successful business outcome, not just a "tech flash-cut."
7. Plan your ongoing cloud support needs.
The time to consider how you will manage your cloud is now, before you start moving strategic workloads. While you may be at the beginning of your cloud journey, you should look ahead to the inevitable time when the majority of workloads will be cloud-delivered. You may want to consider one of the few cloud service providers to offer a managed-service option.
8. Build your migration and integration project plan.
This is the essential on-ramp to your company’s cloud journey. Work with your experts and cross-functional team to identify two or three simple, low-risk workloads to move to the cloud. For most enterprises, the best bets are web-enabled workloads that are neither critical, nor strategic to the running of the business, and that require limited interaction with external data sources.
Those are the essentials. Use them to achieve your "digital revolution."
To learn more, read “Stepping into the Cloud: A Practical Guide to Creating and Implementing a Successful Cloud Strategy.”
Image via FreeImages.com/Stephen Calsbeek
The post 8 steps to help your organization enter the cloud appeared first on news.
Quelle: Thoughts on Cloud

Creating and accessing a Kubernetes cluster on OpenStack, part 3: Run the application

Finally, you're ready to actually interact with the Kubernetes API that you installed. The general process goes like this:

Define the security credentials for accessing your applications.
Deploy a containerized app to the cluster.
Expose the app to the outside world so you can access it.

Let's see how that works.
Define security parameters for your Kubernetes app
The first thing that you need to understand is that while we have a cluster of machines that are tied together with the Kubernetes API, it can support multiple environments, or contexts, each with its own security credentials.
For example, if you were to create an application with a context that relies on a specific certificate authority, I could then create a second one that relies on another certificate authority. In this way, we both control our own destiny, but neither of us gets to see the other's application.
The process goes like this:

First, we need to create a new certificate authority which will be used to sign the rest of our certificates. Create it with these commands:
$ sudo openssl genrsa -out ca-key.pem 2048
$ sudo openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"

At this point you should have two files: ca-key.pem and ca.pem. You'll use them to create the cluster administrator keypair. To do that, you'll create a private key (admin-key.pem), then create a certificate signing request (admin.csr), then sign it to create the public key (admin.pem).
$ sudo openssl genrsa -out admin-key.pem 2048
$ sudo openssl req -new -key admin-key.pem -out admin.csr -subj "/CN=kube-admin"
$ sudo openssl x509 -req -in admin.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out admin.pem -days 365

Now that you have these files, you can use them to configure the Kubernetes client.
Download and configure the Kubernetes client

Start by downloading the kubectl client on your machine. In this case, we're using Linux; adjust appropriately for your OS.
$ curl -O https://storage.googleapis.com/kubernetes-release/release/v1.4.3/bin/linux/amd64/kubectl

Make kubectl executable:
$ chmod +x kubectl

Move it to your path:
$ sudo mv kubectl /usr/local/bin/kubectl

Now it's time to set the default cluster. To do that, you'll want to use the URL that you got from the environment deployment log. Also, make sure you provide the full location of the ca.pem file, as in:
$ kubectl config set-cluster default-cluster --server=[KUBERNETES_API_URL] --certificate-authority=[FULL-PATH-TO]/ca.pem
In my case, this works out to:
$ kubectl config set-cluster default-cluster --server=http://172.18.237.137:8080 --certificate-authority=/home/ubuntu/ca.pem

Next you need to tell kubectl where to find the credentials, as in:
$ kubectl config set-credentials default-admin --certificate-authority=[FULL-PATH-TO]/ca.pem --client-key=[FULL-PATH-TO]/admin-key.pem --client-certificate=[FULL-PATH-TO]/admin.pem
Again, in my case this works out to:
$ kubectl config set-credentials default-admin --certificate-authority=/home/ubuntu/ca.pem --client-key=/home/ubuntu/admin-key.pem --client-certificate=/home/ubuntu/admin.pem

Now you need to set the context so kubectl knows to use those credentials:
$ kubectl config set-context default-system --cluster=default-cluster --user=default-admin
$ kubectl config use-context default-system
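Incidentally, this is where the multiple-contexts idea from earlier becomes concrete: you could register a second set of credentials and switch between them with use-context. A purely hypothetical example (the user name and key files are made up):
$ kubectl config set-credentials another-admin --client-certificate=/home/ubuntu/other.pem --client-key=/home/ubuntu/other-key.pem
$ kubectl config set-context other-system --cluster=default-cluster --user=another-admin
$ kubectl config use-context other-system
Each context bundles a cluster and a user, so switching contexts switches which credentials kubectl presents to the API server.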

Now you should be able to see the cluster:
$ kubectl cluster-info

Kubernetes master is running at http://172.18.237.137:8080
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Terrific!  Now we just need to go ahead and run something on it.
Running an app on Kubernetes
Running an app on Kubernetes is pretty simple and is related to firing up a container. We'll go into the details of what everything means later, but for now, just follow along.

Start by creating a deployment that runs the nginx web server:
$ kubectl run my-nginx --image=nginx --replicas=2 --port=80

deployment “my-nginx” created

By default, containers are only visible to other members of the cluster. To expose your service to the public internet, run:
$ kubectl expose deployment my-nginx --target-port=80 --type=NodePort

service “my-nginx” exposed

OK, so now it's exposed, but where? We used the NodePort type, which means that the external IP is just the IP of the node that it's running on, as you can see if you get a list of services:
$ kubectl get services

NAME         CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   11.1.0.1      <none>        443/TCP   3d
my-nginx     11.1.116.61   <nodes>       80/TCP    18s

So we know that the "nodes" referenced here are kube-2 and kube-3 (remember, kube-1 is the API server), and we can get their IP addresses from the Instances page…

… but that doesn't tell us what the actual port number is. To get that, we can describe the actual service itself:
$ kubectl describe services my-nginx

Name:                   my-nginx
Namespace:              default
Labels:                 run=my-nginx
Selector:               run=my-nginx
Type:                   NodePort
IP:                     11.1.116.61
Port:                   <unset> 80/TCP
NodePort:               <unset> 32386/TCP
Endpoints:              10.200.41.2:80,10.200.9.2:80
Session Affinity:       None
No events.

So the service is available on port 32386 of whatever machine you hit. But if you try to access it, something's still not right:
$ curl http://172.18.237.138:32386

curl: (7) Failed to connect to 172.18.237.138 port 32386: Connection timed out

The problem here is that by default, this port is closed, blocked by the default security group.  To solve this problem, create a new security group you can apply to the Kubernetes nodes.  Start by choosing Project->Compute->Access & Security->+Create Security Group.
Specify a name for the group and click Create Security Group.
Click Manage Rules for the new group.

By default, there's no access in; we need to change that. Click +Add Rule.

In this case, we want a Custom TCP Rule that allows Ingress on port 32386 (or whatever port Kubernetes assigned the NodePort). You can specify access only from certain IP addresses, but we'll leave that open in this case. Click Add to finish adding the rule.

Now that you have a functioning security group, you need to add it to the instances Kubernetes is using as worker nodes – in this case, the kube-2 and kube-3 nodes. Start by clicking the small triangle on the button at the end of the line for each instance and choosing Edit Security Groups.
You should see the new security group in the left-hand panel; click the plus sign (+) to add it to the instance:

Click Save to save the changes.

Add the security group to all worker nodes in the cluster.
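If you prefer to script this instead of clicking through Horizon, roughly equivalent OpenStack CLI commands look like the following; the group name is arbitrary, and the port must match whatever NodePort Kubernetes assigned to your service:
$ openstack security group create kube-nodeports
$ openstack security group rule create kube-nodeports --protocol tcp --dst-port 32386:32386 --remote-ip 0.0.0.0/0
$ openstack server add security group kube-2 kube-nodeports
$ openstack server add security group kube-3 kube-nodeports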
Now you can try again:
$ curl http://172.18.237.138:32386

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
   body {
       width: 35em;
       margin: 0 auto;
       font-family: Tahoma, Verdana, Arial, sans-serif;
   }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
As you can see, you can now access the Nginx container you deployed on the Kubernetes cluster.

Coming up, we'll look at some of the more useful things you can do with containers and with Kubernetes. Got something you'd like to see? Let us know in the comments below.
The post Creating and accessing a Kubernetes cluster on OpenStack, part 3: Run the application appeared first on Mirantis | The Pure Play OpenStack Company.
Quelle: Mirantis

CloudNativeCon and KubeCon: What we learned

Imagine yourself on a surfboard. You're alone. You're paddling out farther into the sea and you're ready to catch a giant wave. Only you look to your left, to your right and behind you, and you suddenly realize you're not alone at all. There are countless other surfers who share your aim.
That’s how developers are feeling about cloud native application development and Kubernetes as excitement builds for the impending wave.
The excitement was apparent during the recent CloudNativeCon and KubeCon joint event in Seattle. More than 1,000 developers gathered to share ideas around the growing number of projects under the Cloud Native Compute Foundation (via Linux Foundation) banner. That includes Kubernetes, one of the foundation's most significant and broadly adopted projects.

Despite the fact that it’s still relatively early days for Kube and cloud native computing, CNCF executive director Dan Kohn said there are plenty of reasons to be excited about cloud native.
In his opening keynote, Kohn highlighted these top advantages that cloud native offers:

Isolation. Containerizing applications ensures that you get the same version in development and production. Operations are simplified.
No lock-in. When you choose a vendor that relies on open technology, you’re not locked in to using that vendor.
Improved scalability. Cloud native provides the ability to scale your application to meet customer demand in real time.
Agility and maintainability. These factors are improved when applications are split into microservices.

It was apparent by the sessions alone that Kubernetes is already seeing enterprise adoption. Numerous big-name companies were presented as use cases.
Chris Aniszczyk, VP of developer programs for The Linux Foundation, shared some of the impressive growth numbers around the CNCF and Kube communities:

Now @cra wrapping up a busy 2 days with some impressive numbers! CloudNativeCon the hard way! @CloudNativeFdn @kelseyhightower pic.twitter.com/ySe5pNokjM
— Jeffrey Borek (@jeffborek) November 10, 2016

And if conference attendance is any indication, the community is poised to grow even more over the next few months. Next year’s CloudNativeCon events in Berlin and Austin are expected to double or triple the Seattle attendance number.
The IBM contribution to Kubernetes
The work IBM is doing with Kubernetes is twofold. First and foremost, IBM is helping the community understand its pain points and contribute its resources, as it does with dozens of open source projects. Second, IBM developers and technical leaders are working with internal product teams to fold in Kubernetes into the larger cloud ecosystem.
“Because Kubernetes is going to be such an important part of our infrastructure going forward, we want to make sure we contribute as much as we get out of it,” IBM Senior Technical Staff Member Doug Davis said at the CloudNativeCon conference. “We’re going to see more people coming to our team, and you’re going to see a bigger IBM presence within the community.”
IBM is also committed to helping the Kubernetes community interact and cooperate with other open source communities. Kubernetes technology provides plug points and extensibility points that allow it to be run on OpenStack, for example.
Brad Topol, a Distinguished Engineer who leads IBM work in OpenStack, explained how the communities are working together:

At CloudNativeCon in Seattle @BradTopol discusses the relationship between OpenStack and CNCF. pic.twitter.com/o2wj8swTBo
— IBM Cloud (@IBMcloud) November 8, 2016

Serverless momentum continues
Serverless remained a hot topic at CloudNativeCon. IBMer Daniel Krook presented a keynote on the topic, including an overview of OpenWhisk, the IBM open source serverless offering that is available on Bluemix:

LIVE on : @DanielKrook talks OpenWhisk at CloudNativeCon. Slides: https://t.co/P51xrjVqFP https://t.co/dRJmHKiXcy
— IBM Cloud (@IBMcloud) November 9, 2016

Krook also joined in to provide a solid definition of “serverless,” something that tends to spark debate whenever the topic is broached:

The buzz around serverless continues at CloudNativeCon. @DanielKrook gives his definition of this emerging technology. pic.twitter.com/UzFhqtBnD0
— IBM Cloud (@IBMcloud) November 9, 2016

An update on the Open Container Initiative
In a lightning talk, Jeff Borek, Worldwide Program Director of Open Cloud Business Development, joined Microsoft Senior Program Manager Rob Dolin for an update on the OCI. The organization started in 2015 as a Linux Foundation project with the goal of creating open, industry standards around container formats and runtimes.
Watch their session here:

LIVE on Periscope: From CloudNativeCon, @JeffBorek & @RobDolin discuss the Open Container Initiative. https://t.co/rKpa4UpRcn
— IBM Cloud (@IBMcloud) November 9, 2016

Learn more: "Why choose a serverless architecture?"
The post CloudNativeCon and KubeCon: What we learned appeared first on news.
Quelle: Thoughts on Cloud

New Dockercast episode and interview with Docker Captain Laura Frank

We recently had the opportunity to catch up with the amazing Laura Frank. Laura is a developer focused on making tools for other developers. As an engineer at Codeship, she works on improving the Docker infrastructure and overall experience for users on Codeship. Previously, she worked on several open source projects to support Docker in the early stages of the project, including Panamax and ImageLayers. She currently lives in Berlin.
Laura is also a Docker Captain, a distinction that Docker awards select members of the community that are experts in their field and passionate about sharing their Docker knowledge with others.
As we do with all of these podcasts, we begin with a little bit of history of "How did you get here?" Then we dive into the Codeship offering and how it optimizes its delivery flow by using Docker containers for everything. We then end up with a "What's the coolest Docker story you have?" I hope you enjoy it; please feel free to comment and leave suggestions.
 

In addition to the questions covered in the podcast, we’ve had the chance to ask Laura for a couple additional questions below.
How has Docker impacted what you do on a daily basis?
I’m lucky to work with Docker every day in my role as an engineer at Codeship. In addition to appreciating the technical aspects of Docker, I really enjoy seeing the different ways the Docker ecosystem as a whole empowers engineering teams to move faster. Docker is really impactful at two levels: we can use Docker to simplify the way we build and distribute software. But we can also solve problems in more unique ways because containerization is more accessible. It’s not just about running a production application in containers; you can use Docker to provide a distributed system of containers in order to scale up and down and handle task processing in interesting ways. To me, Docker is really about reducing friction in the development process and allowing engineers to focus on the stuff we’re best at: solving complex problems in interesting ways.
As a Docker Captain, how do you share that learning with the community?
I’m usually in front of a crowd, talking through a set of problems that can be solved with Docker. There are lots of great ways to share information with others, from writing a blog post or presenting a webinar, to answering questions at a meetup. I’m very hands on when it comes to helping people wrap their heads around the questions they have when using Docker. I think the best way to help is to open my laptop and work through the issues together.
Since Docker is such a complex and vast ecosystem, it’s important that Captains, and all of us who lead different areas of the Docker community, understand that each person has different levels of expertise with different components. The goal isn’t to impress people with how smart you are or what cool things you’ve built; the goal is to help your peers become better at what they do. But, the most important point is that everyone has something to contribute to the community.
Who are you when you’re not online?
I really love to get far away from computers when I’m not at work. I think there are so many other interesting parts of me that aren’t related to the work I do in the Docker community, and are separate from me as a technologist. You have to strike the right balance to stay focused and healthy. I love to adventure outdoors: canoeing and kayaking in the summer, running around the city, hiking, and camping. Eliminating distractions and giving my brain some time to recover helps me think more clearly and strategically during the week.
How did you first get involved with Docker?
In 2013, I worked at HP Cloud on an infrastructure engineering team, and someone shared Solomon’s lightning talk from PyCon in an IRC or HipChat channel. I remember being really intrigued by the technical complexity and greater vision that he expressed. Later, my boss from HP left to join CenturyLink Labs, where he was building out a team to work on Docker-related developer tools, and a handful of us went with him. It was a huge gamble. There wasn’t much in the way of dev tools built around Docker, and those projects were really fun and exciting to work on, because we were just figuring out everything as we went along. My team was behind Panamax, ImageLayers, Lorry, and Dray, to name a few. If someone were to take me back to 2013 and tell me that this weirdly obscure new project would be the thing I spend 100% of my time working with, I wouldn’t have believed them, but I’m really glad it’s true.
If you could switch your job with anyone else, whose job would you want?
I’d be a pilot. I think it also shares common qualities with my role as an engineer: I love the high-level view and seeing lots of complex systems working together. Plus, I think I’d look pretty cool in a tactical jumpsuit. Maybe I’ll float that idea by the rest of the engineers on my team as a possible dress code update.
Do you have a favorite quote?
“Don’t half-ass two things. Whole-ass one thing” – Ron Swanson. It’s really tempting to try to learn everything about everything, especially related to technology that is constantly changing. The Docker world can be pretty chaotic. Sometimes it’s better to slow down, focus on one component of the ecosystem, and rely on the expertise of your peers for guidance in other areas. The Docker Community is a great place to see this in action, because you simply can’t do it all yourself. You have to rely on the contributions of others. And you know, finish unloading the dishwasher before starting to clean the bathroom. Ron Swanson is a wise man in all areas of life.
 
The post New Dockercast episode and interview with Docker Captain Laura Frank appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Conquering impossible goals with real-time analytics

“Past data only said, ‘go faster’ or ‘ride better,’” Kelly Catlin, Olympic Cyclist and Silver Medalist, shared with the audience at the IBM World of Watson event on 24 October. In other words, the feedback generated from all her analytics data sources — the speed, cadence, power meters on her bicycle — was generally useless to this former mountain bike racer who wanted to improve her track cycling performance by 4.5 percent to capture a medal at the 2016 Rio Olympic Games.

USA Cycling Women's Team Pursuit

While I am by no means an Olympic level athlete, I knew exactly what Kelly meant. I’ve logged over 300 miles in running races over 8 years, and just in this past year started to see some small improvements in my 5Ks and half-marathons. Suddenly, I started asking, “How much faster could I run a half marathon? Could I translate these improvements to longer distances?” I downloaded all my historical race information into an Excel chart. I looked at my Runkeeper and Strava training runs. Despite all this data, I was stuck. “What should I do to improve?” I asked a coach. He said, “Run more during the week.”
But I wanted to know more. How much capacity do I really have? How much does my asthma limit me? Should I only run in certain climates? During which segments of a race should I speed up or slow down? Just like Kelly, who spent four hours per session reviewing data, I understood how historical data had limited impact on improving current performance.
According to Derek Bouchard-Hall, CEO of USA Cycling, “At the elite level, a 3 percent performance improvement in 12 months is attainable but very difficult. For the USA Women’s Team Pursuit Team, they had only 11 months and needed 4.5 percent improvement which would require them to perform at a new world record time (4.12/15.4 Lap Average). The coach could account for the 3 percent in physiological improvement but needed technology to bring the other 1.5 percent. He focused in two areas: equipment (bike/tire, wind tunnel training) and real-time analytic training insights.”

How exactly could real-time analytics insight change performance?
According to Kelly, “Now, we can make executable changes.” She and her teammates now know when to make a transition of who is leading the group, how best to make that transition, and which times of the race to pick up cadence.
The result: USA Women’s Team Pursuit finished the race in 4:12:454 to secure the silver medal behind Great Britain, which finished in 4:10:236.
The introduction of data sets and technology did not alone lead to Team USA’s incredible improvement. Instead, it was the combination of well-defined goals, strategic implementation of technology, and actionable, timely recommendations that led to their strong performance and results.
As you consider how to improve an area of your business, keep in mind these three things from the USA Cycling project with IBM Bluemix:

Set well-defined goals. Or, as business expert Stephen Covey would say, “always begin with the end in mind.” USA Cycling clearly articulated they needed to increase performance by 4.5 percent, and that would take more than a coach.
Choice and implementation of technology matters. Choose the tools that will not only deliver analytics data and insights, but do so in a timely and relevant manner for your business. Explore how to get started with IBM Bluemix.
Data alone doesn’t equal guidance. You must review the data, and with your colleagues, your coach, your running buddy, set clear, executable actions.

The IBM Bluemix Garage Method can help you define your ideas and bring a culture of innovation agility to your cloud development.
A version of this post originally appeared on the IBM Bluemix blog.
The post Conquering impossible goals with real-time analytics appeared first on news.
Quelle: Thoughts on Cloud