One cloud to rule them all — or is it?

So you’ve sold your organization on private cloud.  Wonderful!  But to get that ROI you’re looking for, you need to scale quickly and get paying customers from your organization to fund your growing cloud offerings.
It’s the typical Catch-22 situation when trying to do something on the scale of private cloud: You can’t afford to build it without paying customers, but you can’t get paying customers without a functional offering.
In the rush to break the cycle, you onboard more and more customers.  You want to reach critical mass and become the de-facto choice within your organization.  Maybe you even have some competition within your organization you have to edge out.  Before long you end up taking anyone with money.  
And who has money?  In the enterprise, more often than not it's the bread and butter of the organization: the legacy workloads.
Promises are made.  Assurances are given.  Anything to onboard the customer.  “Sure, come as you are, you won’t have to rewrite your application; there will be no/minimal impact to your legacy workloads!”
But there's a problem here. Legacy workloads – that is, those large, vertically scaled behemoths that don't lend themselves to "cloud native" principles – present both a risk and an opportunity when growing your private cloud, depending on how they are handled.
(Note: Just because a workload has been virtualized does not make it "cloud-native". In fact, many virtualized workloads, even those implemented using SOA, service-oriented architecture, will not be cloud native. We'll talk more about classifying, categorizing and onboarding different workloads in a future article.)
"Legacy" cloud vs "Agile" cloud
The term "legacy cloud" may seem like a bit of an oxymoron, but hear me out. For years, surveys that ask people about their cloud use have had to include responses from people who considered vSphere a cloud, because the line between cloud and virtualization is largely irrelevant to most people.
Or at least it was, when there wasn't anything else.
But now there's a clear difference. Legacy cloud is geared toward these legacy workloads, while agile cloud is geared toward more "cloud native" workloads.
Let’s consider some example distinctions between a “Legacy Cloud” and an “Agile Cloud”. This table shows some of the design trade-offs between environments built to support legacy workloads versus those built without those restrictions:

Legacy Cloud: No new features/updates (platform stability emphasis), or updates that are very infrequent, limited, and controlled
Agile Cloud: Regular/continuous deployment of the latest and greatest features (platform agility emphasis)

Legacy Cloud: Live migration support (redundancy in the platform instead of in the app); DRS in the case of ESXi hypervisors managed by VMware
Agile Cloud: Highly scalable and performant local storage, with the ability to support other performance-enhancing features like huge pages; no live-migration security and operational burdens

Legacy Cloud: VRRP for Neutron L3 router redundancy
Agile Cloud: DVR for network performance and scalability; apps built to handle failure of individual nodes

Legacy Cloud: LACP bonding for compute node network redundancy
Agile Cloud: SR-IOV for network performance; apps built to handle failure of individual nodes

Legacy Cloud: Bring your own (specific) hardware
Agile Cloud: Shared, standard hardware (white boxes), defrayed with tenant chargeback policies

Legacy Cloud: ESXi hypervisor or bare metal as a service (Ironic) to insulate the data plane, and/or separate controllers to insulate the control plane
Agile Cloud: OpenStack reference KVM deployment

A common theme here is features that force you to choose between designing for performance and scalability (such as Neutron DVR) and designing for HA and resiliency (such as VRRP for Neutron L3 agents).
It's one or the other, so introducing legacy workloads into your existing cloud can conflict with other objectives, such as increasing development velocity.
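To make that trade-off concrete, here is a minimal sketch of the mutually exclusive Neutron defaults involved. Treat it as an assumption-laden illustration rather than a recipe: crudini is used purely for convenience, and the option names should be verified against your OpenStack release.
For an HA-first, legacy-leaning cloud, VRRP-backed routers:
$ sudo crudini --set /etc/neutron/neutron.conf DEFAULT l3_ha True
$ sudo crudini --set /etc/neutron/neutron.conf DEFAULT max_l3_agents_per_router 3
For a performance-first, agile-leaning cloud, distributed virtual routers instead:
$ sudo crudini --set /etc/neutron/neutron.conf DEFAULT router_distributed True
Each flag only sets the default for newly created routers, which is exactly why a single cloud struggles to satisfy both expectations at once.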
So what do you do about it?
If you find yourself in this situation, you basically have three choices:

Onboard tenants with legacy workloads and force them to potentially rewrite their entire application stack for cloud
Onboard tenants with legacy workloads into the cloud and hope everything works
Decline to onboard tenants/applications that are not cloud-ready

None of these are great options.  You want workloads to run reliably, but you also want to make the onboarding process easy without imposing large barriers to entry for tenants' applications.
Fortunately, there's one more option: split your cloud infrastructure according to the types of workloads, and engineer a platform offering for each. Now, that doesn't necessarily mean a separate cloud.
The main idea is to architect your cloud so that you can provide a legacy-type environment for legacy workloads without compromising your vision for cloud-aware applications. There are two ways to do that:

Set up a separate cloud with an entirely new control plane for the associated compute capacity.  This option offers complete decoupling between workloads, and allows changes, updates, and upgrades to be isolated to the other environment without exposing legacy workloads to that risk.
Use compute nodes such as ESXi hypervisor or bare metal (e.g., Ironic) for legacy workloads. This option maintains a single OpenStack control plane while still helping isolate workloads from OpenStack upgrades, disruptions, and maintenance activities in your cloud.  For example, ESXi networking is separate from Neutron, and bare metal is your ticket out of being the bad guy for rebooting hypervisors to apply kernel security updates.

Keep in mind that these aren’t mutually exclusive options; it is possible to do both.  
Of course, each option comes with its own downsides as well: an additional control plane involves additional overhead (to build and operate), and running a mixed hypervisor environment has its own set of engineering challenges, complications, and limitations.  Both options also add overhead when it comes to repurposing hardware.
There's no instant transition
Many organizations get caught up in the “One Cloud To Rule Them All” mentality, trying to make everything the same and work with a single architecture to achieve the needed economies of scale, but ultimately the final decision should be made according to your situation.
It's important to remember that no matter what you do, you will have to deal with a transition period, which means you need to provide a viable path for your legacy tenants/apps to gradually make the switch.  But first, assess your situation:

If your workloads are all of the same type, then there’s not a strong case to offer separate platforms out of the gate.  Or, if you’re just getting started with cloud in your organization, it may be premature to do so; you may not yet have the required scale, or you may be happy with onboarding only those applications which are cloud ready.
When you have different types of workloads with different needs – for example, Telco/NFV vs. Enterprise/IT vs. BigData/IoT workloads – you may want to think about different availability zones inside the same cloud, so the specific nuances of each type can be addressed inside its own zone while maintaining a single cloud from a configuration, lifecycle management, and service assurance perspective, ideally with similar hardware throughout. (Having similar hardware makes it easier to keep spares on hand.) A minimal sketch of this kind of segmentation appears below.
If you find yourself in a situation where you want to innovate with your cloud platform, but you still need to deal with legacy workloads with conflicting requirements, then workload segmentation is highly advisable.  In this case, you'll probably want to break from the "One Cloud" mentality in favor of the flexibility of multiple clouds.  If you try to satisfy both your "innovation" mindset and your legacy workload holders on one cloud, you'll likely disappoint both.

After making this choice, you may then plan your transition path accordingly.
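For the availability-zone route, a minimal sketch with the standard OpenStack client might look like the following. The aggregate, zone, host, flavor, and image names are purely illustrative, and the flavor-pinning step assumes the AggregateInstanceExtraSpecsFilter is enabled in the Nova scheduler:
$ openstack aggregate create --zone legacy legacy-hosts
$ openstack aggregate set --property workload=legacy legacy-hosts
$ openstack aggregate add host legacy-hosts compute-legacy-01
$ openstack flavor set --property aggregate_instance_extra_specs:workload=legacy m1.legacy
$ openstack server create --availability-zone legacy --flavor m1.legacy --image rhel7 legacy-app-01
Legacy tenants land on the dedicated hosts either explicitly (via the availability zone) or implicitly (via the flavor), while everything else stays on the general-purpose capacity, all under one control plane.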
Moving forward
Even if you do create a separate legacy cloud, you probably don't want to maintain it in perpetuity.  Think about your transition strategy; a basic and effective carrot-and-stick approach is to limit new features and cloud-native functionality to your agile cloud, and to bill/charge back at higher rates in your legacy cloud (rates which are, in any case, justified by the costs incurred to provide and support this option).
Whatever you ultimately decide, the most important thing to do is make sure you've planned it out appropriately, rather than just going with the flow, so to speak. If you need to, contact a vendor such as Mirantis; they can help you do your planning and get to production as quickly as possible.
The post One cloud to rule them all – or is it? appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis

The retailer cloud journey: An incremental climb

As a Cloud Advisor who supports several retail clients, I find it impressive to see a company mature in its cloud adoption and realize true value in its business transformation.
One such story is of a major U.S. retailer’s journey to cloud. It’s a story that’s still being written.  But how did this story begin?
It started with the retailer's vision for rapid retail innovation with lower IT costs. The company was looking to support a more customer-focused infrastructure, enabled for faster design and larger-scale experimentation on its digital initiatives.
The retailer required a solution that struck an optimal balance between performance, customer service, cost management, and self-service automation, and it converged on a continuously available architecture on IBM Cloud.
Here's a breakdown of the details:
Continuous availability
The basic technical concept that enables continuous availability is the capacity to run a service from multiple, geo-dispersed “clouds” in parallel. Each cloud is capable of running the business service independently of its peers, yet replicates state and persistent data to its peer clouds. This requires uniform, reliable and performant network access to replicate state and persistent data.
IBM Cloud delivers on both promises, providing global, high-performance infrastructure capable of supporting applications at "internet scale." Further, SoftLayer's standardized data center and pod design provides modular hardware configurations. Global, unmetered access to a private network backbone facilitates data replication across these geo-dispersed clouds.
Self-service automation
With the architecture proven across all seasons of the retail business cycle, the retailer's focus is now on self-service automation and on learning from its initial deployments. SoftLayer was built with automation in sight. Everything in the platform (including the provisioning and de-provisioning of services, logging, billing, and alerts) is automated and controlled by the Infrastructure Management System (IMS). Each function provided in the IMS is available via APIs, supporting the company's goals for full-stack automation.
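As a rough illustration of that API surface, here is a minimal sketch of listing an account's virtual guests via SoftLayer's documented REST pattern; the credentials are placeholders, and the exact call should be checked against the current API reference:
$ curl -u "$SL_USERNAME:$SL_API_KEY" https://api.softlayer.com/rest/v3/SoftLayer_Account/getVirtualGuests.json
The same username/API-key pattern covers provisioning, billing, and monitoring calls, which is what makes scripting full-stack automation against IMS practical.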
With cost efficiencies derived from cloud behind them, the retailer continues to mature into an environment for innovation and business value.
Teams defined their principles based on deep-rooted cultural values. Keeping the focus on customer centricity translates into ensuring each design element and each realized feature benefits the customer. Realizing the benefits derived through automation, application development teams focused on reducing the length of the development cycle. This approach has served the retailer's product development teams well for years, allowing application development teams to rapidly test, experiment, and learn.
Interested? Want to know how to get started? This process is well aligned with the IBM Garage method, combining industry best practices for design thinking, lean startup, agile development, DevOps, and cloud to build and deliver innovative solutions.
This major retailer's cloud transformation story is still being written, and IBM is proud to be its partner. We continue to work actively together, bringing clarity to how IBM and industry cloud practices and technologies can help achieve business objectives.
An Elite Cloud Advisor, Jyoti Chawla specializes in developing enterprise transformation strategies and architectures that help global enterprises adopt cloud and emerging technologies. Follow Jyoti on Twitter and LinkedIn.
The post The retailer cloud journey: An incremental climb appeared first on Cloud Computing.
Source: Thoughts on Cloud

CloudForms as a Container

The CloudForms 4.1 release (June '16) delivered a new format for the CloudForms appliance: a container image in Docker format. CloudForms has led the way by offering the appliance in several different virtualization and cloud formats, such as:

Red Hat Virtualization
Red Hat OpenStack Platform
Google Cloud Platform
Microsoft Azure
Microsoft SCVMM (Hyper-V)
VMware vSphere

With the new CloudForms container you can now host CloudForms on:

Red Hat OpenShift Enterprise 3
Red Hat Atomic Host (7.2 or higher)
Red Hat Enterprise Linux (7.2 or higher)
Anywhere using docker

This is really groundbreaking for a cloud management platform, as container technology brings additional levels of portability, scalability, and security.
Another great benefit is how simple it is to instantiate the container.
NOTE: Red Hat CloudForms 4.1 availability as a container image is currently a TECHNICAL PREVIEW and is therefore UNSUPPORTED for production use. See Technology Preview Features Support Scope for more information. You can obtain the Red Hat CloudForms container image from https://registry.access.redhat.com.
Here are the various ways you can instantiate CloudForms across the different container platforms available.
Red Hat Atomic Host

Install Red Hat Atomic Host.
Log in via SSH to your Atomic Host.
Download the CloudForms container:

# atomic install cloudforms/cfme4:latest

Run the CloudForms container:

# atomic run cloudforms/cfme4:latest
Alternatively you can also use the docker command to run the CloudForms container:
# docker run --privileged -di -p 80:80 -p 443:443 cloudforms/cfme4:latest
Red Hat Enterprise Linux

Install Red Hat Enterprise Linux 7.2
Log in via SSH to your Red Hat Enterprise Linux 7.2
Register your system with Red Hat:

# subscription-manager register --username=<rhnuser> --password=<pwd>
# subscription-manager list --available
# subscription-manager attach --pool=<pool_id>
# subscription-manager repos --enable=rhel-7-server-extras-rpms
# subscription-manager repos --enable=rhel-7-server-optional-rpms

Install docker and needed dependencies:

# yum install docker device-mapper-libs device-mapper-event-libs

Start the docker service:

# systemctl start docker.service

Enable the docker service:

# systemctl enable docker.service

Run the CloudForms container:

# docker run --privileged -di -p 80:80 -p 443:443 cloudforms/cfme4:latest

Log in using a browser at http://<hostname>

Anywhere with docker

Install docker.
Edit /etc/sysconfig/docker and add the Red Hat registry to the ADD_REGISTRY key:

ADD_REGISTRY='--add-registry registry.access.redhat.com'

Restart the docker service.
Execute the following command:

# docker run --privileged -di -p 80:80 -p 443:443 cloudforms/cfme4:latest
Lastly: SSH Access
Execute the following command to obtain a bash prompt on the CloudForms container to do things like import items or view log files:
# sudo docker exec -i -t <container ID/name> /bin/bash
You will be given access under the /var/www/miq/vmdb path.
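For example, once inside the container you can follow the main CloudForms log; the path below reflects the usual appliance layout, but verify it against your image:
# cd /var/www/miq/vmdb
# tail -f log/evm.log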
Source: CloudForms

Red Hat Confirms Over 40 Accepted Sessions at OpenStack Summit Barcelona

This Fall's 2016 OpenStack Summit in Barcelona, Spain will be an exciting event. After a challenging issue with the voting system this time around (it somehow prevented direct URLs to each session), the Foundation has posted the final session agenda, detailing the entire week's schedule of sessions and events. Once again, I am excited to see that, based on community voting, Red Hat will be sharing over 40 sessions of technology overviews and deep dives around OpenStack services for containers, storage, networking, compute, network functions virtualization (NFV), and much more.
Red Hat is a Premier sponsor in Barcelona this Fall, and we are looking forward to sharing all of our general sessions, workshops, and a full-day breakout track. To learn more about Red Hat's accepted sessions, have a look at the details below. Be sure to visit us at each session you can make, come by our booth in the Marketplace (which opens Monday evening during the booth crawl, 6-7:30pm), or contact your Red Hat sales representative to meet with any of our executives, engineers, or product leaders face-to-face while in Barcelona. Either way, we look forward to seeing you all again in Spain in October!
For more details on each session, click on the title below:

Tuesday October 25th
General sessions

Deploying and Operating a Production Application Cloud with OpenStack
Chris Wright, Pere Monclus (PLUMgrid), Sandra O'Boyle (Heavy Reading), Marcel Haerry (Swisscom)
11:25am-12:05pm

Delivering Composable NFV Services for Business, Residential & Mobile Edge
 Azhar Sayeed, Sharad Ashlawat (PLUMgrid)
12:15pm-12:55pm

I found a security bug, what happens next?
 Tristan de Cacqueray and Matthew Booth
2:15pm-2:55pm

Failed OpenStack Update?! Now What?
Roger Lopez
2:15pm-2:55pm

OpenStack Scale and Performance Testing with Browbeat
Will Foster, Sai Sindhur Malleni, Alex Krzos
2:15pm-2:55pm

OpenStack and the Orchestration Options for Telecom / NFV
Chris Wright, Tobias Ford (AT&T), Hui Deng (China Mobile), Diego Lopez Garcia (Telefonica)
3:05pm-3:45pm

How to Work Upstream with OpenStack
Julien Danjou, Ashiq Khan (NTT), Ryota Mibu (NEC)
3:05pm-3:45pm

Live From Oslo
Kenneth Giusti, Joshua Harlow (Go Daddy), Oleksii Zamiatin (Mirantis), ChangBo Guo (EasyStack), Alexis Lee (HPE)
3:05pm-3:45pm

OpenStack and Ansible: Automation born in the Cloud
Keith Tenzer
3:05pm-3:45pm

Message Routing: a next-generation alternative to RabbitMQ
Kenneth Giusti, Andrew Smith
3:05pm-3:45pm

Pushing your QA upstream
Rodrigo Duarte Sousa
3:55pm-4:35pm

TryStack: The Free OpenStack Community Sandbox
Will Foster, Kambiz Aghaiepour
3:55pm-4:35pm

Kerberos and Health Checks and Bare Metal, Oh My! Updates to OpenStack Sahara in Newton
Elise Gafford, Nikita Konovalov (Mirantis), Vitaly Gridnev (Mirantis)
5:05pm-5:45pm

Wednesday October 26th

Feeling a bit deprecated? We are too. Let's work together to embrace the OpenStack Unified CLI.
 Darin Sorrentino, Chris Janiszewski
11:25am-12:55pm

The race conditions of Neutron L3 HA's scheduler under scale performance
John Schwarz, Ann Taraday (Mirantis), Kevin Benton (Mirantis)
11:25am-12:55pm

Barbican Workshop – Securing the Cloud
Ade Lee, Douglas Mendizabel (Rackspace), Elvin Tubillara (IBM), Kaitlin Farr (John Hopkins University), Fernando Diaz (IBM)
11:25am-12:55pm

Cinder Always On – Reliability And Scalability Guide
Gorka Eguileor, Michal Dulko (Intel)
12:15pm-12:55pm

OpenStack is an Application! Deploy and Manage Your Stack with Kolla-Kubernetes
Ryan Hallisey, Ken Wronkiewicz (Cisco), Michal Jastrzebski (Intel)
2:15pm-2:55pm

OpenStack Requirements: What we are doing, what to expect, and what's next?
 Swapnil Kulkarni and Davanum Srinivas
3:55pm-4:35pm

Stewardship: bringing more leadership and vision to OpenStack
 Monty Taylor, Amrith Kumar (Tesora), Colette Alexander (Intel), Thierry Carrez (OpenStack Foundation)
3:55pm-4:35pm

Using OpenStack Swift to empower Turkcell's public cloud services
 Christian Schwede, Orhan Biyiklioglu (Turkcell) & Doruk Aksoy (Turkcell)
5:05pm-5:45pm

Lessons Learned from a Large-Scale Telco OSP+SDN Deployment
Guil Barros, Cyril Lopez, Vicken Krissian
5:05pm-5:45pm

KVM and QEMU Internals: Understanding the IO Subsystem
Kyle Bader
5:05pm-5:45pm

Effective Code Review
Dougal Matthews
5:55pm-6:35pm

Thursday October 27th

 Anatomy Of OpenStack Through The Eagle Eyes Of Troubleshooters
 Sadique Puthen
9:00am-9:40am

The Ceph Power Show: Hands-on Lab to learn Ceph, "The most popular Cinder backend"
Brent Compton, Karan Singh
9:00am-9:40am

 Building self-healing applications with Aodh, Zaqar and Mistral
Zane Bitter, Lingxian Kong (Catalyst IT), Fei Long Wang (Catalyst IT)
9:00am-9:40am

 Writing A New Puppet OpenStack Module Like A Rockstar
Emilien Macchi
9:50am-10:30am

 Ambassador Community Report
Erwan Gallen, Kavit Munshi (Aptira), Jaesuk Ahn (SKT), Marton Kiss (Aptira), Akihiro Hasegawa (Bit-isle Equinix, Inc)
9:50am-10:30am

 VPP: the ultimate NFV vSwitch (and more!)?
Franck Baudin, Uri Elzur (Intel)
9:50am-10:30am

 Zuul v3: OpenStack and Ansible Native CI/CD
James Blair
11:00am-11:40am

 Container Defense in Depth
Thomas Cameron, Scott McCarty
11:50am-12:30pm

 Analyzing Performance in the Cloud : solving an elastic problem with a scientific approach
Alex Krzos, Nicholas Wakou (Dell)
11:50am-12:30pm

 One-stop-shop for OpenStack tools
Ruchika Kharwar
1:50pm-2:30pm

 OpenStack troubleshooting: So simple even your kids can do it
Vinny Valdez, Jonathan Jozwiak
1:50pm-2:30pm

 Solving Distributed NFV Puzzle with OpenStack and SDN
Rimma Iontel, Fernando Oliveira (VZ), Rajneesh Bajpai (BigSwitch)
2:40pm-3:20pm

 Ceph, now and later: our plan for open unified cloud storage
Sage Weil
2:40pm-3:20pm

How to configure your cloud to be able to charge your users using official OpenStack components!
Julien Danjou, Stephane Albert (Objectif Libre), Christophe Sauthier (Objectif Libre)
2:40pm-3:20pm

 A dice with several faces: Coordinators, mentors and interns on OpenStack Outreachy internships
Victoria Martinez de la Cruz, Nisha Yadav (Delhi Tech University), Samuel de Medeiros Queiroz (HPE)
3:30pm-4:10pm

 Yo dawg I herd you like Containers, so we put OpenStack and Ceph in Containers
 Sean Cohen, Sebastien Han, Federico Lucifredi
3:30pm-4:10pm

 Picking an OpenStack Networking solution
Russell Bryant, Gal Sagie (Huawei), Kyle Mestery (IBM)
4:40pm-5:20pm

Forget everything you knew about Swift Rings – here's everything you need to know about Swift Rings
Christian Schwede, Clay Gerrard (Swiftstack)
5:30pm-6:10pm

Source: RedHat Stack

Most enterprises tailor hybrid cloud to their specific needs

CIOs, CTOs and all line-of-business leaders looking to gain differentiation and strategic advantage: you've come a long way in the last four years when it comes to cloud technology.
That's one of the key takeaways from a new IBM Institute for Business Value report, Tailoring Hybrid Cloud.
My co-authors — IBMers Justin Chua, Robert Freese, Anthony Karimi, Julie Schuneman — and I wanted to answer a specific question: how are organizations currently differentiating themselves using cloud? To find out, we interviewed 30 executives and surveyed 1,000 global respondents from 18 industries. Sixty-one percent of respondents held the title of CIO, CTO or head of IT.
We learned some interesting things:

In 2012, cloud was still viewed as something "special." No longer. Seventy-eight percent of the executives we spoke with described their cloud initiatives as coordinated or fully integrated.
However, even with the rising use of cloud overall, almost half of computing workloads are expected to remain on dedicated, on-premises servers.

The implications of this became clear as we spoke to executives. Each enterprise is trying to tailor hybrid cloud to what best suits it.
Most often, it's a blend of public cloud, private cloud and traditional IT services. For many of these enterprises, finding the right cloud technology mix starts with deciding what to move to the cloud and addressing the challenges that can affect migration.
Our study also found that innovation advantages can be gained through rapid experimentation, strategic application programming interfaces (APIs), and extended access to external talent and technologies.
Conducting rapid experimentation gives innovative organizations the crucial ability to test and fail quickly. Cloud, with its on-demand and scalable attributes, enables this sort of nimble development and testing. What’s more, quick and automated resource provisioning can shorten development time and reduce time to market.
We discovered that executives achieved the strongest results, true strategic advantage and differentiation, by integrating cloud initiatives company-wide and tapping external resources for access to additional skills and greater efficiency.
Probably the most important thing the study revealed for organizations that are just beginning to tap into cloud technology or are ready to take the next step in digital transformation comes by way of three questions:

How is your organization planning to incorporate hybrid cloud into your overall transformation strategy?
What is the optimal combination of cloud and on-premises IT investments for your organization? What factors will you regularly monitor to identify needed changes over time?
How effective are you in tapping into external resources in assessing and implementing cloud-based solutions?

Cloud can be the centerpiece of an overall organizational transformation. Potential business impacts and the associated financial implications require ongoing scrutiny. During each stage of cloud adoption, combine the insights of business and IT. A tailor-made environment for your organization will be possible when IT employees truly understand what the business needs and line-of-business employees know what technologies/IT can do for them.
To learn more, read the IBM Institute for Business Value report, Tailoring hybrid cloud: designing the right mix for innovation, efficiency and growth.

The post Most enterprises tailor hybrid cloud to their specific needs appeared first on .
Source: Thoughts on Cloud

Report: IBM public cloud empowers developers

The latest edition of Forrester Research's Forrester Wave report, which evaluates global public cloud platforms, characterized IBM as a "strong performer" in public cloud.
IBM earned “the highest possible score for its private and hybrid cloud strategy as well as the top ranking for IBM’s infrastructure services,” eWeek reports. Forrester’s study used 34 evaluation criteria to evaluate eight global cloud platform service providers.
In particular, IBM empowers enterprise developers with the tools they need to build applications, Forrester’s report contends. It cites “platform configuration options, app migration services, cognitive analytics services, security and compliance certifications, complex networking support, growing partner roster and native DevOps tools” as strengths.
In a statement, Bill Karpovich, general manager of the IBM Cloud Platform, said:
We believe being recognized as a strong performer in Forrester's latest Wave report reinforces what we hear from our clients every day: that cloud is not 'one size fits all.' Enterprises require choice and expertise to evolve their diverse application portfolios, and IBM Cloud was designed to deliver on those core tenets.
For more about the Q3 Forrester Wave study, check out eWeek’s full report.
The post Report: IBM public cloud empowers developers appeared first on .
Source: Thoughts on Cloud

Welcome to the new Thoughts on Cloud

Notice anything different?
Today we launch a brand new design for Thoughts on Cloud, with several new features and navigational tools we hope will improve your reading experience. The changes are intended to make whatever you're looking for a snap to find, easy to share, enjoyable to read and open to your feedback.
The first thing you surely noticed is our new user interface, which we hope you find aesthetically pleasing and intuitive.
Here's what won't be changing: our content. We will continue to bring you the best in thought leadership and analysis from within IBM Cloud and elsewhere on topics within the sphere of cloud computing, including hybrid cloud, security, app development, cognitive computing, storage, mobile, big data and more. If any of those topics are of particular interest to you, we have categorized all our posts by topic for easy access. Simply hover over the dropdown menu at the top of the page for a list of categories.
If you click on a specific post of interest, scroll to the bottom to find three recommended, related articles. If you'd like to know what's popular on the particular day you're browsing the site, that's available in the sidebar to the right of the post text. Also in that sidebar is a quick, real-time look at the IBM Cloud Twitter feed so you can see the latest in cloud news.
We have opened up comments on many of our posts, so please join in our conversation about what's new in cloud computing. Thanks for reading Thoughts on Cloud. Stay tuned for much more about the world of cloud computing.
The post Welcome to the new Thoughts on Cloud appeared first on Cloud Computing.
Source: Thoughts on Cloud

The 4 Biggest Questions About Docker from VMworld 2016

Simply incredible. We spent last week at VMworld speaking with thousands of enterprise security, infrastructure and virtualization pros. It was humbling to witness all of the curiosity and excitement around Docker at the show, and how Docker clearly made a strong impression on the attendees.

This curiosity around Docker and its use within enterprise environments is the reason why I'm writing this blog. We noticed that many of the same questions arose, and we figured we should share them with you as you start your journey toward adopting Docker containers and VMs.
Here are the most commonly asked questions from the conference.

What is Docker? Or even a container? Is it a lightweight VM? Can I use it with vSphere? What value do they provide?

 

Containers are really about applications, not servers. That's why they aren't VMs. @docker VMworld
— Karen Lopez (@datachick) August 29, 2016
 
A Docker container is a standard unit in which application code, binaries and libraries can be packaged and isolated. The Docker Engine is the runtime installed on your infrastructure of choice, and it executes the commands to build and deploy containers. Many containers can be connected together to form a single application, or one container can include the entire codebase. Docker provides an abstraction layer between the application itself and the underlying compute infrastructure, making the application completely portable to any other endpoint running Docker.
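To make that concrete, here is a minimal, hypothetical sketch of packaging a tiny web front end; the base image, file names, and ports are our own illustrative choices (and the build assumes an index.html sits next to the Dockerfile):
$ cat Dockerfile
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/index.html
$ docker build -t example/hello-web .
$ docker run -d -p 8080:80 example/hello-web
The resulting image carries the app and everything it needs, so the exact same build runs unchanged on a laptop, in a vSphere VM, or on a cloud instance with Docker Engine installed.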
Docker containers are not VMs, nor even lightweight VMs, as their architecture is different. The image below displays the key differences between Docker containers and VMs. Docker containers share the OS kernel of the host, whereas each VM has a full copy of an OS inside the VM.

This does not mean these two models are mutually exclusive. Docker containers run anywhere a Docker Engine is installed, and Docker Engine runs on bare metal, in VMs (vSphere, Hyper-V) and in clouds (AWS, Google, Azure, and more). This also means that Docker containers are portable from any one of the above environments to the other without having to recode the application. Additionally, many users add containers into an existing virtual infrastructure to increase the density of workloads possible per VM.

There are several reasons why Docker containers are being adopted within the enterprise:

Security – Docker containers are completely isolated from one another, even when running on the same host and sharing the same OS. This makes them ideal for enterprise teams that leverage (for example) bare metal servers and are looking to comply with industry security regulations. And with the Docker Datacenter platform, enterprise teams receive on-premises tools chock full of security features.
Portability across infrastructure and app environments – Docker containers can run anywhere the Docker Engine is installed. This gives teams the ability to move their applications across different environments without having to tweak the code. For example, teams can easily move from vSphere to other environments like Azure and AWS.
Optimize resources – Docker containers can be deployed within VMs, and in fact vSphere is a great place to run them. Running multiple containers per VM reduces the overall VM footprint and decreases the costs associated with maintaining legacy apps. With fewer VMs, companies can also spend less on vSphere, including reduced hypervisor licensing costs.

 

Are you currently using @docker containers & VMs together? VMWorld
— Docker (@docker) August 21, 2016

Speed – Docker containers help streamline the application lifecycle, helping developers build applications more quickly and IT ops teams react faster to changing business needs. Containers spin up on average in ⅜ of a second, compared to VMs, which take several seconds or minutes. This sub-second spin-up time allows teams to onboard developers more quickly and deploy to production more frequently. (A quick way to check this yourself is sketched below.)
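If you want to sanity-check that spin-up claim yourself, a quick (and admittedly unscientific) test with any small image looks like this:
$ docker pull alpine
$ time docker run --rm alpine true
Once the image is cached locally, the reported real time is typically well under a second.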

Does Docker support Windows Server?

Will @Docker like containers ever catch on in Windows? http://t.co/jMHaVVVMFo VMworld
— Keith Townsend (@CTOAdvisor) August 26, 2014

Today Docker Engine runs on all major Linux distros, including Ubuntu, CentOS, RHEL, openSUSE and more. Support for Windows Server is the most popular question, as most companies have a mix of Windows- and Linux-based applications. I'm pleased to say that very soon, Docker Engine will run on Windows Server 2016. This means that the same Docker container technology and workflow can be applied to Linux and Windows Server workloads. For example, going forward, admins can have applications with a back-end Windows piece (e.g., Microsoft SQL Server) and a Linux-based web front end, all part of the same app… running in vSphere VMs, on bare metal, or in the cloud (boom)!
Windows Server 2016 with Docker is available as a tech preview to try here.
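Once you have a Windows Server 2016 tech-preview host with Docker Engine installed, a first experiment might look roughly like the following; note that the base image name reflects preview-era naming and is an assumption that may change by general availability:
$ docker pull microsoft/windowsservercore
$ docker run microsoft/windowsservercore cmd /c "echo Hello from a Windows Server container"
The workflow (pull, run, build) is the same one Linux users already know; only the base images and the kernel underneath differ.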

Docker sells commercial solutions built specifically with enterprise teams in mind

 

And here are the @Docker Commercial Management tools: Cloud VMworld pic.twitter.com/CxYKBVX8pL
— Arjan Timmerman (@Arjantim) August 29, 2016

Our commercial management platform, Docker Datacenter, is what enterprise teams are leveraging across the entire application lifecycle. Developers use our solution to quickly create, update, and deploy apps, while IT ops uses the platform to secure their application environment, comply with industry regulations, and deploy applications to production more frequently. In addition, they are able to reduce the overall application-related costs to the business.
As mentioned, Docker Datacenter is our enterprise solution. Sold as a monthly or annual subscription, Docker Datacenter (DDC) delivers an on-premises Containers-as-a-Service environment that IT ops teams use to manage and secure the environment and that devs use to create applications in a self-service manner. The tool provides an image registry, an orchestration/management plane, and commercial support from the Docker Customer Success team. This support also includes validated configurations of operating systems and support for previous versions of the Docker Engine.
Oh, and Docker Datacenter has got the GUIs
 
lots of options with @Docker – CLI, API, and GUI for deploying VMworld tfdx
— Tim Smith (@tsmith_co) August 29, 2016

Many VMware customers are accustomed to managing VMs in their vCenter GUI. So, they were happy to know that yes, there are Docker tools to help manage images and containers, and they come complete with a GUI. Well, there’s a couple actually. And just like how VMware users use tools built by VMware, for VMware, we recommend Docker users use tools built by Docker, for Docker.
With Docker Datacenter, IT Operations teams have the ability to manage, orchestrate and scale their Dockerized apps across their environment. The tool is chock full of enterprise features including:

Ability to deploy containers onto nodes directly from within the UCP GUI
Manage nodes, images and applications
Scale instances horizontally for times of peak application usage
Role-based access controls to control who can access what
Integration with LDAP/AD to quickly create teams and organizations
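For a sense of what sits underneath those GUI actions, here is a rough swarm-mode CLI analogue of scaling a service up for peak load; the service and image names are placeholders of ours, not anything from the product documentation:
$ docker swarm init
$ docker service create --name web --replicas 2 --publish 80:80 nginx:alpine
$ docker service scale web=5
Docker Datacenter layers the role-based access controls, LDAP/AD integration, and dashboard described above on top of the same underlying orchestration engine.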

Here is a quick look at the Docker Datacenter management dashboard.

Docker Datacenter also provides the capability to store, manage, and secure your images. Key features include:

Ability to sign images and ensure images are not tampered with
Ability to manage images, repositories, tags
Quickly update/patch apps and push new images to DTR
Integration with Universal Control Plane for quick deployment

How Docker Datacenter is priced, and what we mean when we say Docker “node”

The Docker Datacenter subscription is licensed by the number of Docker Engines you require. A node is anything (a VM, bare metal server, or cloud instance) with the Docker Engine installed on it. A good way to understand how many engines you require is to count the existing VMs, bare metal servers, or cloud instances you want to begin Dockerizing. Docker Datacenter is available on a monthly or annual subscription basis, with the option of business-day or business-critical support to align with your application service levels. Check out our pricing page to learn more.
For any virtualization gurus looking to learn more about Docker and how Docker containers and VMs can be used together I highly recommend you give this ebook on “Docker for the Virtualization Admin” a read.
Additional Resources

Read the eBook: Docker for The Virtualization Admin
Learn more about Docker Datacenter
See a demo of Docker Datacenter
Hear from Docker Datacenter Customers

 


The post The 4 Biggest Questions About Docker from VMworld 2016 appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Your Guide to ContainerCamp UK

ContainerCamp UK kicks off tomorrow in the heart of London's Piccadilly, and we can hardly contain our excitement. There are loads of talks that you won't want to miss!
 
Thursday, September 8th   
 
Ben Hall, co-organizer of the Docker London Meetup Group will be speaking at Container Camp Day 0, a joint event put on by the Docker London and Kubernetes London Meetup Groups. Tickets are free but space is already sold out. You can sign up on the waitlist.
Be on the lookout for Docker Captains Elton Stoneman, Benjamin Wooton, Alex Ellis, and Nicolas de Loof who will be in attendance and make sure to say hello.
 
Friday, September 9th
 
9:55 am: Ben Firshman, Director of Product Management at Docker – Building serverless apps with Docker
Everyone's talking about serverless right now. For good reason – it makes distributed apps much simpler to build, scale, and maintain. In this session, Ben will demonstrate how you can use Docker to mix in serverless techniques – right now – and how serverless is going to change how you build distributed apps in the future.

 
11:15 am: Nishant Totla, Docker Software Engineer – Orchestrating Linux containers while tolerating failures
Management of containers in production requires special care in order to keep the application up and running. In this session, learn the mechanisms and architecture of the Docker Engine orchestration platform (using a framework called swarmkit) to tolerate failures of services and machines, from cluster state replication and leader-election to container re-scheduling logic when a host goes down.

 
12:35 pm Lightning Talk: Nicolas de Loof – Continuous delivery in a container world
 
5:00 pm: Docker Captain Alex Ellis – Docker and IoT: securing the server room with realtime ARM microservices
Docker and Raspberry Pi are the perfect combination for protecting the data center against thermal overload and tampering. Learn how Docker Captain Alex Ellis used off-the-shelf hardware to create a scalable solution with help from Pimoroni and Docker Swarm.

 


The post Your Guide to ContainerCamp UK appeared first on Docker Blog.
Source: https://blog.docker.com/feed/