Steve Singh Joins Docker’s Board of Directors

The whole team at Docker would like to welcome Steve Singh, CEO of Concur and member of SAP’s Executive Board, to the Docker family. Steve has accepted a role on Docker’s Board of Directors, bringing his deep experience in building world-class organizations to the Docker board. Steve leads the SAP Business Networks & Applications Group, which brings together teams from the Ariba, Fieldglass, Concur, SAP Health, Business Data Network and SMP ERP groups. We had a chance to sit down with Steve to get his thoughts on his appointment to the Docker Board.

 
How and why did you initially become involved with Docker?
I was certainly aware of Docker. There were also a number of groups across SAP that were using Docker. When a member of the Docker board approached me about joining the company’s Board of Directors, I learned a fair bit more about the market opportunity Docker was pursuing and could easily see the importance of the Docker suite for corporate IT and ISVs. I was also intrigued by the opportunity to support Ben and Solomon in building an enduring business.
 
What led you to join the Board?
For me, there are two requirements when considering board roles. The first question I ask is: is the company focused on a meaningful problem or opportunity? Docker is focused on giving every developer an opportunity to be independent of the infrastructure that their services are delivered upon. That’s a huge opportunity across corporate IT and every ISV. When you think about how software is becoming the foundation for every industry, you can see the importance of Docker. The second factor is the nature of the founders. It is important to me to work with people with whom I have shared values. I like people who care deeply about their teammates, their community and the legacy that they will leave. Solomon and Ben were down-to-earth people with a passion for their company and their teammates. As a founder of a business myself, I was impressed that Solomon was trying to solve a big problem and wasn’t daunted by obstacles. I was hopeful that as a board member, I could help accelerate the mission that Solomon and Ben were executing against.
 

As someone who founded a high-growth startup and then scaled it, how does that perspective guide how you view your board role?
When I look back at my own experience at Concur, I realize that the early board members were strong financial investors but that they didn’t have a lot of operational experience. I think that the role of the board should be to provide that experience and guidance. Our role is to help the team think through and define their strategy and to help attract, develop and retain incredible leadership talent.
 
SAP (Ariba), which is part of your business unit, is a Docker customer. Did that play a role in your decision to join the Docker board? 
As it turns out, a number of businesses within SAP use Docker and the reviews I received from developers around the company were phenomenal. They loved the Docker product. I couldn’t find one part of the organization that had used Docker and didn’t love it. So while it didn’t factor into my decision to join the board, it was certainly encouraging to see the high regard for Docker.
 
As a founder who has grown his organization from a startup to a company with several successful business units, what lessons have you learned about maintaining that momentum?
Success is all about people — both the quality of the individuals that are part of the team and, perhaps more importantly, the culture that binds those individuals together. As your company gets larger, it is easy to lose your focus. It is easy for the “signal” to degrade from the founder to the newest person joining the team. Certainly part of that signal is the mission of the company, but the most important components of that signal are the values that define the company and the people that you want at your company. If you can keep that signal strong as you grow, you have every chance to build an incredible company. Not just one that succeeds financially and from a market perspective, but one that is like a second family.
 
What do you believe is compelling and unique about Docker’s commercial opportunities?
The entire Docker product line has massive opportunity and the open source and the commercial solutions feed into each other. I believe the opportunity is measured in the tens of billions as the demand for Docker among software developers and IT is growing at an unbelievable rate. Docker enables software developers and IT to plug and play into any infrastructure, which gives them control and real economic benefit. In the long term, SAP and other global 2000 companies will have leverage in working with their cloud providers because Docker enables 100 percent portability. This ensures that organizations will be able to seek competitive offerings while avoiding lock-in.
 
As you look ahead to the next year — what do you see as Docker’s priorities? What are the challenges? What do you see as the board’s challenges?
I see three main priorities for Docker in 2017. First, Ben and Solomon have to focus on recruiting to develop and bind together a great management team. It is not enough to recruit rock stars – companies need to develop teams that genuinely like working together. The mark of a successful team is one where colleagues form a friendship in a business environment. This reinforces their commitment, as they really don’t want to let their peers down. Second, we need to make sure we continue to set the pace for our open source solutions and ensure that our commercial solution, Docker Datacenter (DDC), significantly exceeds customers’ expectations. Third, we need to crush our 2017 business metrics, which I believe we can.
 
Tell us a little bit about yourself – what do you enjoy doing when you are not in your role at Concur or fulfilling your board duties at Concur, Cornerstone OnDemand, etc.?
I get a tremendous amount of joy from working with others. Through their own example, my parents taught me that the measure of life is improving the trajectory of humanity — no matter how small or large that improvement is. For me, the best way to accomplish that is to help others. I strive to help my co-workers, friends, community and of course my family. When I am not working, I am with my wife and kids. We have an active family life, and my wife and I like to participate in what our children are doing — whether it is with our youngest, who is into horseback riding, or working with our son, who has started his own company, or visiting our oldest daughter, who is in her final year at college. Family, friends and community — everything else is transient.
The post Steve Singh Joins Docker’s Board of Directors appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Enterprise cloud strategy: Governance in a multi-cloud environment

Enterprises manage risk.
It’s a business reality that applies as much to IT as it does to finance, operations or marketing.
To mitigate risk from data loss or downtime, or to retain control of enterprise data and application strategy, organizations today often use two or more cloud providers in their cloud environments. This multi-cloud strategy can also improve overall enterprise performance by avoiding “vendor lock-in” and making use of different infrastructures to meet the needs of diverse applications.
Whether you’re a chief information officer or chief technology officer planning or implementing a multi-cloud strategy, you must make some critical decisions, the first being governance. Multi-cloud governance is essential for fast delivery of cloud services while also satisfying enterprise needs for budget control, visibility, security and compliance. It can be broken down into two areas: cloud services brokerage and control-plane abstraction.
Gartner defines cloud services brokerage as an IT role and business model in which a company or other entity adds value to one or more public or private cloud services. The organization does this on behalf of the departments or lines of business that use the service. An IT department can assume the role itself, or the organization may choose to hire a cloud services broker to help. Regardless of how you source your brokerage, consider several questions to know how effective it is:

Can your brokerage strategy compare capabilities of various clouds for workloads? That is, can it determine “which cloud” is appropriate by workload?
Will it help you manage your cloud expenditures across user groups, departments and projects?
Will your brokerage create a holistic view of your IT environment and service-level agreements?

As you answer these questions, remember: integration across disparate APIs and governance processes is key to unlocking multi-cloud governance success. When addressed properly, it can help manage all aspects of your cloud environment, including access and control, security and compliance, and customer records. It can even provide needed visibility into your environment and scale cloud capabilities.
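To make the brokerage questions above concrete, here is a minimal sketch of what a brokerage layer that integrates disparate provider APIs might look like. All provider names, prices, and capability flags below are invented for illustration; real brokerage platforms are far richer than this.

```python
# Hypothetical sketch of a minimal cloud-brokerage layer.
# Provider names, prices, and capability flags are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class CloudOffering:
    name: str
    price_per_core_hour: float
    capabilities: set  # e.g. {"gpu", "bare-metal", "eu-region"}

@dataclass
class Brokerage:
    offerings: list
    spend_by_dept: dict = field(default_factory=dict)

    def pick_cloud(self, required_caps):
        """Answer 'which cloud?' per workload: the cheapest offering
        that satisfies every required capability, or None."""
        fits = [o for o in self.offerings if required_caps <= o.capabilities]
        return min(fits, key=lambda o: o.price_per_core_hour) if fits else None

    def record_usage(self, dept, offering, core_hours):
        """Roll up expenditure per department for budget visibility."""
        cost = offering.price_per_core_hour * core_hours
        self.spend_by_dept[dept] = self.spend_by_dept.get(dept, 0.0) + cost
        return cost

broker = Brokerage(offerings=[
    CloudOffering("cloud-a", 0.05, {"gpu", "eu-region"}),
    CloudOffering("cloud-b", 0.03, {"eu-region"}),
])
choice = broker.pick_cloud({"eu-region"})      # cheapest fitting cloud: cloud-b
broker.record_usage("marketing", choice, 100)  # tracked against the department
```

The point of the sketch is the shape of the interface: a single place that answers "which cloud for this workload?" and keeps a cross-provider view of spend, which is exactly what the questions above probe for.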
To enable cloud freedom while still meeting the enterprise’s security and compliance requirements, you need control-plane abstraction. Control-plane abstraction helps automate delivery of policies, procedures and configurations before cloud services are used. It helps reduce complexities and errors that easily arise in a multi-cloud environment.
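One way to picture "policies delivered before cloud services are used" is a pre-provisioning policy gate. The sketch below is a simplified illustration, not any vendor's implementation; the policy names, regions, and request fields are assumptions made up for the example.

```python
# Hypothetical pre-provisioning policy gate: a provisioning request is
# validated against required policies *before* any cloud service is used.
REQUIRED_POLICIES = {
    "encryption_at_rest": True,               # must be enabled everywhere
    "allowed_regions": {"eu-west", "eu-central"},
}

def validate_request(request):
    """Return a list of policy violations; an empty list means the
    request may proceed to provisioning."""
    violations = []
    if REQUIRED_POLICIES["encryption_at_rest"] and not request.get("encryption_at_rest", False):
        violations.append("encryption_at_rest must be enabled")
    if request.get("region") not in REQUIRED_POLICIES["allowed_regions"]:
        violations.append("region not in allowed list")
    return violations

ok = validate_request({"encryption_at_rest": True, "region": "eu-west"})
bad = validate_request({"region": "us-east"})  # fails both checks
```

Because the gate sits in the control plane rather than in any one cloud, the same checks apply no matter which provider ultimately serves the request.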
That same kind of control is vital for multi-cloud environments. One example: a customer-service application deployed on cloud may need access to authentication, customer data, pricing and other services that are developed and deployed on-premises. Without integration and control, your workloads and applications could have functional deficiencies or security exposures.
To ensure smooth flying through your clouds, you must successfully manage, at a minimum, three facets of control-plane abstraction.
First, the platform must have the ability to orchestrate and automate blueprints and application patterns. For example, it should be able to develop infrastructure and application stacks. Your platform should also be able to deploy hardened images across clouds that adhere to security and compliance requirements.
Second, you need top-notch identity and access management. Your on-premises access policies — particularly role-based access — should be extended to all cloud platforms. Additionally, you must restrict native portal access to each cloud and control management access through common tooling.
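A toy sketch of extending on-premises role-based access across clouds through common tooling follows; the roles, cloud names, and permission strings are all invented for illustration.

```python
# Hypothetical sketch: on-premises roles mapped to permissions that are
# applied uniformly on every managed cloud, with access mediated by the
# common control plane rather than each cloud's native portal.
ROLE_GRANTS = {
    "developer": {"vm:start", "vm:stop"},
    "operator":  {"vm:start", "vm:stop", "vm:delete", "network:edit"},
}

MANAGED_CLOUDS = ["cloud-a", "cloud-b"]

def allowed(role, cloud, action):
    """Check an action through the common tooling; unknown clouds or
    unknown roles are denied by default."""
    if cloud not in MANAGED_CLOUDS:
        return False
    return action in ROLE_GRANTS.get(role, set())

allowed("developer", "cloud-a", "vm:start")   # a developer may start VMs
allowed("developer", "cloud-b", "vm:delete")  # but not delete them anywhere
```

The deny-by-default behavior for unmanaged clouds mirrors the advice above: native portal access is restricted, and everything flows through the common tooling.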
Finally, incident, problem and change management solutions should be integrated to provide visibility — the proverbial “single pane of glass” — across multiple cloud environments from diverse providers. Warning: quality and service levels differ between service providers. Know the default service levels for each cloud.
In practical terms, good governance in a multi-cloud environment means not being blindsided by unexpected costs, security problems, or poor platform and API integration. It’s the necessary first step in implementing your cloud strategy, transforming your organization and joining the digital revolution. Once you’ve done it, it’s time to take on applications and data in a multi-cloud environment — which I’ll discuss in my next post.
For more information about cloud brokerage services, read “Hybrid IT through Cloud Brokerage”.
The post Enterprise cloud strategy: Governance in a multi-cloud environment appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Everything You Need To Know About A Trump Server's Chats With A Russian Bank

“At the end of the day, we don’t know what happened.”

Gustavo Caballero / Getty Images

SAN FRANCISCO — Did Republican presidential nominee Donald Trump have a special email server used exclusively to communicate with a Russian bank with ties to President Vladimir Putin? On Monday night, the internet was abuzz with speculation after Slate published a story claiming that a number of experts had not only found the email server, but had concluded there was a “sustained relationship between a server registered to the Trump Organization and two servers registered to an entity called Alfa Bank,” a large, private bank in Russia whose oligarch founders have close ties to Putin.

Democratic presidential nominee Hillary Clinton piled into the news cycle with a tweet calling for an investigation into Trump’s ties to Russia.



Quelle: BuzzFeed

IBM wins Frost & Sullivan 2016 Cloud Company of the Year award

Market research firm Frost & Sullivan has conferred its 2016 Cloud Company of the Year award to IBM, citing hybrid integration and affordability as major factors.
Lynda Stadtmueller, Vice President of Cloud Services for Stratecast/Frost & Sullivan, explained that the IBM Cloud platform was chosen because it “supports the concept of ‘hybrid integration’ — that is, a hybrid IT environment in which disparate applications and data are linked via a comprehensive integration platform, allowing the apps to share common management functionality and control.”
The capabilities she noted enable Bluemix users to tap into analytics functionality and Watson.
Stadtmueller continued: “IBM Cloud offers a price-performance advantage over competitors due to its infrastructure configurations and service parameters — including a bare metal server option; single-tenant (private) compute and storage options; granular capacity selections for processing, memory, and network for public cloud units; and all-included technical support.”
IBM VP of Cloud Strategy and Portfolio Management Don Boulia said the award “recognizes the extraordinary range and depth of IBM’s cloud services portfolio.”
Other IBM capabilities Frost & Sullivan cited were its scalable cloud portfolio, extensive connectivity and microservices.
For more, check out Read IT Quik’s full article.
The post IBM wins Frost & Sullivan 2016 Cloud Company of the Year award appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

CloudForms 4.2 Beta 1 (Public)

Welcome to the CloudForms 4.2 Beta 1 release. The beta program will run for a number of weeks starting Halloween 2016.
Please note this is a Beta Blog and therefore should NOT be used to confirm the GA release of this product.
Let’s break down the mega release into various sections of the platform for a quick review:
Providers
VMware vCloud Air/Director
This new provider has been developed in conjunction with XLAB.SI. It delivers the following capabilities:
Inventory

Collect vApps
Collect Datacenters

Events
Event Catcher and Switchboard support
Metrics — Not yet
LifeCycle

Provision vCloud Apps (vApps) from CloudForms Service Catalog and Operations UI

VMware vSphere

New Dashboard for vSphere Provider.
Allow for cluster-only selection — We had a requirement to allow users to select only the cluster, and not specify the host or datastore. So during provisioning on VMware vSphere you can now do this: select only the cluster, and if the cluster supports DRS it will automatically decide a host and datastore on the VMware side of the house.
Provisioning with Storage Profiles — Now you can provision in CloudForms with support for VMware Storage Profiles. VMware Storage Profiles let you assign policies to datastores, such as production or test. In CloudForms we pre-filter the datastore selection based on these profiles.

Red Hat Virtualization

Snapshot Management — Take/Restore from snapshots within RHV.
Disk Management — Connect/Disconnect drives to your virtual machines. Fully supporting VM reconfigure.

Middleware (Hawkular)
Inventory

Clusters
Hosts
Entities
Topology
Applications
Templates
Datasources
Drivers
Deployment status
Cross linking

Dedicated performance reports for Hawkular are also included.
Events

Receive Events
Support for Alert Profiles and automated expressions for middleware servers

Metrics
The Hawkular provider supports live metrics. This means that when you view the charts within CloudForms we grab the live metrics from the server at that time for the following,

Datasources
Transactions
JMS Topics
Queues

Life Cycle Operations

Deploy Application
Upload WAR
Create Datasource(s)
Add JDBC Drivers

OpenStack Cloud

Create/Update/Delete OpenStack Cloud Tenants
Create/Update/Delete Host Aggregates
Take and Remove Snapshots at VM level

OpenStack Infrastructure

New topology view of the Under Cloud
Ironic Controls Added for Hosts

Set as Manageable
Introspect Nodes
Provide Nodes

OpenStack Neutron

Create/Update/Delete Router
Create/Update/Delete Network
Create/Update/Delete Subnet
Inventory of Network Ports

OpenStack Swift
New provider in a new Storage menu. This provider class will be built out in future releases.

Inventory

OpenStack Cinder
New provider in a new Storage menu. This provider class will be built out in future releases.

Inventory
Snapshot Support for Volumes exposed in the UI and Automate.
Create/Restore from Backup exposed in the UI and Automate.

OpenShift Enterprise

View Container Templates
Chargeback for container images — Enabling images to support a fixed cost. This can contribute a base image cost to a variable, utilization-based report for pods and applications.
Chargeback based on container image tags.
Support for Custom Attributes — Now we see the OpenShift labels as custom attributes.
Allow policies to prevent image scans; this is useful if you wish to stop CloudForms from inspecting certain images for security or performance reasons.
Reports: Pods for images per project and Pods per node.
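The fixed-plus-variable chargeback idea above can be sketched in a few lines. This is a rough illustration only, not the CloudForms implementation; the rates and usage figures are invented.

```python
# Hypothetical chargeback sketch: a fixed base cost per container image
# plus variable, utilization-based charges for the pods running it.
def pod_chargeback(image_fixed_cost, cpu_core_hours, mem_gb_hours,
                   cpu_rate=0.02, mem_rate=0.01):
    """The fixed image cost contributes a base amount; the rest varies
    with what the pod actually consumed."""
    variable = cpu_core_hours * cpu_rate + mem_gb_hours * mem_rate
    return image_fixed_cost + variable

# Example: a pod on an image with a 1.50 base cost, consuming
# 10 core-hours of CPU and 20 GB-hours of memory.
total = pod_chargeback(1.50, 10, 20)  # 1.50 + 0.20 + 0.20 = 1.90
```

Tag-based chargeback, also listed above, would simply select which fixed cost or rates apply based on the image's tags.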

Google Cloud

Metrics — CPU, Memory and Network.
Load Balancer Inventory.
Load Balancer Health Checks — Shown in inventory and actionable using automation.
Hide deprecated images from provisioning.
Preemptible Instances — Google’s Preemptible Instances are a low-cost way of getting compute, coming with restrictions such as termination without notice. CloudForms supports the provisioning of these instances.
Retirement Support.

Microsoft Azure

Additional metrics beyond CPU, such as:

Memory
Disk

Chargeback for Fixed, Allocated and Utilized costs for VM resources.
Support for Floating IPs during provisioning.
Load Balancer inventory.

Microsoft SCVMM

Bug fixes.

Amazon EC2
New CloudForms Appliance Image — This means you can now run CloudForms in Amazon EC2 without any other hosting infrastructure required.
User Interfaces
Both

Single Level Proxy Support — This allows users to access the remote console for workloads that may be behind a firewall (e.g. at service providers). You can configure CloudForms to proxy remote console sessions when direct host visibility is not available. This capability is also exposed to Automate.
Notification Drawer — Users can receive both toasts and notifications from any event happening in CloudForms. This means that during provisioning, as various phases are passed, such as approval, quota check, etc., you can notify the user that this has happened. Furthermore, we have enabled this with a helper method in Automate, meaning that any Automate method can emit notifications. The notifications can be read or saved. The drawer holds a history of previous notifications.

Operations UI

Topology viewer added for Infrastructures and Cloud Providers.
New toggle view to switch between classic inventory view and new dashboard view.
Schedule Automate tasks — Run once or recurring.
VM Explorer Trees — A new setting has been introduced and set as default. This setting REMOVES the VMs from the explorer trees, as they caused a substantial performance hit. This setting can be turned back on for smaller environments under My Settings > Services > Workloads > All VMs. The page load time was reduced from 93,770 ms to 524 ms (a 99% improvement) in a test with 20,000 VMs.
Timelines — New Timelines component for the timelines view on VMs, Providers or other objects supporting this feature.

Service UI

New support for Chargeback roll-up data per My Services. Shows $/$$/$$$ costings.
Service Power Operations — You can now Stop/Start/Suspend an entire service composed of multiple VMs.
Confirmation when deleting items from your shopping cart.
Cockpit Integration — Red Hat Enterprise Linux 7.x systems can be managed/configured using the Cockpit server manager interface. CloudForms now allows launching the Cockpit UI in a new window for systems identified as enabled.

Platform
Chargeback

Numerous changes to Chargeback to improve accuracy in results.

Centralized Administrator

This item supports some of our larger installations of CloudForms, where the customer wishes to have one single entry point into CloudForms from any number of regions or zones set up globally. We have supported for some time the notion of a Reporting Region, which allows you to report centrally on any data rolled up from child regions to the parent reporting region. With Centralized Administration you can now not only report, but also start to perform some lifecycle tasks, such as:

VM Power Operations — Start/Stop/Suspend a VM in any region from your central region.
VM Retirement — Retire a VM in any region from your central region.

Tenancy
Tenancy has seen two major changes in this release:
OpenStack and our Tenancy
You can now synchronize the tenants that exist within OpenStack to CloudForms. This means you can, as an administrator, define some simple mapping rules and CloudForms will automatically keep the tenants that exist within the OpenStack providers synchronized to those in CloudForms.
CloudForms Tenancy
Ad-hoc sharing of resources across tenants. This will allow users to select an item in their view and share it with anyone in any other tenant in CloudForms.
Database Maintenance
The results from numerous support surveys show that the database can suffer performance or stability issues when maintenance is not carried out regularly. Therefore we are including in the “black console” menus the ability to configure database maintenance activities.
Database High Availability
We now support PostgreSQL High Availability in the product. The support is for primary to stand-by; you can manually control the swap or use a heartbeat to automatically fail over. The feature is easily enabled using the “Black Console” menu.
Automate

Import Automate Models from a Git Repository

Fully UI configurable and managed.
Post-Commit Hooks — Automatically synchronize the changes to the CloudForms appliances enabled with the Git Server Role.
Tags — Select what is synchronized by tag.
Branches — Select what is synchronized by branch.
Supports certificates.

Schedule Automate tasks — Now you can create tasks that are triggered based on a timed schedule.
Notifications — You can call $evm.create_notification(:message => "my custom message"). We support error levels and subjects too. This allows you to provide feedback directly to your users from Automate. For example, if you have an Automate script that exports, converts and imports a VM from one platform to another, you could notify the user who initiated the task when each phase has completed. Previously the only messaging to the user was email; with notifications you have live feedback through the UI, direct to the user.

Quelle: CloudForms

What you missed at OpenStack Barcelona

The OpenStack Summit in Barcelona was, in some ways, like those that had preceded it — and in other ways, it was very different. As in previous years, the community showed off large customer use cases, but there was something different this year: whereas before it had been mostly early adopters — and the same early adopters, for a time — this year there were talks from new faces with very large use cases, such as Sky TV and Banco Santander.
And why not? Statistically, OpenStack seems to have turned a corner. The semi-annual user survey shows that workloads are no longer just development and testing but actual production, that users are no longer limited to huge corporations but include small and medium-sized businesses, and that containers have gone from an existential threat to a solution to work with, not fight. Concerns about interoperability, meanwhile, seem finally to have been put to rest.
Let’s look at some of the highlights of the week.

It’s traditional to bring large users up on stage during the keynotes, but this year, with users such as Spain’s largest bank, Banco Santander, Britain’s broadcaster Sky UK, the world’s largest particle physics laboratory, CERN, and the world’s largest retailer, Walmart, it did seem more like showing what OpenStack can do than in previous years, when it was more about proving that anybody was actually using it in the first place.
For example, Cambridge’s Dr. Rosie Bolton talked about the SKA radio observatory, which will look at 65,000 frequency channels, consuming and destroying 1.3 zettabytes of data every six hours. The project will run for 50 years and cost over a billion dollars.

This.is.Big.Data @OpenStack   pic.twitter.com/XgT3eEjDVh
— Sean Kerner (@TechJournalist) October 25, 2016

OpenStack Foundation CEO Mark Collier also introduced enhancements to the OpenStack Project Navigator, which provides information on the individual projects and their maturity, corporate diversity, adoption, and so on. The Navigator now includes a Sample Configs section, which provides the projects that are normally used for various use cases, such as web applications, eCommerce, and high throughput computing.
Research from 451 Research
The Foundation also talked about findings from a new 451 Research report that looked at OpenStack adoption and challenges.  
Key findings from the 451 Research include:

Mid-market adoption shows that OpenStack use is not limited to large enterprises. Two-thirds of respondents (65 percent) are in organizations of between 1,000 and 10,000 employees.
OpenStack-powered clouds have moved beyond small-scale deployments. Approximately 72 percent of OpenStack enterprise deployments are between 1,000 to 10,000 cores in size. Additionally, five percent of OpenStack clouds among enterprises top the 100,000 core mark.
OpenStack supports workloads that matter to enterprises, not just test and dev. These include infrastructure services (66 percent), business applications and big data (60 percent and 59 percent, respectively), and web services and ecommerce (57 percent).
OpenStack users can be found in a diverse cross section of industries. While 20 percent cited the technology industry, the majority come from manufacturing (15 percent), retail/hospitality (11 percent), professional services (10 percent), healthcare (7 percent), insurance (6 percent), transportation (5 percent), communications/media (5 percent), wholesale trade (5 percent), energy & utilities (4 percent), education (3 percent), financial services (3 percent) and government (3 percent).
Increasing operational efficiency and accelerating innovation/deployment speed are top business drivers for enterprise adoption of OpenStack, at 76 and 75 percent, respectively. Supporting DevOps is a close second, at 69 percent. Reducing cost and standardizing on OpenStack APIs were close behind, at 50 and 45 percent, respectively.

The report talked about the challenge OpenStack faces from containers in the infrastructure market, but contrary to the notion that more companies were leaning on containers than OpenStack, the report pointed out that OpenStack users are adopting containers at a faster rate than the rest of the enterprise market, with 55 percent of OpenStack users also using containers, compared to just 17 percent across all respondents.
According to Light Reading, “451 Research believes OpenStack will succeed in private cloud and providing orchestration between public cloud and on-premises and hosted OpenStack.”
The Fall 2016 OpenStack User Survey
The OpenStack Summit is also where we hear the results of the semi-annual user survey. In this case, the key findings among OpenStack deployments include:

Seventy-two percent of OpenStack users cite cost savings as their No. 1 business driver.
The Net Promoter Score (NPS) for OpenStack deployments—an indicator of user satisfaction—continues to tick up, eight points higher than a year ago.
Containers continues to lead the list of emerging technologies, as it has for three consecutive survey cycles. In the same question, interest in NFV and bare metal is significantly higher than a year ago.
Kubernetes shows growth as a container orchestration tool.
Seventy-one percent of deployments catalogued are in “production” versus in testing or proof of concept. This is a 20 percent increase year over year.
OpenStack is adopted by companies of every size. Nearly one-quarter of users are organizations smaller than 100 people.

New this year is the ability to explore the full data, rather than just relying on highlights.
Community announcements
Also announced during the keynotes were new Foundation Gold members, the winner of the SuperUser award, and progress on the Foundation’s Certified OpenStack Administrator exam.
The OpenStack Foundation charter allows for 24 Gold member companies, who elect 8 Board Directors to represent them all.  (The other members include one each chosen by the 8 Platinum member companies, and 8 individual directors elected by the community at large.) Gold member companies must be approved by existing board members, and this time around City Network, Deutsche Telekom, 99Cloud and China Mobile were added.
China Mobile was also given the Superuser award, which honors a company’s commitment to and use of OpenStack.
Meanwhile, in Austin, the Foundation announced the Certified OpenStack Administrator exam, and in the past six months, 500 individuals have taken advantage of the opportunity.
And then there were the demos…
While demos used to be simply to show how the software works, that now seems to be a given, and instead demos were done to tackle serious issues. For example, Network Functions Virtualization is a huge subject for OpenStack users — in fact, 86% of telcos say OpenStack will be essential to their adoption of the technology — but what is it, exactly? Mark Collier and representatives of the OPNFV and Vitrage projects were able to demonstrate how OpenStack applies in this case, showing how a High Availability Virtual Network Function (VNF) enables the system to keep a mobile phone call from disconnecting even if a cable or two is cut. (In this case, literally, as Mark Collier wielded a comically huge pair of scissors against the hardware.)
But perhaps the demo that got the most attention wasn’t so much a demo as a challenge. One of the criticisms constantly levied against OpenStack is that there’s no “vanilla” version — that despite the claims of freedom from lock-in, each distribution of OpenStack is so different from the others that it’s impossible to move an application from one distro to another.
To fight that charge, the OpenStack community has been developing RefStack, a series of tests that a distro must pass in order to be considered “OpenStack”. But beyond that, IBM issued the “Interoperability Challenge,” which required teams to take a standard deployment tool — in this case, based on Ansible — and use it, unmodified, to create a WordPress-hosting LAMP stack.
In the end, 18 companies joined the challenge, and 16 of them appeared on stage to simultaneously take part.
So the question remained: would it work? See for yourself:

Coming up next
So the next OpenStack Summit will be in Boston, May 8-12, 2017. For the first time, however, it won't include the OpenStack Design Summit, which will be replaced by a separate Project Teams Gathering, so it's likely to once again have a different feel and flavor as the community, and the OpenStack industry, grows.
The post What you missed at OpenStack Barcelona appeared first on Mirantis | The Pure Play OpenStack Company.
Quelle: Mirantis

5 Tales from the Docker Crypt

(Cue the Halloween music)
Welcome to my crypt. This is the crypt keeper speaking and I’ll be your spirit guide on your journey through the dangerous and frightening world of IT applications. Today you will learn about 5 spooky application stories covering everything from cobweb-covered legacy processes to shattered CI/CD pipelines. As these stories unfold, you will hear how Docker helped banish cost, complexity and chaos.
Tale 1 – “Demo Demons”
Splunk was on a mission to enable their employees and partners to deliver demos of their software consistently, regardless of where they’re located in the world. These business-critical demos include everything from Splunk security to web analytics and IT service intelligence. This vision proved to be quite complex to execute: at times their SEs would be in customer meetings and their demos would fail. They needed to ensure that each of the 30 production demos within their Splunk Oxygen demo platform could live forever in eternal greatness.
To ensure their demos were working smoothly with their customers, Splunk uses Docker Datacenter, our on-premises solution that brings container management and deployment services to the enterprise via an integrated platform. Images are stored within the on-premises Docker Trusted Registry and are connected to their Active Directory server so that users have the correct role-based access to the images. These images are accessible to authenticated users outside the corporate firewall. Their sales engineers can now pull the images from DTR and give the demo offline, ensuring that anyone who goes out and represents the Splunk brand can demo without demise.
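A setup along these lines might be sketched with a small Compose file; the registry hostname, repository name, and tag below are illustrative placeholders, not Splunk's actual values:

```yaml
# docker-compose.yml -- hypothetical sketch of a demo service whose image
# lives in Docker Trusted Registry; users `docker login` with their
# Active Directory credentials before pulling
version: "2"
services:
  demo:
    # image hosted in an on-premises DTR; hostname and repo are assumptions
    image: dtr.example.com/demos/splunk-oxygen:latest
    ports:
      - "8000:8000"   # expose the demo UI locally
```

Once a sales engineer has authenticated and pulled the image, the demo can be brought up locally with no further connection to the registry.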
Tale 2 – “Monster Maintenance”
Cornell University’s IT team was spending too many resources taking care of their installation of Confluence. The team spent 1,770 hours maintaining applications over a six-month period and needed immutable infrastructure that could be easily torn down once processes were complete. Portability across their application lifecycle, which included everything from development to production, was also a challenge.
With a Docker Datacenter (DDC) commercial subscription from Docker, they now host their Docker images in a central location, allowing multiple organizations to access them securely. Docker Trusted Registry provides high availability via DTR replicas, ensuring that their Dockerized apps are continuously available even if a node fails. With Docker, they experience a 10X reduction in maintenance time. Additionally, the portability of Docker containers helps their workloads move across multiple environments, streamlining their application development and deployment processes. The team is now able to deploy applications 13X faster than in the past by leveraging reusable architecture patterns and simplified build and deployment processes.
Tale 3 – “Managing Menacing Monoliths and Microservices!”
SA Home Loans, a mortgage firm located in South Africa, was experiencing slow application deployment speeds. It took them two weeks just to get their newly developed applications over to their testing environment, slowing innovation. These issues extended to production as well. Their main home loan servicing software, a mixture of monolithic Windows services and IIS applications, was complex and difficult to update, placing a strain on the business. Even scarier: when they deployed new features or fixes, they didn’t have an easy or reliable rollback plan if something went wrong (no blue/green deployment). In addition, the company decided to adopt a microservices architecture, and soon realized that upon completion of this project they’d have over 50 separate services across their Dockerized nodes in production! Orchestration now presented itself as a new challenge.
To solve their issues, SA Home Loans trusts in Docker Datacenter. SA Home Loans can now deploy apps 30 times more often! The solution also provides the production-ready container orchestration they were looking for. Since DDC has swarm embedded within it, it shares the Docker Engine APIs, making it one less complex thing to learn. Docker Datacenter gives the ops team an easy-to-use, familiar frontend.
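Swarm-based orchestration of this kind is typically expressed declaratively. As a rough sketch, assuming a swarm-mode cluster and a Compose file format recent enough to support the `deploy` key, a replicated service with safe rolling updates might look like this (the service name and image are hypothetical):

```yaml
# stack.yml -- illustrative sketch only; registry, image, and replica
# counts are assumptions, not SA Home Loans' actual configuration
version: "3"
services:
  loan-api:
    image: registry.example.com/sahomeloans/loan-api:1.4.2
    deploy:
      replicas: 3              # run three instances across the cluster
      update_config:
        parallelism: 1         # replace one task at a time
        delay: 10s             # pause between task replacements
      restart_policy:
        condition: on-failure  # reschedule failed tasks automatically
```

Rolling one task at a time, with a delay in between, is what makes reliable rollbacks and blue/green-style deployments practical.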
 
Tale 4 – “Unearthly Labor”
USDA’s legacy website platform consisted of seven manually managed monolithic application servers built with traditional, labor-intensive techniques that required expensive resources. Their systems administrators had to SSH into individual systems, deploying updates and configuration one by one. USDA discovered that this approach lacked the flexibility and scalability to provide the services necessary for supporting their large number of diverse apps built with PHP, Ruby, and Java – namely Drupal, Jekyll, and Jira. A different approach would be required to fulfill the shared platform goals of USDA.
USDA now uses Docker and has expedited their project and modernized their entire development process. In just 5 weeks, they launched four government websites to production on their new Dockerized platform. Later, an additional four websites were launched, including one for the First Lady, Michelle Obama, without any additional hardware costs. By using Docker, the USDA saved upwards of $150,000 in technology infrastructure costs alone. Because they could leverage a shared infrastructure model, they were also able to reduce labor costs. Using Docker provided the USDA with the agility needed to develop, test, secure, and even deploy modern software in a high-security federal government datacenter environment.
Tale 5 – “An Apparition of CI/CD”
Healthdirect dubbed their original application development process “anti CI/CD” because it was broken and made it difficult to create a secure end-to-end CI/CD pipeline. They had a CI/CD process for the infrastructure team, but were unable to repeat the process across multiple business units. The team wanted repeatability but lacked the ability to deploy their apps with 100% hands-off automation.
Today Healthdirect is using Docker Datacenter. Now their developers are empowered in the release process, and the code developed locally ships to production without changes. With Docker, Healthdirect was able to innovate faster and deploy their applications to production with ease.
So there they are: 5 spooky tales for you on this Halloween day. To learn more about Docker Datacenter, check out this demo.
Now, be gone from my crypt. It’s time for me to retire back to my coffin.
Oh and one more thing….Happy Halloween!!
For more resources:

Hear from Docker customers
Learn more about Docker Datacenter
Sign up for your 30 day free evaluation of Docker Datacenter

 


The post 5 Tales from the Docker Crypt appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Considerations for Running Docker for Windows Server 2016 with Hyper-V VMs

We often get asked at Docker, “Where should I run my application? On bare metal, virtual or cloud?” The beauty of Docker is that you can run an application anywhere, so we usually answer this question with “It depends.” Not what you were looking for, right?
To answer this, you first need to consider which infrastructure makes the most sense for your application architecture and business goals. We get this question so often that our technical evangelist, Mike Coleman has written a few blogs to provide some guidance:

To Use Physical Or To Use Virtual: That Is The Container Deployment Question
So, When Do You Use A Container Or VM?

During our recent webinar, titled “Docker for Windows Server 2016”, this question came up a lot, specifically what to consider when deploying a Windows Server 2016 application in a Hyper-V VM with Docker and how it works. First, you’ll need to understand the differences between Windows Server containers, Hyper-V containers, and Hyper-V VMs before considering how they work together.
A Hyper-V container is a Windows Server container running inside a stripped down Hyper-V VM that is only instantiated for containers.

This provides additional kernel isolation and separation from the host OS used by the containerized application. Hyper-V containers automatically create a Hyper-V VM from the application’s base image, and that VM includes the required application binaries and libraries inside the Windows container. For more information on Windows containers, read our blog. Whether your application runs as a Windows Server container or as a Hyper-V container is a runtime decision. Additional isolation is a good option for multi-tenant environments. No changes are required to the Dockerfile or image; the same image can be run in either mode.
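Because the isolation mode is a runtime decision, it can be expressed entirely outside the image. A minimal sketch, assuming a Windows host and a Compose file format recent enough to support the per-service `isolation` key:

```yaml
# docker-compose.yml -- minimal sketch; assumes a Windows Server 2016
# host with Hyper-V enabled; the image is illustrative
version: "2.1"
services:
  app:
    image: microsoft/iis      # the same image works in either mode
    isolation: hyperv         # or "process" for a Windows Server container
    ports:
      - "80:80"
```

The CLI equivalent is `docker run --isolation=hyperv`; switching the value back to `process` gives a Windows Server container, with no change to the Dockerfile or image.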
Here are the top Hyper-V container questions, with answers:
Q: I thought that containers do not need a hypervisor?
A: Correct, but since a Hyper-V container packages the same container image with its own dedicated kernel, it ensures tighter isolation in multi-tenant environments, which may be a business or application requirement for specific Windows Server 2016 applications.
Q: ­Do you need a hypervisor layer before the OS in both Hyper-V and Docker for Windows Server containers?
A: The hypervisor is optional. With Windows Server containers, isolation is achieved not with a hypervisor, but with process isolation and filesystem and registry sandboxing.
Q: Can the Hyper-V containers be managed from the Hyper-V Manager, in the same way that the VMs are (i.e., turned on/off, check memory usage, etc.)?
A: While Hyper-V is the runtime technology powering Hyper-V isolation, Hyper-V containers are not VMs; they neither appear as a Hyper-V resource nor can they be managed with classic Hyper-V tools like Hyper-V Manager. Hyper-V containers are only executed at runtime by the Docker Engine.
Q: Can you run Windows Server containers and Hyper-V containers running Linux workloads on the same host?
A: Yes. You can run a Hyper-V VM with a Linux OS on a physical host running Windows Server.  Inside the VM, you can run containers built with Linux.

Next week we’ll bring you the next blog in our Windows Server 2016 Q&A series – top questions about Docker for SQL Server Express. See you then.
For more resources:

Learn more: www.docker.com/microsoft
Read the blog: Webinar Recap: Docker For Windows Server 2016
Learn how to get started with Docker for Windows Server 2016
Read the blog to get started shifting a legacy Windows virtual machine to a Windows Container


The post Considerations for Running Docker for Windows Server 2016 with Hyper-V VMs appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Fuel plugins: Getting Started with Tesora DBaaS Enterprise Edition and Mirantis OpenStack

The Tesora Database as a Service platform is an enterprise-hardened version of OpenStack Trove, offering secure private cloud access to the most popular open source and commercial databases through a single consistent interface.
In this guide we will show you how to install Tesora in a Mirantis OpenStack environment.
Prerequisites
In order to deploy Tesora DBaaS, you will need a Fuel server with the Tesora plugin installed. Start by making sure you have:

A Fuel server up and running. (See the Quick Start for instructions if necessary.)
Discovered nodes for controllers, compute and storage
A discovered node dedicated to the Tesora controller.

Now let&8217;s go ahead and add the plugin to Fuel.
Step 1 Adding the Tesora Plugin to Fuel
To add the Tesora plugin to Fuel, follow these steps:

Download the Tesora plugin from the Mirantis Plugin page, located at:

https://www.mirantis.com/validated-solution-integrations/fuel-plugins/

Once you have downloaded the plugin, copy the plugin file to your Fuel Server using the scp command, as in:
$scp tesora-dbaas-1.7-1.7.7-1.noarch.rpm root@[fuel server ip]:/tmp

After copying the plugin to the Fuel server, add it to the Fuel plugin list. First, ssh to the Fuel server:
$ssh root@[fuel server ip]

Next, add the plugin to Fuel:
[root@fuel ~]# fuel plugins --install tesora-dbaas-1.7-1.7.7-1.noarch.rpm

Finally, verify that the plugin has been added to Fuel:
[root@fuel ~]# fuel plugins
id | name                     | version | package_version
---|--------------------------|---------|----------------
1  | fuel-plugin-tesora-dbaas | 1.7.7   | 4.0.0

If the plugin was successfully added, you should see it listed in the output.
Step 2 Add Tesora DBaaS to an OpenStack Environment
From here, it's a matter of creating an OpenStack cluster that uses the new plugin. You can do that by following these steps:

Connect to the Fuel UI and log in with the admin credentials using your browser.
Create a new OpenStack environment. Follow the prompts and either leave the defaults or alter them to suit your environment.
Before adding new nodes, enter the environment, select the Settings tab, and then select Other on the left-hand side of the window.
Select Tesora DBaaS Platform and enter the username and password supplied to you by Tesora. The username and password will be used to download the database images provided by Tesora to the Tesora DBaaS controller. Finish by typing "I Agree" to show that you agree to the Terms of Use and click Save Settings.
Now create your environment by assigning nodes to the following roles:

Compute
Storage
Controller
Tesora DBaaS Controller

As shown in the image below:

After you have finished assigning the roles, go ahead and deploy the environment.

Step 3 Importing the Database image files to the Tesora DBaaS Controller
Once the environment is built, it's time to import the database images.

From the Fuel server, SSH to the Tesora DBaaS controller. You can find the IP address of the Tesora DBaaS controller by entering the following command:
[root@fuel ~]# sudo fuel node list | grep tesora
9  | ready  | Untitled (61:ef) | 4       | 10.20.0.6 | 08:00:27:a3:61:ef | tesora-dbaas    |               | True   | 4

After identifying the IP address, ssh from the Fuel server to the Tesora DBaaS controller:
[root@fuel ~]# sudo ssh root@10.20.0.6
Warning: Permanently added ‘10.20.0.6’ (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-93-generic x86_64)
* Documentation:  https://help.ubuntu.com/
Last login: Wed Aug 10 23:51:29 2016 from 10.20.0.2

Next, load the pre-built database images. After logging into the DBaaS controller, change your working directory to /opt/tesora/dbaas/bin:
root@node-9:~# cd /opt/tesora/dbaas/bin

Now source your Tesora environment variables:
root@node-9:/opt/tesora/dbaas/bin# source openrc.sh

After setting your variables, you can now import your database images with the following command:
root@node-9:/opt/tesora/dbaas/bin# ./add-datastore.sh mysql 5.6
Installing guest 'tesora-ubuntu-trusty-mysql-5.6-EE-1.7'

Above is an example of loading MySQL version 5.6. The format of the command is:
add-datastore.sh DBtype version

To get a list of the available databases and versions, please see the link below:

https://tesoradocs.atlassian.net/wiki/display/EE17CE16/Import+Datastore

Once you have imported your database images, it's time to go to Horizon.
Step 4 Create and Access a Database Instance
Now you can go ahead and create the actual database. Log into your Horizon dashboard from within Fuel. On the left-hand side, click Tesora Databases.

From here, you have the following options:

Instances: This option enables you to create, delete and display any database instances that are currently running.
Clusters: This option enables you to create and manage a clustered database environment.
Backups: This option enables you to create or view backups of any currently running databases.
Datastores: This option lists all databases that have been imported.
Configuration Groups: This option enables you to manage database configuration tasks by using configuration groups, which make it possible to set configuration parameters, in bulk, on one or more databases.

At this point Tesora DBaaS should be up and running, enabling you to deploy, configure and manage databases in your environment.
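Because Trove exposes a standard OpenStack API, database instances can also be declared in a Heat template rather than created by hand in Horizon. A minimal sketch, in which the flavor name, volume size, and credentials are assumptions for your environment:

```yaml
# trove-db.yaml -- illustrative Heat template; flavor, size, and
# credentials below are placeholders for your environment
heat_template_version: 2015-04-30

description: Single MySQL 5.6 instance via Tesora DBaaS (Trove)

resources:
  demo_db:
    type: OS::Trove::Instance
    properties:
      name: demo-mysql
      flavor: m1.small          # must exist in your environment
      size: 5                   # volume size in GB
      datastore_type: mysql
      datastore_version: "5.6"  # must match an imported datastore
      databases:
        - name: demo
      users:
        - name: demo_user
          password: change-me
          databases: [demo]
```

Launching the stack with `heat stack-create -f trove-db.yaml demo-db` would then provision the database alongside the rest of your infrastructure code.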
The post Fuel plugins: Getting Started with Tesora DBaaS Enterprise Edition and Mirantis OpenStack appeared first on Mirantis | The Pure Play OpenStack Company.
Quelle: Mirantis

Tieto’s path to containerized OpenStack, or How I learned to stop worrying and love containers

Tieto is a major cloud service provider in Northern Europe, with over 150 cloud customers in the region and revenues in the neighborhood of €1.5 billion (with a "b"). So when the company decided to take the leap into OpenStack, it was a decision that wasn't taken lightly, or without very strict requirements.
Now, we've been talking a lot about containerized OpenStack here at Mirantis lately, and at the OpenStack Summit in Barcelona, our Director of Product Engineering will join Tieto's Cloud Architect Lukáš Kubín to explain the company's journey from a traditional architecture to a fully adaptable cloud infrastructure. So we wanted to take a moment and ask the question:
How does a company decide that containerized OpenStack is a good idea?
What Tieto wanted
At its heart, Tieto wanted to deliver a bimodal multicloud solution that would help customers digitize their businesses. In order to do that, it needed an infrastructure in which it could have confidence, and OpenStack was chosen as the platform for cloud-native application delivery. The company had the following goals:

Remove vendor lock-in
Achieve the elasticity of a seamless on-demand capacity fulfillment
Rely on robust automation and orchestration
Adopt innovative open source solutions
Implement Infrastructure as Code

It was this last item, implementing Infrastructure as Code, that was perhaps the biggest challenge from an OpenStack standpoint.
Where we started
In fact, Tieto had been working with OpenStack since 2013, evaluating OpenStack Havana and Icehouse using internal software development projects; at that time, the target architecture included Neutron and Open vSwitch.
By 2015, the company was providing scale-up focused IaaS cloud offerings and unique application-focused PaaS services, but what was lacking was a shared platform with full API controlled infrastructure for horizontally scalable workload.
Finally, this year, the company announced its OpenStack Cloud offering, based on the OpenStack distribution of tcp cloud (now part of Mirantis), and OpenContrail rather than Open vSwitch.
Why OpenContrail? The company cited several reasons:

Licensing: OpenContrail is an open source solution, but commercial support is available from vendors such as Mirantis.
High Availability: OpenContrail includes native HA support.
Cloud gateway routing: North-South traffic must be routed on physical edge routers instead of software gateways to work with existing solutions.
Performance: OpenContrail provides excellent pps, bandwidth, scalability, and so on (up to 9.6 Gbps)
Interconnection between SDN and Fabric: OpenContrail supports dynamic legacy connections through EVPN or ToR switches.
Containers: OpenContrail includes support for containers, making it possible to use one networking framework for multiple environments.

Once completed, the Tieto Proof of Concept cloud included:

OpenContrail 2.21
20 compute nodes
Glance and Cinder running on Ceph
Heat orchestration

Tieto had achieved Infrastructure as Code, in that deployment and operations were controlled through OpenStack Salt formulas. This architecture enabled the company to use DevOps principles, in that they could use declarative configurations that could be stored in a repository and re-used as necessary.
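To give a feel for what such a declarative configuration might look like, here is a hypothetical reclass-style fragment of the kind used with OpenStack-Salt; the class names and parameter values are illustrative only, not Tieto's actual model:

```yaml
# classes/cluster/example/openstack/control.yml -- hypothetical reclass
# fragment; classes and parameters here are illustrative placeholders
classes:
  - system.keystone.server.cluster    # reusable formula for Keystone HA
  - system.glance.control.cluster     # reusable formula for Glance HA
parameters:
  _param:
    openstack_version: mitaka
    cluster_vip_address: 172.16.10.254
  keystone:
    server:
      admin_email: admin@example.com
```

Because the whole model lives in a Git repository, the same declarative description can be reviewed, versioned, and re-applied to rebuild or extend the environment.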
What's more, the company had an architecture that worked, and that included commercial support for OpenContrail (through Mirantis).
But there was still something missing.
What was missing
With operations support and Infrastructure as Code, Tieto's OpenStack Cloud was already beyond what many deployments ever achieve, but it still wasn't as straightforward as the company would have liked.
As designed, the OpenStack architecture consisted of almost two dozen VMs on at least 3 physical KVM nodes, and that was just the control plane!

As you might imagine, trying to keep all of those VMs up to date through operating system updates and other changes made operations more complex than it needed to be. Any time an update needed to be applied, it had to be applied to each and every VM. Sure, that process was easier because of the DevOps advantages introduced by the OpenStack-Salt formulas that were already in the repository, but that was still an awful lot of moving parts.
There had to be a better way.
How to meet that challenge
That "better way" involves treating OpenStack as a containerized application in order to take advantage of the efficiencies this architecture enables, including:

Easier operations, because each service no longer has its own VM, with its own operating system to worry about
Better reliability and easier manageability, because containers and Dockerfiles can be tested as part of a CI/CD workflow
Easier upgrades, because once OpenStack has been converted to a microservices architecture, it's much easier to simply replace one service
Better performance and scalability, because the containerized OpenStack services can be orchestrated by a tool such as Kubernetes.
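To illustrate the idea, a containerized OpenStack service might be described to Kubernetes with a Deployment manifest like the following sketch; the image name and replica count are assumptions, not Tieto's actual configuration:

```yaml
# nova-api-deployment.yaml -- illustrative sketch; image and replicas
# are placeholders, not a real deployment's values
apiVersion: extensions/v1beta1   # Deployment API group circa Kubernetes 1.4
kind: Deployment
metadata:
  name: nova-api
spec:
  replicas: 2                    # scale the service, not a whole VM
  template:
    metadata:
      labels:
        app: nova-api
    spec:
      containers:
        - name: nova-api
          image: registry.example.com/openstack/nova-api:mitaka
          ports:
            - containerPort: 8774   # Nova compute API endpoint
```

Upgrading the service then becomes a matter of changing the image tag and letting the orchestrator roll out the new containers, instead of patching the operating system on a fleet of VMs.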

So that's the "why". But what about the "how"? Well, that's a tale for another day, but if you'll be in Barcelona, join us at 12:15pm on Wednesday to get the full story and maybe even see a demo of the new system in action!
The post Tieto's path to containerized OpenStack, or How I learned to stop worrying and love containers appeared first on Mirantis | The Pure Play OpenStack Company.
Quelle: Mirantis