Cloud computing: More than just storage

While the term “cloud computing” may seem nebulous, it actually has a very simple definition, according to PC Magazine: “storing and accessing data and programs over the internet.”
Most people use some form of cloud computing already, such as a student using Google Docs to work on a paper with a classmate or anyone who accesses their email from the web instead of an application. In other words, the information for the paper or the email doesn’t exist on the computer’s hardware; it’s stored in the provider’s cloud. Personal storage now requires less effort.
But cloud computing isn’t just for consumers. It’s also revolutionizing the way businesses are, well, doing business.
After choosing from one of the top providers, including IBM, businesses can select from three different service models of cloud computing: infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS). These services are aptly named. With IaaS, the provider gives the business an entire infrastructure to work with. PaaS enables businesses to create their own custom applications that they can disperse via the cloud. SaaS provides the business with one or more cloud-based software applications to use for their purposes.
At first glance, these services seem fairly straightforward. However, there are a number of definitive advantages to cloud computing:

It shifts the burden of maintenance and responsibility from the business to the provider. Local computers no longer require the capacity to run all the applications, and the company no longer needs a “whole team of experts to install, configure, test, run, secure, and update [the applications],” according to Salesforce. These differences bring both hardware and IT costs down, as local computers no longer need vast amounts of memory or cutting-edge processing power; they simply must be able to connect to the cloud system.
Costs come down further by removing the need for on-site storage. With these remote storage services, companies no longer need to rent or buy physical space and facilities or purchase expensive equipment to house servers and databases, as this HowStuffWorks report points out.
Cloud also offers extreme flexibility. Cloud services are pay per use, and can be adjusted easily to accommodate a company’s needs. SkyHighNetworks provides a good example: “A sales promotion might be wildly popular, and capacity can be added quickly to avoid crashing servers and losing sales. When the sale is over, capacity can shrink to reduce costs.”
Employees across the business can access data and files at any time. Work doesn’t get trapped on users’ hard disks or flash drives, and the information accessed is always the most relevant and up to date. This allows for seamless collaboration across geographies and time zones, which is transformative for multinational companies with offices all around the world.

Like other game-changing technological advances, cloud computing is here to stay.
The post Cloud computing: More than just storage appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Service Catalogs and the User Self-Service Portal

One of the most interesting features of CloudForms is the ability to define services that can include one or more virtual machines (VMs) or instances and can be deployed across hybrid environments. Services can be made available to users through a self-service portal that allows users to order predefined IT services without IT operations getting involved, thereby delivering on one of the major promises of cloud computing.
The intention of this post is to provide you with step-by-step instructions to get you started with a simple service catalog. After you have gone through the basic concepts, you should have the skills to dive deeper into more complex setups.

Getting started with Service Catalogs
Let’s set the stage for this post: You added your Amazon Web Services (AWS) account to CloudForms as a cloud provider. Your AWS account includes a Red Hat Enterprise Linux (RHEL) image ready to use. Now you want to give your users the ability to deploy RHEL instances on AWS but you want to limit or predefine most of the options they could choose when deploying these instances.
Service Basics
Four items are required to make a service available to users from the CloudForms self-service portal:

A Provisioning Dialog, which presents the basic configuration options for a VM or instance.
A Service Dialog, where you allow users to configure VM or instance options.
A Service Catalog, which is used to group Catalog Items together.
A Service Catalog Item (i.e. the actual Service), which joins a Service Dialog with a Provisioning Dialog.

Provisioning Dialogs
To work with services in CloudForms it is important to understand the concept of Provisioning Dialogs. When you begin the process of provisioning a VM or instance via CloudForms, you are presented with a Provisioning Dialog where you set certain options for the VM or instance. The options presented are dependent on the provider you are using. For instance, a cloud provider might have “flavors” of instances, whereas an infrastructure provider might allow you to set the memory size or number of CPUs on a VM.
Every provider in CloudForms comes with a sample provisioning dialog covering the options specific to that provider. To have a look at some sample Provisioning Dialogs, go to Automate > Customization > Provisioning Dialogs > VM Provision and select “Sample Provisioning Dialogs”. This is a textual representation of the dialog you will get when you provision a VM or instance.
For this post, we need to make sure instance provisioning to AWS is working, so go to Compute > Clouds > Instances and create a new AWS instance by choosing “Provision Instances” from the “Lifecycle” drop-down. Select the image you are going to use, click “Continue” and walk through the Provisioning Dialog.

Service Dialogs
A Service Dialog determines which options the users get to change. The choice of options that are presented to the user is up to you. You could just give them the option to set the service name, or you could have them change all of the Provisioning Dialog options. You have to create a Service Dialog to define the options users are allowed to see and set. To help with creating a Service Dialog, CloudForms includes a simple form designer.
Anatomy of a Service Dialog
A Service Dialog contains three components:

One or more “Tabs”
Inside the “Tabs”, one or more “Boxes”
Inside the “Boxes”, one or more “Elements”
The “Elements” contain input controls, like check boxes, drop-down lists or text fields, to fill in the options on the Provisioning Dialog. Here is the most important part: the names of the Elements have to correspond to the options used in the Provisioning Dialog!

What are the Element Names?
Very good question. As mentioned, the options and values we provide in the Service Dialog must match those used in the Provisioning Dialog. There are some rather generic names like “vm_name” or “service_name”, while others might be specific to the provider in question.
So how do you find the options and values you can pass in a Service Dialog? The easiest way is to look at the Provisioning Dialog. In this case, for our Amazon EC2 instance:

As an administrator, go to Automate > Customization
Open the “Provisioning Dialogs” accordion and locate the “VM Provision” folder
Find the appropriate dialog, “Sample Amazon Instance Provisioning Dialog”
Now you can use your browser’s search capabilities to find options and their potential values. For practice, search for “vm_name”.
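The matching rule from the previous section can be expressed as a simple set check. Here is a minimal Python sketch of the idea; the provisioning option names below are an illustrative subset, not the full contents of the sample dialog:

```python
# Service Dialog element names must match option names used in the
# Provisioning Dialog. Illustrative subset of option names only.
provisioning_options = {"service_name", "vm_name", "instance_type",
                        "guest_access_key_pair", "root_password"}

# The two elements we plan to create in our Service Dialog.
dialog_elements = {"service_name", "vm_name"}

unknown = dialog_elements - provisioning_options
if unknown:
    raise ValueError(f"Elements with no matching provisioning option: {unknown}")
print("All element names match")  # → All element names match
```

If an element were named, say, “instance_name” instead of “vm_name”, the check would fail; in CloudForms, the value entered by the user would simply never reach the Provisioning Dialog.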

Creating a Service Dialog
Enough theory, let’s dive in and create our first simple Service Dialog. The Service Dialog should let users choose a service and instance name for an AWS instance.

As an administrator, go to Automate > Customization
Open the “Service Dialogs” accordion. You will find two example Service Dialogs.
Add a new Service Dialog: Configuration > Add a new Dialog
Type “aws_single_rhel7_instance” into the Label field; this will be the name of the Service Dialog in CloudForms. Add a description if you want; it is not mandatory, but it is good practice.
For Buttons, check “Submit” and “Cancel”.

From this starting point, you can now add content to the Dialog:

From the drop-down with the “+” sign choose “Add a new Tab to this Dialog”.

For Label, use “instance_settings”; for Description, use “Instance Settings”.
With the “instance_settings” Tab selected, choose “Add a new Box to this Tab” from the “+” drop-down.
Give the new Box a Label and Description of “Instance and Service Name”.
From the “+” drop-down choose “Add a new Element to this Box”.
Fill in Label and Description with “Service Name” and Name with “service_name”.
For the Type, choose “Text Box” with Value Type “String”.

Following the same procedure, add a second Element to the Box. The Name field should be “vm_name” and the Label and Description fields should be “Instance Name”. Similarly, Type should be “Text Box” with Value Type “String”.

That’s it! Now you can finally hit the “Add” button at the lower right corner.
Create a Catalog
Now that you have created your Service Dialog, we can add it to a Service Catalog by creating its associated Catalog Item.
First, we will create a Catalog:

Go to Services > Catalogs and expand the “Catalogs” accordion.
Select the “All Catalogs” folder and click Configuration > Add a new Catalog.
For Name and Description fill in “Amazon EC2”.
We will assign Catalog Items to this Catalog later.

Create a Catalog Item
Now we have the Catalog without any content, the Service Dialog, and the Provisioning Dialog. To allow users to order the service from the self-service catalog, we have to create a Catalog Item. Let’s create a Catalog Item to order a RHEL instance using our Service Dialog:

Go to Services > Catalogs and expand the “Catalog Items” accordion.
Select the “Amazon EC2” catalog and click Configuration > Add a new Catalog Item.
From the “Catalog Item Type” drop-down select “Amazon”.
For Name and Description use “RHEL Instance” and check the box labelled “Display in Catalog”.
From the “Catalog” drop-down choose “Amazon EC2”.
From the “Dialog” drop-down choose “aws_single_rhel7_instance”. This is the Service Dialog you created earlier.
The three fields below point to methods used when provisioning/reconfiguring or retiring the service. For now, just configure these to use built-in methods as follows:

Click into the “Provisioning Entry Point State Machine” field; you will be taken to the Datastore Explorer.
Under the “ManageIQ” subtree, navigate to the following method and hit “Apply”: “/Service/Provisioning/StateMachines/ServiceProvision_Template/CatalogItemInitialization”
Click into the “Retirement Entry Point State Machine” field, navigate to this method and hit “Apply”: “/Service/Retirement/StateMachines/ServiceRetirement/Default”

Switch to the “Details” tab. In real life you would put a detailed description of your Service here. You could use HTML for better formatting, but for the purpose of this post “Single Amazon EC2 instance” will do.
Switch to the “Request Info” tab. Here you preset all of the options from the Provisioning Dialog. (Remember that the user is only allowed to set the Service Name and Instance Name options via the Service Dialog):

On the “Catalog” tab, set the image Name to your AWS image name (“rhel7” in this case) and the Instance Name to “changeme”.

On the “Properties” tab set the Instance Type to “T2 Micro”. If you ever plan to access the instance you should of course select a “Guest Access Key Pair”, too.

On the “Customize” tab set the Root Password, and in Customize Template choose the “Basic root pass template” as a script for cloud-init.

Click Add at the bottom right.

As you can see, your new Catalog Item is listed with a generic icon. Let’s change this by uploading an icon in the “Custom Image” section. You can pick any image you like.
Recap, or “What have we done so far?”
We created a Provisioning Dialog that defines the options that can be set on a VM or instance. We created a Service Dialog which allows us to expose certain options to be set by the user. For our example, only the instance name and service name are configurable. Then we created a Service Catalog and finally a Catalog Item. The Catalog Item joins the Service Dialog with all of the options in the Provisioning Dialog. Now, users should be able to order a RHEL instance from the self-service catalog.
Let’s Order a RHEL Instance
To order your new service:

Access the self-service portal at https://<your_cf_appliance>/self_service. You will be greeted by the self-service dashboard.
Select “Service Catalog” on the menu bar.

You should now see your service. Select it and you will be taken to the form you have defined in your Service Dialog:

Fill in the “Service Name” and “Instance Name” fields. Recall that these are the only two options that you made available to users in your Service Dialog.
Click “Add to Shopping Cart” and access the “Shopping Cart” by clicking the icon on the top right (there should now be a number on it).
Click “Order”. You have created a new provisioning request. You can follow the request by selecting “My Requests” from the menu bar and selecting the specific request to see its progression and details.

Once the “Request State” is shown as “finished”, your AWS instance is provisioned.
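For completeness: ordering does not have to go through the portal. Catalog items can also be ordered through the CloudForms/ManageIQ REST API. The Python sketch below only builds the request body; the href is hypothetical and the dialog key names mirror the element names created earlier, but exact key naming can vary by version, so check the API documentation for your release:

```python
import json

def build_order_request(template_href, service_name, vm_name):
    """Build the body for ordering a catalog item via the CloudForms/ManageIQ
    REST API. The dialog keys mirror the Service Dialog element names created
    earlier; treat the exact key naming as an assumption for your version."""
    return {
        "action": "order",
        "resource": {
            "href": template_href,
            "service_name": service_name,
            "vm_name": vm_name,
        },
    }

body = build_order_request(
    "https://cf.example.com/api/service_templates/1",  # hypothetical href
    "my-rhel-service",
    "rhel7-demo01",
)
print(json.dumps(body, indent=2))
```

The body would then be POSTed, with your credentials, to the service catalog's service_templates sub-collection on the appliance.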
Conclusion
As you can see, creating a basic service catalog and using the self-service portal in CloudForms is not rocket science. Of course, there is a lot more to learn, but there are also a lot of good resources to help you on your journey. For example, articles on this blog, the official documentation, and of course the excellent “Mastering CloudForms Automation” book written by Peter McGowan that I cannot recommend highly enough.
Quelle: CloudForms

Steve Singh Joins Docker’s Board of Directors

The whole team at Docker would like to welcome Steve Singh, CEO of Concur and Member of SAP’s Executive Board to the Docker family. Steve has accepted a role on Docker’s Board of Directors, bringing his deep experience in building world-class organizations to the Docker board. Steve leads the SAP Business Networks & Applications Group, which brings together teams from Ariba, Fieldglass, Concur, SAP Health, Business Data Network and SMP ERP groups. We had a chance to sit down with Steve to get his thoughts on his appointment to the Docker Board.

 
How and why did you initially become involved with Docker?
I was certainly aware of Docker. There were also a number of groups across SAP that were using Docker. When a member of the Docker board approached me about joining the company’s Board of Directors, I learned a fair bit more about the market opportunity Docker was pursuing and could easily see the importance of the Docker suite for corporate IT and ISVs. I was also intrigued by the opportunity to support Ben and Solomon in building an enduring business.
 
What led you to join the Board?
For me, there are two requirements when considering board roles. The first question I ask – is the company focused on a meaningful problem or opportunity? Docker is focused on giving every developer an opportunity to be independent of the infrastructure that their services are delivered upon. That’s a huge opportunity across corporate IT and every ISV. When you think about how software is becoming the foundation for every industry, you can see the importance of Docker. The second factor is the nature of the founders. It is important to me to work with people with whom I have shared values. I like people that care deeply about their teammates, their community and the legacy that they will leave. Solomon and Ben were down-to-earth people that had a passion for their company and their teammates. As a founder of a business, I was impressed that Solomon was trying to solve a big problem and wasn’t daunted by obstacles. I was hopeful that as a board member, I could help accelerate the mission that Solomon and Ben were executing against.
 

As a founder who built a high-growth startup yourself and then scaled it, how does that perspective guide how you view your board role?
If I look back at my own experience at Concur, I realized that the early board members were strong financial investors but that they didn’t have a lot of operational experience. I think that the role of the board should be to provide that experience and guidance. Our role is to help the team think through and define their strategy and to help attract, develop and retain incredible leadership talent.
 
SAP (Ariba), which is part of your business unit, is a Docker customer. Did that play a role in your decision to join the Docker board? 
As it turns out, a number of businesses within SAP use Docker and the reviews I received from developers around the company were phenomenal. They loved the Docker product. I couldn’t find one part of the organization that had used Docker and didn’t love it. So while it didn’t factor into my decision to join the board, it was certainly encouraging to see the high regard for Docker.
 
As a founder that has grown their organization from a startup to a company with several successful business units, are there lessons learned on how to continue and maintain that momentum?
Success is all about people – both the quality of the individuals that are part of the team and, perhaps more importantly, the culture that binds those individuals together. As your company gets larger, it is easy to lose your focus. It is easy for the “signal” to degrade from the founder to the newest person joining the team. Certainly part of that signal is the mission of the company, but the most important components of that signal are the values that define the company and the people that you want at your company. If you can keep that signal strong as you grow, you have every chance to build an incredible company. Not just one that succeeds financially and from a market perspective, but one that is like a second family.
 
What do you believe is compelling and unique about Docker’s commercial opportunities?
The entire Docker product line has massive opportunity and the open source and the commercial solutions feed into each other. I believe the opportunity is measured in the tens of billions as the demand for Docker among software developers and IT is growing at an unbelievable rate. Docker enables software developers and IT to plug and play into any infrastructure, which gives them control and real economic benefit. In the long term, SAP and other global 2000 companies will have leverage in working with their cloud providers because Docker enables 100 percent portability. This ensures that organizations will be able to seek competitive offerings while avoiding lock-in.
 
As you look ahead in the next year – what do you see as Docker’s priorities? What are the challenges? What do you see as the board’s challenges?
I see three main priorities for Docker in 2017. First, Ben and Solomon have to focus on recruiting to develop and bind together a great management team. It is not enough to recruit rock stars – companies need to develop teams that genuinely like working together. The mark of a successful team is one where colleagues form a friendship in a business environment. This reinforces their commitment as they really don’t want to let their peers down. Second, we need to make sure we continue to set the pace for our open source solutions and ensure that our commercial solution, Docker Datacenter (DDC), significantly exceeds customers’ expectations. Third, we need to crush our 2017 business metrics, which I believe we can.
 
Tell us a little bit about yourself – what do you enjoy doing when you are not in your role at Concur or fulfilling your board duties at Docker, Cornerstone OnDemand, etc.?
I get a tremendous amount of joy from working with others. Through their own example, my parents taught me that the measure of life is improving the trajectory of humanity – no matter how small or large that improvement is. For me, the best way to accomplish that is to help others. I strive to help my co-workers, friends, community and of course my family. When I am not working, I am with my wife and kids. We have an active family life, and my wife and I like to participate in what our children are doing – whether it is with our youngest, who is into horseback riding, or working with our son, who has started his own company, or visiting our oldest daughter, who is in her final year at college. Family, friends and community – everything else is transient.
The post Steve Singh Joins Docker’s Board of Directors appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Enterprise cloud strategy: Governance in a multi-cloud environment

Enterprises manage risk.
It’s a business reality that applies as much to cloud computing as it does to finance, operations or marketing.
To mitigate risk from data loss or downtime, or retain control of enterprise data and application strategy, organizations today often use two or more cloud providers in their cloud environments. This multi-cloud strategy can also improve overall enterprise performance by avoiding “vendor lock-in” and making use of different infrastructures to meet the needs of diverse applications.
Whether you’re a chief information officer or chief technology officer planning or implementing a multi-cloud strategy, you must make some critical decisions, the first being governance. Multi-cloud governance is essential for fast delivery of cloud services while also satisfying enterprise needs for budget control, visibility, security and compliance. It can be broken down into two areas: cloud services brokerage and control-plane abstraction.
Gartner defines cloud services brokerage as an IT role and business model in which a company or other entity adds value to one or more public or private cloud services. The organization does this on behalf of the departments or lines of business that use the service. An IT department can assume the role itself, or the organization may choose to hire a cloud services broker to help. Regardless of how you source your brokerage, consider several questions to know how effective it is:

Can your brokerage strategy compare capabilities of various clouds for workloads? That is, can it determine “which cloud” is appropriate by workload?
Will it help you manage your cloud expenditures across user groups, departments and projects?
Will your brokerage create a holistic view of your IT environment and service-level agreements?

As you answer these questions, remember: integration across disparate APIs and governance processes is key to unlocking multi-cloud governance success. When addressed properly, it can help manage all aspects of your cloud environment, including access and control, security and compliance, and customer records. It can even provide needed visibility into your environment and scale cloud capabilities.
To enable cloud freedom but still meet the enterprise’s security and compliance requirements, you need control-plane abstraction. Control-plane abstraction helps automate delivery of policies, procedures and configurations before cloud services are used. It helps reduce complexities and errors that easily arise in a multi-cloud environment.
That same kind of control is vital for multi-cloud environments. One example: a customer-service application deployed on cloud may need access to authentication, customer data, pricing and other services that are developed and deployed on-premises. Without integration and control, your workloads and applications could have functional deficiencies or security exposures.
To ensure smooth flying through your clouds, you must successfully manage, at a minimum, three facets of control-plane abstraction.
First, the platform must have the ability to orchestrate and automate blueprints and application patterns. For example, it should be able to develop infrastructure and application stacks. Your platform should also be able to deploy hardened images across clouds that adhere to security and compliance requirements.
Second, you need top-notch identity and access management. Your on-premises access policies — particularly role-based access — should be extended to all cloud platforms. Additionally, you must restrict native portal access to each cloud and control management access through common tooling.
Finally, incident, problem and change management solutions should be integrated to provide visibility — the proverbial “single pane of glass” — across multiple cloud environments from diverse providers. Warning: quality and service levels differ between service providers. Know the default service levels for each cloud.
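The three facets above lend themselves to a policy-as-code gate that runs before any deployment is allowed. A purely illustrative Python sketch; the baseline keys are hypothetical stand-ins for real checks against each provider's API:

```python
# Illustrative pre-deployment policy gate for a multi-cloud control plane.
# All field names are hypothetical; real checks would query provider APIs.
BASELINE = {
    "hardened_image": True,         # facet 1: approved, hardened images only
    "rbac_extended": True,          # facet 2: on-premises roles pushed to the cloud
    "monitoring_integrated": True,  # facet 3: feeds the "single pane of glass"
}

def violations(deployment: dict) -> list:
    """Return the baseline keys the proposed deployment fails to satisfy."""
    return [k for k, v in BASELINE.items() if deployment.get(k) != v]

proposed = {"hardened_image": True, "rbac_extended": False}
print(violations(proposed))  # → ['rbac_extended', 'monitoring_integrated']
```

Encoding the baseline once and evaluating every proposed deployment against it, regardless of provider, is what keeps the policy consistent across clouds.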
In practical terms, good governance in a multi-cloud environment means not being blindsided by unexpected costs, security problems, or poor platform and API integration. It’s the necessary first step in implementing your cloud strategy, transforming your organization and joining the digital revolution. Once you’ve done it, it’s time to take on applications and data in a multi-cloud environment, which I’ll discuss in my next post.
For more information about cloud brokerage services, read “Hybrid IT through Cloud Brokerage”.
The post Enterprise cloud strategy: Governance in a multi-cloud environment appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

IBM wins Frost & Sullivan 2016 Cloud Company of the Year award

Market research firm Frost & Sullivan has conferred its 2016 Cloud Company of the Year award to IBM, citing hybrid integration and affordability as major factors.
Lynda Stadtmueller, Vice President of Cloud Services for Stratecast/Frost & Sullivan, explained that IBM Cloud was chosen because it “supports the concept of ‘hybrid integration’ — that is, a hybrid IT environment in which disparate applications and data are linked via a comprehensive integration platform, allowing the apps to share common management functionality and control.”
The capabilities she noted enable Bluemix users to tap into analytics functionality and Watson.
Stadtmueller continued: “IBM Cloud offers a price-performance advantage over competitors due to its infrastructure configurations and service parameters — including a bare metal server option; single-tenant (private) compute and storage options; granular capacity selections for processing, memory, and network for public cloud units; and all-included technical support.”
IBM VP of Cloud Strategy and Portfolio Management Don Boulia said the award “recognizes the extraordinary range and depth of IBM’s cloud services portfolio.”
Other IBM capabilities Frost & Sullivan cited were its scalable cloud portfolio, extensive connectivity and microservices.
For more, check out Read IT Quik’s full article.
The post IBM wins Frost & Sullivan 2016 Cloud Company of the Year award appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

CloudForms 4.2 Beta 1 (Public)

Welcome to the CloudForms 4.2 Beta 1 release. The beta program will run for a number of weeks starting Halloween 2016.
Please note this is a beta blog post and therefore should NOT be used to confirm the contents of the GA release of this product.
Let’s break down the mega release into various sections of the platform for a quick review:
Providers
VMware vCloud Air/Director
This new provider has been developed in conjunction with XLAB.SI. It delivers the following capabilities:
Inventory

Collect vApps
Collect Datacenters

Events
Event Catcher and Switchboard support
Metrics – Not yet supported
LifeCycle

Provision vCloud Apps (vApps) from CloudForms Service Catalog and Operations UI

VMware vSphere

New Dashboard for vSphere Provider.
Allow for cluster-only selection – We had a requirement to allow users to select only the cluster, and not specify the host or datastore. During provisioning on VMware vSphere you can now do this: select only the cluster, and if the cluster supports DRS it will automatically choose a host and datastore on the VMware side of the house.
Provisioning with Storage Profiles – You can now provision in CloudForms with support for VMware Storage Profiles. VMware Storage Profiles let you assign policies to datastores, such as production or test. In CloudForms we pre-filter the datastore selection based on these profiles.

Red Hat Virtualization

Snapshot Management – Take/Restore from Snapshots within RHV.
Disk Management – Connect/Disconnect drives to your virtual machines. Fully supporting VM reconfigure.

Middleware (Hawkular)
Inventory

Clusters
Hosts
Entities
Topology
Applications
Templates
Datasources
Drivers
Deployment status
Cross linking

Dedicated performance reports for Hawkular are also included.
Events

Receive Events
Support for Alert Profiles and automated expressions for middleware servers

Metrics
The Hawkular provider supports live metrics. This means that when you view the charts within CloudForms, we grab the live metrics from the server at that time for the following:

Datasources
Transactions
JMS Topics
Queues

Life Cycle Operations

Deploy Application
Upload WAR
Create Datasource(s)
Add JDBC Drivers

OpenStack Cloud

Create/Update/Delete OpenStack Cloud Tenants
Create/Update/Delete Host Aggregates
Take and Remove Snapshots at VM level

OpenStack Infrastructure

New topology view of the Under Cloud
Ironic Controls Added for Hosts

Set as Manageable
Introspect Nodes
Provide Nodes

OpenStack Neutron

Create/Update/Delete Router
Create/Update/Delete Network
Create/Update/Delete Subnet
Inventory of Network Ports

OpenStack Swift
New provider in a new Storage menu. This provider class will be built out in future releases.

Inventory

OpenStack Cinder
New provider in a new Storage menu. This provider class will be built out in future releases.

Inventory
Snapshot Support for Volumes exposed in the UI and Automate.
Create/Restore from Backup exposed in the UI and Automate.

OpenShift Enterprise

View Container Templates
Chargeback for container images – Enabling images to support a fixed cost. This can contribute a base image cost to a variable utilization report for pods and applications.
Chargeback based on container image tags.
Support for Custom Attributes – Now we see the OpenShift labels as custom attributes.
Allow policies to prevent image scans. This is useful if you wish to stop CloudForms from inspecting certain images for security or performance reasons.
Reports: Pods for images per project and Pods per node.

Google Cloud

Metrics – CPU, Memory and Network.
Load Balancer Inventory.
Load Balancer Health Checks – Shown in inventory and actionable using automation.
Hide deprecated images from provisioning.
Preemptible Instances – Google's Preemptible Instances are a low-cost way of getting compute, coming with restrictions such as termination without notice. CloudForms supports the provisioning of these instances.
Retirement Support.

Microsoft Azure

Additional metrics beyond CPU, such as:

Memory
Disk

Chargeback for Fixed, Allocated and Utilized costs for VM resources.
Support for Floating IPs during provisioning.
Load Balancer inventory.

Microsoft SCVMM

Bug fixes.

Amazon EC2
New CloudForms Appliance Image – this means you can now run CloudForms in Amazon EC2 without any other hosting infrastructure required.
User Interfaces
Both

Single Level Proxy Support – allows users to access the remote console for workloads that may be behind a firewall (e.g. at service providers). You can configure CloudForms to proxy remote console sessions when direct host visibility is not available. This capability is also exposed to Automate.
Notification Drawer – users can receive both toasts and notifications from any event happening in CloudForms. This means that during provisioning, as various phases pass, such as approval or quota check, you can notify the user that this has happened. Furthermore, we have enabled this with a helper method in Automate, meaning that any Automate method can emit notifications. Notifications can be read or saved, and the drawer holds a history of previous notifications.

Operations UI

Topology viewer added for Infrastructures and Cloud Providers.
New toggle view to switch between classic inventory view and new dashboard view.
Schedule automate tasks – run once or recurring.
VM Explorer Trees – a new setting, enabled by default, removes VMs from the explorer trees, as listing them caused a substantial performance hit: page load time dropped from 93,770 ms to 524 ms (a 99% improvement) in a test with 20,000 VMs. VM display can be re-enabled for smaller environments under My Settings > Services > Workloads > All VMs.
Timelines – new Timelines component for the timelines view on VMs, providers and other objects supporting this feature.
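The 99% figure quoted for the VM explorer change is simply the relative reduction between the two page-load timings; a quick check:

```python
# Relative improvement between the two quoted VM-explorer page-load times.
before_ms = 93_770  # with VMs shown in the explorer trees
after_ms = 524      # with VMs removed (the new default)

improvement = (before_ms - after_ms) / before_ms * 100
print(f"{improvement:.1f}%")  # → 99.4%
```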

Service UI

New support for chargeback roll-up data per service in My Services. Shows $/$$/$$$ costings.
Service Power Operations – you can now stop/start/suspend an entire service composed of multiple VMs.
Confirmation when deleting items from your shopping cart.
Cockpit Integration – Red Hat Enterprise Linux 7.x systems can be managed and configured using the Cockpit server manager interface. CloudForms now allows launching the Cockpit UI in a new window for systems identified as enabled.

Platform
Chargeback

Numerous changes to Chargeback to improve accuracy in results.

Centralized Administration

This feature supports larger CloudForms installations where the customer wants a single entry point into CloudForms spanning any number of regions or zones set up globally. We have long supported the notion of a reporting region, which allows central reporting on any data rolled up from child regions to the parent reporting region. With centralized administration you can now not only report but also perform some lifecycle tasks, such as:

VM Power Operations – start/stop/suspend a VM in any region from your central region.
VM Retirement – retire a VM in any region from your central region.

Tenancy
Tenancy has seen two major changes in this release, as follows:
OpenStack and our Tenancy
You can now synchronize the tenants that exist within OpenStack to CloudForms. This means you can, as an administrator, define some simple mapping rules and CloudForms will automatically keep the tenants that exist within the OpenStack providers synchronized to those in CloudForms.
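Conceptually, that synchronization is a set difference between the tenants the provider reports and those CloudForms already knows about. A minimal sketch, assuming the simplest possible mapping rule (mirror everything); the function and names are invented for illustration:

```python
# Minimal sketch of tenant synchronization: compare the tenants an
# OpenStack provider reports against those defined in CloudForms and
# work out what must be created or archived. Names are illustrative.

def sync_tenants(provider_tenants, cloudforms_tenants):
    """Return (to_create, to_archive) tenant-name lists."""
    to_create = sorted(set(provider_tenants) - set(cloudforms_tenants))
    to_archive = sorted(set(cloudforms_tenants) - set(provider_tenants))
    return to_create, to_archive

create, archive = sync_tenants(
    provider_tenants=["engineering", "qa", "ops"],
    cloudforms_tenants=["engineering", "legacy"],
)
print(create, archive)  # → ['ops', 'qa'] ['legacy']
```

Real mapping rules would filter or rename tenants rather than mirror everything, but the create/archive reconciliation step is the same.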
CloudForms Tenancy
Ad-hoc sharing of resources across tenants. This will allow users to select an item in their view and share it with anyone in any other tenant in CloudForms.
Database Maintenance
The results from numerous support surveys show that the database can suffer performance or stability issues when maintenance is not carried out regularly. Therefore we are including in the "Black Console" menus the ability to configure database maintenance activities.
Database High Availability
The product now supports PostgreSQL high availability, in a primary/stand-by configuration; you can control failover manually or use a heartbeat to fail over automatically. The feature is easily enabled from the "Black Console" menu.
Automate

Import Automate Models from Git Repository

Fully UI configurable and managed.
Post-commit hooks – automatically synchronize changes to the CloudForms appliances enabled with the Git Server Role.
Tags – select what is synchronized by tag.
Branches – select what is synchronized by branch.
Supports certificates

Schedule automate tasks – you can now create tasks that are triggered on a timed schedule.
Notifications – you can call $evm.create_notification(:message => "my custom message"); error levels and subjects are supported too. This allows you to provide feedback directly to your users from Automate. For example, if you have an automate script that exports, converts and imports a VM from one platform to another, you could notify the user who initiated the task when each phase has completed. Previously the only messaging to the user was email; with notifications you have live feedback through the UI, direct to the user.

Quelle: CloudForms

What you missed at OpenStack Barcelona

The post What you missed at OpenStack Barcelona appeared first on Mirantis | The Pure Play OpenStack Company.
The OpenStack Summit in Barcelona was, in some ways, like those that had preceded it – and in other ways, it was very different. As in previous years, the community showed off large customer use cases, but there was something different this year: whereas before it had been mostly early adopters – and the same early adopters, for a time – this year there were talks from new faces with very large use cases, such as Sky TV and Banco Santander.
And why not? Statistically, OpenStack seems to have turned a corner. The semi-annual user survey shows that workloads are no longer just development and testing but actual production; users are no longer limited to huge corporations but also include small to medium-sized businesses; containers have gone from an existential threat to a solution to work with, not fight; and concerns about interoperability seem, finally, to have been squashed.
Let's look at some of the highlights of the week.

It's traditional to bring large users up on stage during the keynotes, but this year, with users such as Spain's largest bank, Banco Santander, Britain's broadcaster Sky UK, the world's largest particle physics laboratory, CERN, and the world's largest retailer, Walmart, it seemed more like showing what OpenStack can do than in previous years, when it was more about proving that anybody was actually using it in the first place.
For example, Cambridge's Dr. Rosie Bolton talked about the SKA radio observatory, which will look at 65,000 frequency channels, consuming and destroying 1.3 zettabytes of data every six hours. The project will run for 50 years and cost over a billion dollars.

This.is.Big.Data @OpenStack   pic.twitter.com/XgT3eEjDVh
— Sean Kerner (@TechJournalist) October 25, 2016

OpenStack Foundation CEO Mark Collier also introduced enhancements to the OpenStack Project Navigator, which provides information on the individual projects and their maturity, corporate diversity, adoption, and so on. The Navigator now includes a Sample Configs section, which provides the projects that are normally used for various use cases, such as web applications, eCommerce, and high throughput computing.
Research from 451 Research
The Foundation also talked about findings from a new 451 Research report that looked at OpenStack adoption and challenges.  
Key findings from the 451 Research include:

Mid-market adoption shows that OpenStack use is not limited to large enterprises. Two-thirds of respondents (65 percent) are in organizations of between 1,000 and 10,000 employees.
OpenStack-powered clouds have moved beyond small-scale deployments. Approximately 72 percent of OpenStack enterprise deployments are between 1,000 and 10,000 cores in size. Additionally, five percent of OpenStack clouds among enterprises top the 100,000-core mark.
OpenStack supports workloads that matter to enterprises, not just test and dev. These include infrastructure services (66 percent), business applications and big data (60 percent and 59 percent, respectively), and web services and ecommerce (57 percent).
OpenStack users can be found in a diverse cross section of industries. While 20 percent cited the technology industry, the majority come from manufacturing (15 percent), retail/hospitality (11 percent), professional services (10 percent), healthcare (7 percent), insurance (6 percent), transportation (5 percent), communications/media (5 percent), wholesale trade (5 percent), energy & utilities (4 percent), education (3 percent), financial services (3 percent) and government (3 percent).
Increasing operational efficiency and accelerating innovation/deployment speed are top business drivers for enterprise adoption of OpenStack, at 76 and 75 percent, respectively. Supporting DevOps is a close second, at 69 percent. Reducing cost and standardizing on OpenStack APIs were close behind, at 50 and 45 percent, respectively.

The report talked about the challenge OpenStack faces from containers in the infrastructure market, but contrary to the notion that more companies were leaning on containers than OpenStack, the report pointed out that OpenStack users are adopting containers at a faster rate than the rest of the enterprise market, with 55 percent of OpenStack users also using containers, compared to just 17 percent across all respondents.
According to Light Reading, "451 Research believes OpenStack will succeed in private cloud and providing orchestration between public cloud and on-premises and hosted OpenStack."
The Fall 2016 OpenStack User Survey
The OpenStack Summit is also where we hear the results of the semi-annual user survey. In this case, the key findings among OpenStack deployments include:

Seventy-two percent of OpenStack users cite cost savings as their No. 1 business driver.
The Net Promoter Score (NPS) for OpenStack deployments, an indicator of user satisfaction, continues to tick up, and is now eight points higher than a year ago.
Containers continue to lead the list of emerging technologies, as they have for three consecutive survey cycles. In the same question, interest in NFV and bare metal is significantly higher than a year ago.
Kubernetes shows growth as a container orchestration tool.
Seventy-one percent of deployments catalogued are in “production” versus in testing or proof of concept. This is a 20 percent increase year over year.
OpenStack is adopted by companies of every size. Nearly one-quarter of users are organizations smaller than 100 people.
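For readers unfamiliar with the NPS metric cited above, it is derived from 0–10 satisfaction ratings: the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6). A small sketch with made-up ratings:

```python
# Net Promoter Score: % promoters (ratings 9-10) minus % detractors (0-6).
# The sample ratings below are invented for illustration.

def nps(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

sample = [10, 9, 9, 8, 7, 6, 10, 5, 9, 8]
print(nps(sample))  # 5 promoters, 2 detractors out of 10 → 30
```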

New this year is the ability to explore the full data, rather than just relying on highlights.
Community announcements
Also announced during the keynotes were new Foundation Gold members, the winner of the SuperUser award, and progress on the Foundation's Certified OpenStack Administrator exam.
The OpenStack Foundation charter allows for 24 Gold member companies, who together elect 8 board directors to represent them. (The rest of the board comprises one director chosen by each of the 8 Platinum member companies and 8 individual directors elected by the community at large.) Gold member companies must be approved by existing board members, and this time around City Network, Deutsche Telekom, 99Cloud and China Mobile were added.
China Mobile was also given the Superuser award, which honors a company's commitment to and use of OpenStack.
Meanwhile, in Austin, the Foundation announced the Certified OpenStack Administrator exam, and in the past six months, 500 individuals have taken advantage of the opportunity.
And then there were the demos…
While demos used to be simply about showing how the software works, that now seems to be a given, and instead demos were done to tackle serious issues. For example, Network Functions Virtualization is a huge subject for OpenStack users – in fact, 86% of telcos say OpenStack will be essential to their adoption of the technology – but what is it, exactly? Mark Collier and representatives of the OPNFV and Vitrage projects were able to demonstrate how OpenStack applies in this case, showing how a High Availability Virtual Network Function (VNF) enables the system to keep a mobile phone call from disconnecting even if a cable or two is cut. (In this case, literally, as Mark Collier took a comically huge pair of scissors to the hardware.)
But perhaps the demo that got the most attention wasn't so much a demo as a challenge. One of the criticisms constantly leveled against OpenStack is that there's no "vanilla" version – that despite the claims of freedom from lock-in, each distribution of OpenStack is so different from the others that it's impossible to move an application from one distro to another.
To fight that charge, the OpenStack community has been developing RefStack, a series of tests that a distro must pass in order to be considered "OpenStack". But beyond that, IBM issued the "Interoperability Challenge," which required teams to take a standard deployment tool – in this case, based on Ansible – and use it, unmodified, to create a WordPress-hosting LAMP stack.
In the end, 18 companies joined the challenge, and 16 of them appeared on stage to simultaneously take part.
So the question remained: would it work?  See for yourself:

Coming up next
The next OpenStack Summit will be in Boston, May 8-12, 2017. For the first time, however, it won't include the OpenStack Design Summit, which will be replaced by a separate Project Teams Gathering, so it's likely to once again have a different feel and flavor as the community – and the OpenStack industry – grows.
Quelle: Mirantis

5 Tales from the Docker Crypt

(Cue the Halloween music)
Welcome to my crypt. This is the crypt keeper speaking and I'll be your spirit guide on your journey through the dangerous and frightening world of IT applications. Today you will learn about 5 spooky application stories covering everything from cobweb-covered legacy processes to shattered CI/CD pipelines. As these stories unfold, you will hear how Docker helped banish cost, complexity and chaos.
Tale 1 – “Demo Demons”
Splunk was on a mission to enable their employees and partners across the globe to deliver demos of their software regardless of where they’re located in the world, and have each demo function consistently. These business critical demos include everything from Splunk security, to web analytics and IT service intelligence. This vision proved to be quite complex to execute. At times their SEs would be in customer meetings, but their demos would sometimes fail. They needed to ensure that each of their 30 production demos within their Splunk Oxygen demo platform could live forever in eternal greatness.
To ensure their demos work smoothly with customers, Splunk uses Docker Datacenter, our on-premises solution that brings container management and deployment services to the enterprise via an integrated platform. Images are stored within the on-premises Docker Trusted Registry and are connected to their Active Directory server so that users have the correct role-based access to the images. These images are accessible to authenticated users outside the corporate firewall. Their sales engineers can now pull the images from DTR and give the demo offline, ensuring that anyone who goes out and represents the Splunk brand can demo without demise.
Tale 2 – “Monster Maintenance”
Cornell University's IT team was spending too many resources maintaining their installation of Confluence. The team spent 1,770 hours maintaining applications over a six-month period and needed immutable infrastructure that could be easily torn down once processes were complete. Portability across their application lifecycle, which included everything from development to production, was also a challenge.
With a Docker Datacenter (DDC) commercial subscription from Docker, they now host their Docker images in a central location, allowing multiple organizations to access them securely. Docker Trusted Registry provides high availability via DTR replicas, ensuring that their dockerized apps are continuously available, even if a node fails. With Docker, they experience a 10X reduction in maintenance time. Additionally, the portability of Docker containers helps their workloads move across multiple environments, streamlining their application development and deployment processes. The team is now able to deploy applications 13X faster than in the past by leveraging reusable architecture patterns and simplified build and deployment processes.
Tale 3 – “Managing Menacing Monoliths and Microservices!”
SA Home Loans, a mortgage firm located in South Africa, was experiencing slow application deployment speeds. It took them two weeks just to get newly developed applications over to their testing environment, slowing innovation. These issues extended to production as well. Their main home loan servicing software, a mixture of monolithic Windows services and IIS applications, was complex and difficult to update, placing a strain on the business. Even scarier was that when they deployed new features or fixes, they didn't have an easy or reliable rollback plan if something went wrong (no blue/green deployment). In addition, their company decided to adopt a microservices architecture. They soon realized that upon completion of this project they'd have over 50 separate services across their Dockerized nodes in production! Orchestration now presented itself as a new challenge.
To solve their issues, SA Home Loans trusts in Docker Datacenter. SA Home Loans can now deploy apps 30 times more often! The solution also provides the production-ready container orchestration solution that they were looking for. Since DDC has embedded swarm within it, it shares the Docker engine APIs, and is one less complex thing to learn. The Docker Datacenter solution provides ease of use and familiar frontend for the ops team.
 
Tale 4 – “Unearthly Labor”
USDA’s legacy website platform consisted of seven manually managed monolithic application servers that implemented technologies using traditional labor-intensive techniques that required expensive resources. Their systems administrators had to SSH into individual systems deploying updates and configuration one-by-one. USDA discovered that this approach lacked the flexibility and scalability to provide the services necessary for supporting their large number of diverse apps built with PHP, Ruby, and Java – namely Drupal, Jekyll, and Jira. A different approach would be required to fulfill the shared platform goals of USDA.
USDA now uses Docker and has expedited their project and modernized their entire development process. In just five weeks, they launched four government websites on their new dockerized platform to production. Later, an additional four websites were launched, including one for the First Lady, Michelle Obama, without any additional hardware costs. By using Docker, the USDA saved upwards of $150,000 in technology infrastructure costs alone. Because they could leverage a shared infrastructure model, they were also able to reduce labor costs. Using Docker provided the USDA with the agility needed to develop, test, secure, and even deploy modern software in a high-security federal government datacenter environment.
Tale 5 – “An Apparition of CI/CD”
Healthdirect dubbed their original application development process “anti-CI/CD,” as it was broken and made it difficult to create a secure end-to-end CI/CD pipeline. They had a CI/CD process for the infrastructure team but were unable to repeat the process across multiple business units. The team wanted repeatability but lacked the ability to deploy their apps and provide 100% hands-off automation.
Today Healthdirect is using Docker Datacenter. Now their developers are empowered in the release process, and code developed locally ships to production without changes. With Docker, Healthdirect was able to innovate faster and deploy their applications to production with ease.
So there they are: five spooky tales for you on this Halloween day. To learn more about Docker Datacenter, check out this demo.
Now, be gone from my crypt. It’s time for me to retire back to my coffin.
Oh and one more thing….Happy Halloween!!
For more resources:

Hear from Docker customers
Learn more about Docker Datacenter
Sign up for your 30 day free evaluation of Docker Datacenter

 


The post 5 Tales from the Docker Crypt appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Considerations for Running Docker for Windows Server 2016 with Hyper-V VMs

We often get asked at Docker, “Where should I run my application? On bare metal, virtual or cloud?” The beauty of Docker is that you can run a container anywhere, so we usually answer this question with “It depends.” Not what you were looking for, right?
To answer this, you first need to consider which infrastructure makes the most sense for your application architecture and business goals. We get this question so often that our technical evangelist, Mike Coleman has written a few blogs to provide some guidance:

To Use Physical Or To Use Virtual: That Is The Container Deployment Question
So, When Do You Use A Container Or VM?

During our recent webinar, titled “Docker for Windows Server 2016”, this question came up a lot, specifically what to consider when deploying a Windows Server 2016 application in a Hyper-V VM with Docker, and how it works. First, you'll need to understand the differences between Windows Server containers, Hyper-V containers, and Hyper-V VMs before considering how they work together.
A Hyper-V container is a Windows Server container running inside a stripped down Hyper-V VM that is only instantiated for containers.

This provides additional kernel isolation and separation from the host OS for the containerized application. Hyper-V containers automatically create a Hyper-V VM from the application's base image, and that VM includes the required application binaries and libraries inside the Windows container. For more information on Windows containers, read our blog. Whether your application runs as a Windows Server container or as a Hyper-V container is a runtime decision. Additional isolation is a good option for multi-tenant environments. No changes are required to the Dockerfile or image; the same image can be run in either mode.
Here are the top Hyper-V container questions, with answers:
Q: I thought that containers do not need a hypervisor?
A: Correct, but since a Hyper-V container packages the same container image with its own dedicated kernel it ensures tighter isolation in multi-tenant environments which may be a business or application requirement for specific Windows Server 2016 applications.
Q: ­Do you need a hypervisor layer before the OS in both Hyper-V and Docker for Windows Server containers?
A: The hypervisor is optional. With Windows Server containers, isolation is achieved not with a hypervisor but with process isolation and filesystem and registry sandboxing.
Q: Can Hyper-V containers be managed from the Hyper-V Manager, in the same way that VMs are (i.e. turned on/off, check memory usage, etc.)?
A: While Hyper-V is the runtime technology powering Hyper-V isolation, Hyper-V containers are not VMs; they neither appear as a Hyper-V resource nor can they be managed with classic Hyper-V tools like Hyper-V Manager. Hyper-V containers are only executed at runtime by the Docker Engine.
Q: Can you run Windows Server container and Hyper-V Containers running Linux workloads on the same host?
A: Yes. You can run a Hyper-V VM with a Linux OS on a physical host running Windows Server.  Inside the VM, you can run containers built with Linux.

Next week we'll bring you the next blog in our Windows Server 2016 Q&A series – Top questions about Docker for SQL Server Express. See you again next week.
For more resources:

Learn more: www.docker.com/microsoft
Read the blog: Webinar Recap: Docker For Windows Server 2016
Learn how to get started with Docker for Windows Server 2016
Read the blog to get started shifting a legacy Windows virtual machine to a Windows Container


The post Considerations for Running Docker for Windows Server 2016 with Hyper-V VMs appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Fuel plugins: Getting Started with Tesora DBaaS Enterprise Edition and Mirantis OpenStack

The post Fuel plugins: Getting Started with Tesora DBaaS Enterprise Edition and Mirantis OpenStack appeared first on Mirantis | The Pure Play OpenStack Company.
The Tesora Database as a Service platform is an enterprise-hardened version of OpenStack Trove, offering secure private cloud access to the most popular open source and commercial databases through a single consistent interface.
In this guide we will show you how to install Tesora in a Mirantis OpenStack environment.
Prerequisites
In order to deploy Tesora DBaaS, you will need a Fuel server with the Tesora plugin installed. Start by making sure you have:

A Fuel server up and running. (See the Quick Start for instructions if necessary.)
Discovered nodes for controllers, compute and storage
A discovered node dedicated to the Tesora controller.

Now let's go ahead and add the plugin to Fuel.
Step 1 Adding the Tesora Plugin to Fuel
To add the Tesora plugin to Fuel, follow these steps:

Download the Tesora plugin from the Mirantis plugin page, located at:

https://www.mirantis.com/validated-solution-integrations/fuel-plugins/

Once you have downloaded the plugin, copy the plugin file to your Fuel Server using the scp command, as in:
$scp tesora-dbaas-1.7-1.7.7-1.noarch.rpm root@[fuel server ip]:/tmp

After copying the Fuel Plugin to the fuel server, add it to the fuel plugin list. First ssh to the Fuel server:
$ssh root@[fuel server ip]

Next, add the plugin to Fuel:
[root@fuel ~]# fuel plugins --install tesora-dbaas-1.7-1.7.7-1.noarch.rpm

Finally, verify that the plugin has been added to Fuel:
[root@fuel ~]# fuel plugins
id | name                     | version | package_version
—|————————–|———|—————-
1  | fuel-plugin-tesora-dbaas | 1.7.7   | 4.0.0

If the plugin was successfully added, you should see it listed in the output.
Step 2 Add Tesora DBaaS to an OpenStack Environment
From here, it's a matter of creating an OpenStack cluster that uses the new plugin. You can do that by following these steps:

Connect to the Fuel UI and log in with the admin credentials using your browser.
Create a new OpenStack environment. Follow the prompts and either leave the defaults or alter them to suit your environment.
Before adding new nodes, enter the environment, select the Settings tab, and then Other on the left-hand side of the window.
Select Tesora DBaaS Platform and enter the username and password supplied to you by Tesora. These credentials will be used to download the database images provided by Tesora to the Tesora DBaaS controller. Finish by typing "I Agree" to accept the Terms of Use, and click Save Settings.
Now create your environment by assigning nodes to the following roles:

Compute
Storage
Controller
Tesora DBaaS Controller

As shown in the image below:

After you have finished adding the roles, go ahead and deploy the environment.

Step 3 Importing the Database image files to the Tesora DBaaS Controller
Once the environment is built, it's time to import the database images.

From the Fuel Server, SSH to the Tesora DBaaS controller server.  You can find the IP address of the Tesora DBaaS controller by entering the following command:
[root@fuel ~]# sudo fuel node list | grep tesora
9  | ready  | Untitled (61:ef) | 4       | 10.20.0.6 | 08:00:27:a3:61:ef | tesora-dbaas    |               | True   | 4

After identifying the IP address, ssh from the Fuel server to the Tesora DBaaS controller:
[root@fuel ~]# sudo ssh root@10.20.0.6
Warning: Permanently added ‘10.20.0.6’ (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-93-generic x86_64)
* Documentation:  https://help.ubuntu.com/
Last login: Wed Aug 10 23:51:29 2016 from 10.20.0.2

Next, load the pre-built database images. After logging into the DBaaS controller, change your working directory to /opt/tesora/dbaas/bin:
root@node-9:~# cd /opt/tesora/dbaas/bin

Now source the Tesora environment variables:
root@node-9:/opt/tesora/dbaas/bin# source openrc.sh

After setting your variables, you can now import your database images with the following command:
root@node-9:/opt/tesora/dbaas/bin# ./add-datastore.sh mysql 5.6
Installing guest ‘tesora-ubuntu-trusty-mysql-5.6-EE-1.7′

Above is an example of loading MySQL version 5.6. The format of the command is:
add-datastore.sh DBtype version

To get a list of available databases and versions, please see the link below:

https://tesoradocs.atlassian.net/wiki/display/EE17CE16/Import+Datastore

Once you have imported your database images, it's time to go to Horizon.
Step 4 Create and Access a Database Instance
Now you can go ahead and create the actual database. Log into your Horizon dashboard from within Fuel. On the left-hand side, click Tesora Databases.

From here, you have the following options:

Instances: This option enables you to create, delete and display any database instances that are currently running.
Clusters: This option enables you to create and manage a clustered database environment.
Backups: Create or view backups of any currently running database images.
Datastores: List all databases that have been imported.
Configuration Groups: This option enables you to manage database configuration tasks by using configuration groups, which make it possible to set configuration parameters, in bulk, on one or more databases.
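The configuration-group idea in the last option can be pictured as one named parameter set applied across many instances at once. This is a sketch of the concept only, not the Trove or Tesora API:

```python
# Conceptual sketch of a configuration group: one named set of database
# parameters applied, in bulk, to several instances. Names are
# illustrative; this is not the real Trove/Tesora API.

def apply_configuration_group(group_params, instances):
    """Return each instance's effective settings with the group applied."""
    return {name: {**settings, **group_params}
            for name, settings in instances.items()}

instances = {
    "orders-db":  {"max_connections": 100},
    "billing-db": {"max_connections": 100, "slow_query_log": "off"},
}
effective = apply_configuration_group({"slow_query_log": "on"}, instances)
print(effective["billing-db"]["slow_query_log"])  # → on
```

The group's parameters override each instance's local settings, which is what lets you change a parameter on many databases in one step.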

At this point Tesora DBaaS should be up and running, enabling you to deploy, configure and manage databases in your environment.
Quelle: Mirantis