Fuel plugins: Getting Started with Tesora DBaaS Enterprise Edition and Mirantis OpenStack

The post Fuel plugins: Getting Started with Tesora DBaaS Enterprise Edition and Mirantis OpenStack appeared first on Mirantis | The Pure Play OpenStack Company.
The Tesora Database as a Service platform is an enterprise-hardened version of OpenStack Trove, offering secure private cloud access to the most popular open source and commercial databases through a single consistent interface.
In this guide we will show you how to install Tesora in a Mirantis OpenStack environment.
Prerequisites
In order to deploy Tesora DBaaS, you will need a Fuel server with the Tesora plugin installed. Start by making sure you have:

A Fuel server up and running. (See the Quick Start for instructions if necessary.)
Discovered nodes for controllers, compute, and storage
A dedicated discovered node for the Tesora controller

Now let's go ahead and add the plugin to Fuel.
Step 1 Adding the Tesora Plugin to Fuel
To add the Tesora plugin to Fuel, follow these steps:

Download the Tesora plugin from the Mirantis Plugin page, located at:

https://www.mirantis.com/validated-solution-integrations/fuel-plugins/

Once you have downloaded the plugin, copy the plugin file to your Fuel server using the scp command, as in:
$ scp tesora-dbaas-1.7-1.7.7-1.noarch.rpm root@[fuel server ip]:/tmp

After copying the plugin to the Fuel server, add it to the Fuel plugin list. First, ssh to the Fuel server:
$ ssh root@[fuel server ip]

Next, add the plugin to Fuel:
[root@fuel ~]# fuel plugins --install tesora-dbaas-1.7-1.7.7-1.noarch.rpm

Finally, verify that the plugin has been added to Fuel:
[root@fuel ~]# fuel plugins
id | name                     | version | package_version
---|--------------------------|---------|----------------
1  | fuel-plugin-tesora-dbaas | 1.7.7   | 4.0.0

If the plugin was successfully added, you should see it listed in the output.
Step 2 Add Tesora DBaaS to an OpenStack Environment
From here, it's a matter of creating an OpenStack cluster that uses the new plugin. You can do that by following these steps:

Using your browser, connect to the Fuel UI and log in with the admin credentials.
Create a new OpenStack environment. Follow the prompts and either leave the defaults or alter them to suit your environment.
Before adding new nodes, enter the environment, select the Settings tab, and then select Other on the left-hand side of the window.
Select Tesora DBaaS Platform and enter the username and password supplied to you by Tesora. The username and password will be used to download the database images provided by Tesora to the Tesora DBaaS controller. Finish by typing "I Agree" to show that you agree to the Terms of Use, then click Save Settings.
Now create your environment by assigning nodes to the following roles:

Compute
Storage
Controller
Tesora DBaaS Controller

As shown in the image below:

After you have finished adding the roles, go ahead and deploy the environment.

Step 3 Importing the Database image files to the Tesora DBaaS Controller
Once the environment is built, it's time to import the database images.

From the Fuel server, SSH to the Tesora DBaaS controller. You can find the IP address of the Tesora DBaaS controller by entering the following command:
[root@fuel ~]# sudo fuel node list | grep tesora
9  | ready  | Untitled (61:ef) | 4       | 10.20.0.6 | 08:00:27:a3:61:ef | tesora-dbaas    |               | True   | 4

After identifying the IP address, ssh to the Tesora DBaaS controller:
[root@fuel ~]# sudo ssh root@10.20.0.6
Warning: Permanently added '10.20.0.6' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-93-generic x86_64)
* Documentation:  https://help.ubuntu.com/
Last login: Wed Aug 10 23:51:29 2016 from 10.20.0.2

Next, load the pre-built database images. After logging into the DBaaS controller, change your working directory to /opt/tesora/dbaas/bin:
root@node-9:~# cd /opt/tesora/dbaas/bin

Now export your Tesora environment variables:
root@node-9:/opt/tesora/dbaas/bin# source openrc.sh

After setting your variables, you can now import your database images with the following command:
root@node-9:/opt/tesora/dbaas/bin# ./add-datastore.sh mysql 5.6
Installing guest 'tesora-ubuntu-trusty-mysql-5.6-EE-1.7'

The example above loads MySQL version 5.6. The format of the command is:
add-datastore.sh DBtype version
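If you need to import several datastores, the same command format can be scripted. Here is a hypothetical sketch; the datastore names and versions below are examples only, so check Tesora's supported list before using them:

```python
# Hypothetical batch import; each (type, version) pair must be a
# datastore that Tesora actually ships.
datastores = [("mysql", "5.6"), ("percona", "5.6"), ("mongodb", "3.2")]

def import_command(db_type, version):
    # Mirrors the documented format: add-datastore.sh DBtype version
    return "./add-datastore.sh {} {}".format(db_type, version)

for db, ver in datastores:
    print(import_command(db, ver))
```

On the DBaaS controller itself, you would run each printed command (or call add-datastore.sh directly in a shell loop) after sourcing openrc.sh.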

For a list of available databases and versions, please see the link below:

https://tesoradocs.atlassian.net/wiki/display/EE17CE16/Import+Datastore

Once you have imported your database images, it's time to go to Horizon.
Step 4 Create and Access a Database Instance
Now you can go ahead and create the actual database. Log into your Horizon dashboard from within Fuel. On the left-hand side, click Tesora Databases.

From here, you have the following options:

Instances: This option enables you to create, delete, and display any database instances that are currently running.
Clusters: This option enables you to create and manage a clustered database environment.
Backups: Create or view backups of any currently running database instances.
Datastores: List all databases that have been imported.
Configuration Groups: This option enables you to manage database configuration tasks by using configuration groups, which make it possible to set configuration parameters, in bulk, on one or more databases.

At this point Tesora DBaaS should be up and running, enabling you to deploy, configure and manage databases in your environment.
Source: Mirantis

OPNFV Functional Testing, TOSCA Orchestration, and vIMS Use Cases

The post OPNFV Functional Testing, TOSCA Orchestration, and vIMS Use Cases appeared first on Mirantis | The Pure Play OpenStack Company.
The entire purpose of OPNFV, an open source project from the Linux Foundation that brings together the work of the various standards bodies and open source NFV projects into a single platform, is to provide a way for carriers and vendors to easily test and release virtual network functions (VNFs), and for users to understand what components will work together. That makes it especially important for the Functest team to provide appropriate test coverage.
This week Cloudify Director of Product Arthur Berezin, together with OPNFV's Morgan Richomme and Valentin Boucher of Orange Labs, spoke at the OpenStack Summit in a session titled “Project: OPNFV – Base System Functionality Testing (Functest) of a vIMS on OpenStack,” so we thought we'd take a moment to look at what that means.
About Functest
OPNFV puts a lot of emphasis on ensuring all components are fully tested and ready for production. The Functest group, specifically, is the team that tests and verifies all OPNFV Platform functionality, which covers the VIM and NFVI components.
The key objectives of the Functest project in OPNFV are to:

Define tooling for tests
Define test suites (SLA)
Install and configure the tools
Automate tests with CI
Provide API and dashboard functions for Functest and other test projects

But doing all that involves orchestration, and that involves having an appropriate tool.
Choosing an Orchestrator for Testing
The Functest team, as part of their use case testing, sought an open source orchestrator and VNF manager that could satisfy a number of different requirements:
“To manage a complex VNF, it’s necessary to use an orchestrator and we selected Cloudify because it fits all the vIMS test-case requirements (open source solution, workflow, TOSCA modeling, good integration with OpenStack components, openness with plugins…).”
To satisfy these requirements, the team chose the open source Cloudify tool.
The second OPNFV release, Brahmaputra, includes test cases for more complete platform capacity checks of the OPNFV platform to host complex VNFs. In order to truly verify that everything is working properly, however, the tests needed a use case that was sufficiently complex.
The team needed a VNF that:

Includes various components
Requires component configuration for communication between VMs
Involves a basic workflow in order to properly complete setup

The team chose Clearwater, an open source vIMS from MetaSwitch.
But what did they actually test?
vIMS Test Cases
The Functest team runs a number of different vIMS test cases, including:

Environment preparation, such as creating a user/tenant, choosing a flavor, and uploading OS images
Orchestrator deployment, including creating the Cloudify manager router, network, and VM
VNF deployment with Cloudify, including creating 7 VMs and installing and configuring software
VNF tests, including creating users and launching more than 100 tests
Pushing deployment duration and test results

If you're interested in getting more details about the test cases, you can read more on the Cloudify blog in this post contributed by the OPNFV team.

Joint Talk at OpenStack Summit
Cloudify Director of Product Arthur Berezin, together with OPNFV's Morgan Richomme and Valentin Boucher of Orange Labs, will be speaking at the OpenStack Summit in a session titled “Project: OPNFV – Base System Functionality Testing (Functest) of a vIMS on OpenStack.” The session, taking place on Wednesday, October 26 from 3:05pm-3:45pm, will include a lot more technical information about how Functest uses Cloudify within the vIMS use case from OPNFV.
The OPNFV team will be at booth D15 and Cloudify at booth C4 in the marketplace at the OpenStack Summit in Barcelona.

Source: Mirantis

First Implementation of NVMe over Fabric Support in OpenStack

The post First Implementation of NVMe over Fabric Support in OpenStack appeared first on Mirantis | The Pure Play OpenStack Company.
The fast growth of data and technology innovation is driving the emergence of a next-generation storage architecture in cloud data centers. Storage is evolving from proprietary, highly available, dual-controller custom appliances to open, software-defined, clustered/scale-out storage based on industry-standard servers.
As part of this storage evolution, NVMe SSD adoption is increasing in data centers, moving low-latency storage closer to the processor through PCIe connectivity, accessed over the NVM Express (NVMe) standard protocol. NVMe local usage is well understood, and the initial adoption of these SSDs has demonstrated performance improvements for IO-intensive applications, due to the drives' latency and throughput characteristics.
Another, less obvious progression happening in Data Centers is to disaggregate storage from compute to improve Data Center operational efficiency and increase flexibility at the rack level. Released in June 2016, the NVMe over Fabrics (NVMeoF) specification (NVMe Over Fabrics Overview) relies on RDMA to access high performance remote SSDs without sacrificing performance.
To bring this innovation more quickly to market, Intel, Mirantis and Supermicro have partnered to engineer the first implementation of NVMeoF that unlocks the value of NVMe over Fabrics for OpenStack. It may be years before we see such implementations in traditional storage systems. OpenStack provides the ideal environment to innovate quickly around NVMe and NVMeoF technologies.
Mirantis added a new Cinder driver to support the NVMe over Fabrics reference implementation based on Intel SPDK NVMeoF target software (SPDK NVMeoF Target), and modified Nova to support attaching NVMeoF volumes to VMs. The implementation was validated with a Supermicro All-Flash NVMe 2U server running the Intel SPDK NVMeoF target. This All-Flash Array (SYS-2028U-TN24R4T) supports up to 24 NVMe drives, currently housing Intel® Data Center P3700 NVMe SSDs, and Mellanox NICs for RDMA support. At the OpenStack Summit in Barcelona, October 25-28, 2016, we will be demonstrating this functionality and highlighting how we are making this available to the OpenStack community for further innovation.
Next Steps
Following the OpenStack Summit, we plan to extend this work to include upstreaming into the Ocata release and delivering performance benchmarks.
Join Us
The demonstration will be shown on Thursday, October 27, at 11:25 AM, in the Marketplace Theatre, as part of the Mirantis breakout session at OpenStack Summit in Barcelona, Spain, October 25-28, 2016.
Source: Mirantis

3 challenges businesses must overcome to reap the benefits of a hybrid cloud environment

The question “Why should businesses migrate to the cloud?” has long been answered. The tangible benefits are well documented.
Organizations are now forced to assess what type of cloud deployment best suits their needs. Be it the agility and affordability of a public cloud or the control and security of a private environment, there’s no one-size-fits-all answer, which is why it’s no surprise that 80 percent of enterprises are expected to commit to hybrid cloud architectures by 2017. But the route to hybrid cloud environment success is not without challenges. There are three steps to getting it right:
1. Select the right provider
The demand for flexibility, scalability, and security hasn't gone unheeded by hybrid cloud providers. A host of companies offer hosted and on-premises environments that enable businesses to leverage all the benefits of a public cloud service in a private cloud environment. But selecting the right provider is just one of the challenges that organizations need to overcome. Whether your business is looking to add cloud capability to its existing virtualized IT environment or to bring private cloud elements to its existing public cloud setup, compatibility with existing software and services is key. IBM Blue Box runs on the open source cloud software platform OpenStack, which means that organizations can integrate Blue Box with their existing infrastructure without the need for translation and with no downtime.
2. Clearly identify your needs
The key to successfully deploying a hybrid cloud environment is aligning the cloud strategy with the desired business outcomes. Security remains a core consideration. If an organization’s data is subject to compliance regulations, it faces heavy fines if it fails to meet them. IBM Blue Box offers robust security features including authentication and identity management. Blue Box can integrate with existing, third-party customer IdP systems to ensure control of users and two-factor authentication. That means businesses can operate safe in the knowledge that their data is safe and complies with industry regulations and data sovereignty. Data can reside in-country, or wherever you need it; Blue Box has data centers throughout the world and can deploy hosted or on-premises solutions, providing a truly hybrid model.
3. Seamlessly manage your hybrid environment
For some businesses, the thought of IT teams having to juggle the management of both a private and public cloud infrastructure is perhaps the most daunting element of adopting a hybrid environment. IBM Blue Box provisions and manages organizations’ private clouds for them, leaving them to focus more on business-critical operations.
The benefits of a hybrid cloud environment are manifold: reduced costs, scalability to meet changing business needs, the ability to shift workloads between private and public environments, and the ability to isolate and protect business-critical data quickly and easily, to name just four. The choice to migrate is simple, and with IBM Blue Box, so is navigating the challenges.
To find out more about IBM Blue Box, the OpenStack platform and learn how to ensure your business can reap the benefits of a hybrid cloud, watch our free webinar.
A version of this post originally appeared on LinkedIn.
The post 3 challenges businesses must overcome to reap the benefits of a hybrid cloud environment appeared first on news.
Source: Thoughts on Cloud

Here's What Tech Leaders Think About Trump

Silicon Valley entrepreneurs and investors spoke about Trump both onstage and to BuzzFeed News at Vanity Fair's New Establishment Summit in San Francisco.

Anne Wojcicki, CEO and cofounder of 23andMe


“I think this election has been a force in [highlighting] much bigger issues about how we think about women and immigration. It's gotten people engaged. I also think the creative energy that's come out about women — there's really the beginning of true change and true movement. And I give Trump thanks for that,” she said, smiling. Issues that affect women, such as sexual assault, were “already starting to reach crescendo,” and have now become national conversations.

Brad Barket / Getty Images

Jeff Bezos, CEO of Amazon and owner of the Washington Post


“I think the United States is incredibly robust. We're not a new democracy, we're very robust, but it is inappropriate for a presidential candidate to erode that around the edges. They should be trying to burnish it instead of erode it. And when you look at the pattern of things, it's just not going after the media and threatening retribution for people who scrutinize him, it is also saying that he may not give a graceful concession speech if he loses the election. That erodes our democracy around the edges. Saying that he might block his opponent if he wins erodes our democracy around the edges. These aren't acceptable behaviors, in my opinion.”

Alex Wong / Getty Images

Tim Draper, venture capitalist


“We have a duopoly in government and it's not working … We're just an ATM and our vote doesn't even seem to count. Washington seems to get a lot more out of California than California gets out of Washington. We have a huge problem. We need a new system. We need a third party. We're given two candidates and that's the best we can do?”

Danny Moloshok / Reuters

Chamath Palihapitiya, founder and CEO of Social Capital


“The short-term impacts [on the stock market if Trump is elected] are probably overstated and the long-term impacts are probably underestimated. Most of us who have public market exposure are getting an emotive risk off going into November 8th, and so a lot of the volatility is going to be short term and relatively muted if he wins. I just think you have to take a bigger step back and say: It’s like you’re just repudiating all the good things that make America awesome — and the long-term implications of that. People like us, people like me — I immigrated to this country and I pour enormous amounts of capital, I pay enormous amounts of taxes. I want to be here, I want to help this team win.”

Mike Windle / Getty Images



Source: BuzzFeed

Tieto’s path to containerized OpenStack, or How I learned to stop worrying and love containers

The post Tieto's path to containerized OpenStack, or How I learned to stop worrying and love containers appeared first on Mirantis | The Pure Play OpenStack Company.
Tieto is a leading cloud service provider in Northern Europe, with over 150 cloud customers in the region and revenues in the neighborhood of €1.5 billion (with a “b”). So when the company decided to take the leap into OpenStack, it was a decision that wasn't taken lightly, or made without very strict requirements.
Now, we've been talking a lot about containerized OpenStack here at Mirantis lately, and at the OpenStack Summit in Barcelona, our Director of Product Engineering will join Tieto's Cloud Architect Lukáš Kubín to explain the company's journey from a traditional architecture to a fully adaptable cloud infrastructure. So we wanted to take a moment and ask the question:
How does a company decide that containerized OpenStack is a good idea?
What Tieto wanted
At its heart, Tieto wanted to deliver a bimodal multicloud solution that would help customers digitize their businesses. In order to do that, it needed an infrastructure in which it could have confidence, and OpenStack was chosen as the platform for cloud native application delivery. The company had the following goals:

Remove vendor lock-in
Achieve the elasticity of a seamless on-demand capacity fulfillment
Rely on robust automation and orchestration
Adopt innovative open source solutions
Implement Infrastructure as Code

It was this last item, implementing Infrastructure as Code, that was perhaps the biggest challenge from an OpenStack standpoint.
Where we started
In fact, Tieto had been working with OpenStack since 2013, creating internal projects to evaluate OpenStack Havana and Icehouse using internal software development projects; at that time, the target architecture included Neutron and Open vSwitch. 
By 2015, the company was providing scale-up focused IaaS cloud offerings and unique application-focused PaaS services, but what was lacking was a shared platform with full API controlled infrastructure for horizontally scalable workload.
Finally, this year, the company announced its OpenStack Cloud offering, based on the OpenStack distribution of tcp cloud (now part of Mirantis), and OpenContrail rather than Open vSwitch.
Why OpenContrail? The company cited several reasons:

Licensing: OpenContrail is an open source solution, but commercial support is available from vendors such as Mirantis.
High Availability: OpenContrail includes native HA support.
Cloud gateway routing: North-South traffic must be routed on physical edge routers instead of software gateways to work with existing solutions
Performance: OpenContrail provides excellent pps, bandwidth, scalability, and so on (up to 9.6 Gbps)
Interconnection between SDN and Fabric: OpenContrail supports dynamic legacy connections through EVPN or ToR switches
Containers: OpenContrail includes support for containers, making it possible to use one networking framework for multiple environments.

Once completed, the Tieto Proof of Concept cloud included:

OpenContrail 2.21
20 compute nodes
Glance and Cinder running on Ceph
Heat orchestration

Tieto had achieved Infrastructure as Code, in that deployment and operations were controlled through OpenStack Salt formulas. This architecture enabled the company to use DevOps principles, in that they could use declarative configurations that could be stored in a repository and re-used as necessary.
What's more, the company had an architecture that worked, and that included commercial support for OpenContrail (through Mirantis).
But there was still something missing.
What was missing
With operations support and Infrastructure as Code, Tieto's OpenStack Cloud was already beyond what many deployments ever achieve, but it still wasn't as straightforward as the company would have liked.
As designed, the OpenStack architecture consisted of almost two dozen VMs on at least 3 physical KVM nodes – and that was just the control plane!

As you might imagine, trying to keep all of those VMs up to date through operating system updates and other changes made operations more complex than they needed to be. Any time an update needed to be applied, it had to be applied to each and every VM. Sure, that process was easier because of the DevOps advantages introduced by the OpenStack-Salt formulas that were already in the repository, but that was still an awful lot of moving parts.
There had to be a better way.
How to meet that challenge
That “better way” involves treating OpenStack as a containerized application in order to take advantage of the efficiencies this architecture enables, including:

Easier operations, because each service no longer has its own VM, with its own operating system to worry about
Better reliability and easier manageability, because containers and Dockerfiles can be tested as part of a CI/CD workflow
Easier upgrades, because once OpenStack has been converted to a microservices architecture, it's much easier to simply replace one service
Better performance and scalability, because the containerized OpenStack services can be orchestrated by a tool such as Kubernetes.

So that's the “why”. But what about the “how”? Well, that's a tale for another day, but if you'll be in Barcelona, join us at 12:15pm on Wednesday to get the full story and maybe even see a demo of the new system in action!
Source: Mirantis

Webinar Recap: Docker for Windows Server 2016

Last week, we held our first webinar on “Docker for Windows Server 2016” to a record number of attendees, showcasing the most exciting new Windows Server 2016 feature: containers powered by the Commercially Supported Docker Engine.
Docker CS Engine and containers are now available natively on Windows and supported by Microsoft, with Docker's Commercially Supported (CS) Engine included in Windows Server 2016. Now developers and IT pros can begin the same transformation for Windows-based apps and infrastructure to reap the benefits they've seen with Docker for Linux: enhanced security, agility, and improved portability, along with the freedom to run applications on bare metal, virtual, or cloud environments.
Watch the on-demand webinar to learn more about the technical innovations that went into making Docker containers run natively on Windows and how to get started.
Webinar: Docker for Windows Server 2016

Here are just a few of the most frequently asked questions from the session.  We’re still sorting through the rest and will post them in a follow up blog.
Q: How do I get started?
A: Docker and Microsoft have worked to make getting started simple. We have some great resources to get you started, whether you're a developer or an IT pro:

Complete the Docker for Windows Containers Lab on GitHub
Read the blog: Build And Run Your First Docker Windows Server Container
View the images in Docker Hub that Microsoft has made available to the community to start building Windows containers: https://hub.docker.com/r/microsoft/
Get started converting existing Windows applications to Docker containers:

Read the blog: Image2Docker: A New Tool For Prototyping Windows VM Conversions
Register for the webinar on October 25th at 10AM PST – Containerize Windows workloads with the Image2Docker Tool

Q: How is Docker for Windows Server 2016 licensed?
A: Docker CS Engine comes included at no additional cost with Windows Server 2016 Datacenter, Standard, and Essentials editions with support provided by Microsoft and backed by Docker. Support is provided in accordance with the selected Windows Server 2016 support contract with available SLAs and hotfixes and full support for Docker APIs.
Q: Is there a specific Windows release that supports Docker for development?
A: You can get started using Windows 10 Anniversary Edition by installing Docker for Windows (direct link for  public beta channel) or by downloading and installing Windows Server 2016. You can also get started using Azure.
To learn more about how to get started, read our blog: Build And Run Your First Docker Windows Server Container or get started with the Docker for Windows Containers Lab on GitHub.
Q: Windows has Server Core and Nano Server base images available. What should I use?
A: Windows Server Core is designed for backwards compatibility. It is a larger base image but has the things you need so your existing applications are able to run in Docker. Nano Server is slimmer and is best suited for new applications that don’t have legacy dependencies.
For more resources:

Learn more: www.docker.com/microsoft
Read the blog: Top 5 Docker Questions From Microsoft Ignite
Learn more about the Docker and Microsoft partnership
Read the blog:  Introducing Docker For Windows Server 2016


The post Webinar Recap: Docker for Windows Server 2016 appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Total Cost of Ownership: AWS TCO vs OpenStack TCO Q&A

The post Total Cost of Ownership: AWS TCO vs OpenStack TCO Q&A appeared first on Mirantis | The Pure Play OpenStack Company.
Last month, Amar Kapadia led a lively discussion about the Total Cost of Ownership of OpenStack clouds versus running infrastructure on Amazon Web Services.  Here are some of the questions we got from the audience, along with the answers.
Q: Which AWS cost model do you use? Reserved? As you go?
A: Both. We have a field that can say what % are reserved, and what discount you are getting on reserved instances. For the webinar, we assumed 30% reserved instances at 32% discount. The rest are pay-as-you-go.
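As a rough sketch of how the calculator blends those rates (the 30% reserved share and 32% discount are the webinar assumptions; the $0.10/hour on-demand rate below is a made-up placeholder, not a real AWS price):

```python
def blended_hourly_rate(on_demand_rate, reserved_share=0.30, reserved_discount=0.32):
    # Weighted average of discounted reserved instances and
    # pay-as-you-go instances.
    reserved_rate = on_demand_rate * (1 - reserved_discount)
    return reserved_share * reserved_rate + (1 - reserved_share) * on_demand_rate

print(round(blended_hourly_rate(0.10), 4))  # 0.0904
```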
Q: How does this comparison look when considering VMware's newly announced support for OpenStack? Is that OpenStack support with VMware only with regards to supporting OpenStack in a “Hybrid Cloud” model? Please touch on this additional comparison. Thanks.
A: In general, a VMware Integrated OpenStack (VIO) comparison will look very different (and show a much higher cost) because they support only vSphere.
Q: Can Opex be detailed per the needs of the customer? For example, what if the customer doesn't want an IT/Ops team and datacenter fees included, because they would provide their own?
A: Yes, please contact us if you would like to customize the calculator for your needs.
Q: Do you have any data on how Opex changes with the scale of the system?
A: It scales linearly. Most of the Opex costs are variable costs that grow with scale.
Q: What parameters were defined for this comparison, and were the results validated by any third party, or just by user/organization experience?
A: The parameters are in the slide. Since there is so much variability in customers' environments, we don't think a formal third-party validation makes sense. So the validation is really through 5-10 customers.
Q: How realistic is it to estimate IT costs? Size of company, size of deployment, existing IT staff (both firing and hiring), each of these will have an impact on the cost for IT/OPs teams.
A: The calculator assumes a net new IT/Ops team. It's not linked to the company size, but rather the OpenStack cloud size. We assume a minimum team size of about 3.5 people and linear growth after that as your cloud scales.
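That staffing assumption can be sketched as a simple model (the 3.5-person floor comes from the answer above; the nodes-per-engineer ratio is an invented placeholder, not a Mirantis figure):

```python
def ops_team_size(compute_nodes, min_team=3.5, nodes_per_engineer=50):
    # Minimum team of ~3.5 FTEs, then linear growth with cloud size.
    return max(min_team, compute_nodes / nodes_per_engineer)

print(ops_team_size(54))   # 3.5 (small cloud: the floor applies)
print(ops_team_size(500))  # 10.0 (larger cloud: linear growth)
```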
Q: Shouldn't sparing add more to the cost, since you will need more hardware for high availability?
A: Yes, sparing is included.
Q: AWS recommends targeting 90% utilization; if you are at 60%, it's better to downgrade the VM to get back to 90% utilization. In the case of provisioning 2500 VMs with autoscaling, this should help.
A: Great point, however, we see a large number of customers who do not do this, or do not even know what percentage of their VMs are underutilized. Some customers even have zombie VMs that are not used at all, but they are still paying for them.
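The arithmetic behind that advice can be sketched as follows; the hourly rate and the 90% target are illustrative placeholders:

```python
def monthly_waste(hourly_rate, utilization, target=0.90, hours_per_month=730):
    # Spend above what a right-sized instance (running at the target
    # utilization) would cost for the same actual load.
    monthly_cost = hourly_rate * hours_per_month
    needed_fraction = utilization / target
    return monthly_cost * (1 - needed_fraction)

# A VM at 60% utilization wastes a third of its spend versus a right-sized one:
print(round(monthly_waste(0.10, 0.60), 2))  # 24.33
```

A zombie VM (utilization 0) wastes its entire monthly cost under this model, which is exactly the case the answer above describes.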
Q: Under the hypothesis that all applications can be “containerized”, will the comparison outcomes remain the same?
A: We don't have this yet, but a private cloud will turn out to have a much better TCO. The reason is that we believe private clouds can run containers on bare metal, while public clouds have to run containers in VMs for security reasons. So a private cloud will be a lot more efficient.
Q: This is interesting. Can you please add replication cost? This is what AWS does free of cost within an availability zone. In the case of OpenStack, we need to take care of replication.
A: I assume you mean for storage. Yes, we already include a 3x factor to convert from raw storage to usable storage, to account for (3-way) replication.
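In other words, with 3-way replication the raw capacity requirement is simply three times the usable capacity, as in this small sketch:

```python
REPLICATION_FACTOR = 3  # 3-way replication, per the answer above

def raw_from_usable(usable_gb, factor=REPLICATION_FACTOR):
    # Raw disk you must purchase to get a given amount of usable,
    # replicated storage.
    return usable_gb * factor

def usable_from_raw(raw_gb, factor=REPLICATION_FACTOR):
    return raw_gb / factor

print(raw_from_usable(1080))  # 3240 GB of raw disk for 1080 GB usable
```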
Q: Just wondering how secure is the solution as you have mentioned for a credit card company? AWS is PCI DSS certified.
A: Yes this solution is PCI certified.
Q: Has this TCO calculator been validated against a real customer workload?
A: Yes, 5-10 customers have validated this calculator.
Q: Do you think that these costs apply to other countries, or is this US-based?
A: These calculations are US based. Both AWS and private cloud costs could go up internationally.
Q: Hi, thank you for your time in this webinar. How many servers (compute, controller, and storage nodes) are you using, and which model do you use for your calculations? Thanks.
A: The node count is variable. For this webinar, we assumed 54 compute nodes, 6 controllers, and 1080GB of block storage. We assumed commodity Intel and SuperMicro hardware with 3 year warranty.
Q: Can we compare different models, such as AWS vs VMware private cloud/public cloud with another vendor (not AWS)?
A: These require customizations. Please contact us.
Source: Mirantis

Docker Weekly Roundup | October 9, 2016

 

It's time for your weekly roundup! Get caught up on the top news, including expansion into China through a commercial partnership with Alibaba Cloud, the announcement of DockerCon 2017, and information on the upcoming Global Mentor Week. As we begin a new week, let's recap the top five most-read stories of the week of October 9, 2016:

Alibaba Cloud Partnership: Docker expands into the China market through a new partnership with the Alibaba Group, the world's largest retail commerce group. The focus of the partnership is to provide a China-based Docker Hub, enable Alibaba to resell Docker's commercial offerings, and create a “Docker For Alibaba Cloud”.

DockerCon 2017: a three-day conference organized by Docker. This year's US edition will take place in Austin, TX and continue to build on the success of previous events as it grows and reflects Docker's established ecosystem and ever-growing community.

Global Mentor Week: a global event series aimed at providing Docker training to both newcomers and intermediate users. Participants will work through self-paced labs that will be available through an online Learning Management System (LMS). There will be different labs for different skill levels, for developers, ops, and Linux and Windows users.

Docker on Windows: check out this blog post from Elton Stoneman with three tips for setting a solid foundation and improving the Docker on Windows experience.

SQL Server 2016: SQL Server 2016 became publicly available this week, and SQL Server 2016 Express Edition in Windows Containers is now available on Docker Hub. In addition, the build scripts are hosted on the SQL Server Samples GitHub repository, and the image can be used in both Windows Server Containers as well as Hyper-V Containers.


The post Docker Weekly Roundup | October 9, 2016 appeared first on Docker Blog.
Source: https://blog.docker.com/feed/