Announcing the new Docs Repo on GitHub!

By John Mulhausen
The documentation team at Docker is excited to announce that we are consolidating all of our documentation into a single GitHub Pages-based repository on GitHub.
When is this happening?

The new repo is public now at https://github.com/docker/docker.github.io.
During the week of Monday, September 26th, any existing docs PRs need to be migrated over or merged.
We’ll do one last “pull” from the various docs repos on Wednesday, September 28th, at which time the docs/ folders in the various repos will be emptied.
Between the 28th and full cutover, the docs team will be testing the new repo and making sure all is well across every page.
Full cutover (production is drawing from the new repo, new docs work is pointed at the new repo, dissolution of old docs/ folders) is complete on Monday, October 3rd.

The problem with the status quo

Up to now, the docs have all lived inside the various project repos, in folders named “docs/” — and getting the docs running on your local machine was a pain.
The docs were built with Hugo, which is not natively supported by GitHub; they took minutes to build and even longer for us to deploy.
Even worse than all that, having the docs siloed by product meant that cross-product documentation was rarely worked on, and things like reusable partials (includes) weren’t being taken advantage of. It was difficult to have visibility into what constituted “docs activity” when pull requests pertained to both code and docs alike.

Why this solution will get us to a much better place

All of the documentation for all of Docker’s projects will now be open source!
It will be easier than ever to contribute to and stage the docs. You can use GitHub Pages’ *.github.io spaces, install Jekyll and run our docs, or just run a Docker command:
git clone https://github.com/docker/docker.github.io.git docs
cd docs
docker run -ti --rm -v "$PWD":/docs -p 4000:4000 docs/docstage
Doc releases can be done with milestone tags and branches that are super easy to reference, instead of cherry-picked pull requests (PRs) from several repos. If you want to use a particular version of the docs, in perpetuity, it will be easier than ever to retrieve them, and we can offer far more granularity.
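As a rough sketch of what pinning to a docs release could look like (the tag name used here is hypothetical, shown only for illustration):

```shell
# Clone the consolidated docs repo
git clone https://github.com/docker/docker.github.io.git docs
cd docs

# List the available milestone tags, then check one out to
# pin your local copy to that docs release
git tag --list
git checkout v1.12    # hypothetical tag name
```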
Any workflows that require users to use multiple products can be modeled and authored easily, as authors will only have to deal with a single point of reference.
The ability to have “includes” (such as reusable instructions, widgets that enable docs functionality, etc) will be possible for the first time.

What does this mean for open source contributors?
Open source contributors will need to create both a code PR and a docs PR, instead of having all of the work live in one PR. We’re going to work to mitigate any inconvenience:

Continuous integration tests will eventually be able to spot when a code PR is missing docs and provide in-context, useful instructions at the right time that guide contributors on how to spin up a docs PR and link it to the code PR.
We are not going to enforce that a docs PR has to be merged before a code PR is merged, just that a docs PR exists. That means we should be able to merge your code PR as quickly as in the past, if not more quickly.
We will leave README instructions in the repos under their respective docs/ folders that point people to the correct docs repo.
We are adding “edit this page” buttons to every page on the docs so it will be easier than ever to locate what needs to be updated and fix it, right in the browser on GitHub.

We welcome contributors to get their feet wet, start looking at our new repo, and propose changes. We’re making it easier than ever to edit our documentation!
The post Announcing the new Docs Repo on GitHub! appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

One cloud to rule them all — or is it?

The post One cloud to rule them all — or is it? appeared first on Mirantis | The Pure Play OpenStack Company.
So you’ve sold your organization on private cloud.  Wonderful!  But to get that ROI you’re looking for, you need to scale quickly and get paying customers from your organization to fund your growing cloud offerings.
It’s the typical Catch-22 situation when trying to do something on the scale of private cloud: You can’t afford to build it without paying customers, but you can’t get paying customers without a functional offering.
In the rush to break the cycle, you onboard more and more customers.  You want to reach critical mass and become the de-facto choice within your organization.  Maybe you even have some competition within your organization you have to edge out.  Before long you end up taking anyone with money.  
And who has money? In the enterprise, more often than not it’s the bread and butter of the organization: the legacy workloads.
Promises are made.  Assurances are given.  Anything to onboard the customer.  “Sure, come as you are, you won’t have to rewrite your application; there will be no/minimal impact to your legacy workloads!”
But there’s a problem here. Legacy workloads — that is, those large, vertically scaled behemoths that don’t lend themselves to “cloud native” principles — present both a risk and an opportunity when growing your private cloud, depending on how they are handled.
(Note: Just because a workload has been virtualized does not make it “cloud-native”. In fact, many virtualized workloads, even those implemented using SOA (service-oriented architecture), will not be cloud native. We’ll talk more about classifying, categorizing, and onboarding different workloads in a future article.)
“Legacy” cloud vs “Agile” cloud
The term “legacy cloud” may seem like a bit of an oxymoron, but hear me out. For years, surveys that ask people about their cloud use have had to include responses from people who considered vSphere a cloud, because the line between cloud and virtualization is largely irrelevant to most people.
Or at least it was, when there wasn’t anything else.
But now there’s a clear difference. Legacy cloud is geared towards these legacy workloads, while agile cloud is geared toward more “cloud native” workloads.
Let’s consider some example distinctions between a “Legacy Cloud” and an “Agile Cloud”. This table shows some of the design trade-offs between environments built to support legacy workloads versus those built without those restrictions:

Legacy Cloud: No new features/updates (platform stability emphasis), or very infrequent, limited, and controlled updates
Agile Cloud: Regular/continuous deployment of the latest and greatest features (platform agility emphasis)

Legacy Cloud: Live migration support (redundancy in the platform instead of in the app); DRS (in the case of ESXi hypervisors managed by VMware)
Agile Cloud: Highly scalable and performant local storage, with the ability to support other performance-enhancing features such as huge pages; none of live migration’s security and operational burdens

Legacy Cloud: VRRP for Neutron L3 router redundancy
Agile Cloud: DVR for network performance and scalability; apps built to handle failure of individual nodes

Legacy Cloud: LACP bonding for compute node network redundancy
Agile Cloud: SR-IOV for network performance; apps built to handle failure of individual nodes

Legacy Cloud: Bring your own (specific) hardware
Agile Cloud: Shared, standard hardware (white boxes), defrayed with tenant chargeback policies

Legacy Cloud: ESXi hypervisor or bare metal as a service (Ironic) to insulate the data plane, and/or separate controllers to insulate the control plane
Agile Cloud: OpenStack reference KVM deployment

A common theme here is features that force you to choose between designing for performance and scalability (such as Neutron DVR) and designing for HA and resiliency (such as VRRP for Neutron L3 agents).
It’s one or the other, so introducing legacy workloads into your existing cloud can conflict with other objectives, such as increasing development velocity.
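To make the trade-off concrete, here is a rough sketch of how an administrator might create a router on each side of that divide; the --ha and --distributed flags are standard (admin-only) OpenStack CLI options, but whether each mode actually works depends on how Neutron is configured in your deployment:

```shell
# Legacy-style router: HA via VRRP (keepalived) across network nodes
openstack router create --ha legacy-router

# Agile-style router: DVR pushes routing onto the compute nodes
# for scale and performance
openstack router create --distributed agile-router
```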
So what do you do about it?
If you find yourself in this situation, you basically have three choices:

Onboard tenants with legacy workloads and force them to potentially rewrite their entire application stack for cloud
Onboard tenants with legacy workloads into the cloud and hope everything works
Decline to onboard tenants/applications that are not cloud-ready

None of these are great options. You want workloads to run reliably, but you also want to make the onboarding process easy, without imposing large barriers to entry for tenants’ applications.
Fortunately, there’s one more option: split your cloud infrastructure according to the types of workloads, and engineer a platform offering for each. Now, that doesn’t necessarily mean a separate cloud.
The main idea is to architect your cloud so that you can provide a legacy-type environment for legacy workloads without compromising your vision for cloud-aware applications. There are two ways to do that:

Set up a separate cloud with an entirely new control plane for the associated compute capacity. This option offers complete decoupling between workloads, and allows changes, updates, and upgrades to be confined to the other environments without exposing legacy workloads to that risk.
Use compute nodes such as ESXi hypervisor or bare metal (e.g., Ironic) for legacy workloads. This option maintains a single OpenStack control plane while still helping isolate workloads from OpenStack upgrades, disruptions, and maintenance activities in your cloud.  For example, ESXi networking is separate from Neutron, and bare metal is your ticket out of being the bad guy for rebooting hypervisors to apply kernel security updates.

Keep in mind that these aren’t mutually exclusive options; it is possible to do both.  
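One way to carve out dedicated capacity for legacy workloads without standing up a second control plane is to combine host aggregates with flavor extra specs, so the Nova scheduler only places matching instances on the hardware engineered for them. A rough sketch, with illustrative names, assuming the AggregateInstanceExtraSpecsFilter is enabled in the Nova scheduler:

```shell
# Group the hypervisors reserved for legacy workloads into an aggregate
openstack aggregate create --property workload=legacy legacy-hosts
openstack aggregate add host legacy-hosts compute-01

# Create a flavor whose extra spec steers instances onto that aggregate
openstack flavor create --vcpus 8 --ram 16384 --disk 100 legacy.large
openstack flavor set \
  --property aggregate_instance_extra_specs:workload=legacy legacy.large
```

Tenants who boot from the legacy flavor then land only on the fenced-off hosts, while everything else uses the general pool.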
Of course, each option comes with its own downsides as well; an additional control plane involves additional overhead (to build and operate), and running a mixed-hypervisor environment has its own set of engineering challenges, complications, and limitations. Both options also add overhead when it comes to repurposing hardware.
There’s no instant transition
Many organizations get caught up in the “One Cloud To Rule Them All” mentality, trying to make everything the same and work with a single architecture to achieve the needed economies of scale, but ultimately the final decision should be made according to your situation.
It’s important to remember that no matter what you do, you will have to deal with a transition period, which means you need to provide a viable path for your legacy tenants/apps to gradually make the switch. But first, assess your situation:

If your workloads are all of the same type, then there’s not a strong case to offer separate platforms out of the gate.  Or, if you’re just getting started with cloud in your organization, it may be premature to do so; you may not yet have the required scale, or you may be happy with onboarding only those applications which are cloud ready.
When you have different types of workloads with different needs — for example, Telco/NFV vs. Enterprise/IT vs. BigData/IoT workloads — you may want to think about different availability zones inside the same cloud, so the specific nuances of each type can be addressed inside its own zone while maintaining a single perspective on cloud configuration, lifecycle management, and service assurance, including having similar hardware. (Having similar hardware makes it easier to keep spares on hand.)
If you find yourself in a situation where you want to innovate with your cloud platform, but you still need to deal with legacy workloads with conflicting requirements, then workload segmentation is highly advisable. In this case, you’ll probably want to break from the “One Cloud” mentality in favor of the flexibility of multiple clouds. If you try to satisfy both your “innovation” mindset and your legacy workload holders on one cloud, you’ll likely disappoint both.

After making this choice, you may then plan your transition path accordingly.
Moving forward
Even if you do create a separate legacy cloud, you probably don’t want to maintain it in perpetuity. Think about your transition strategy; a basic and effective carrot-and-stick approach is to limit new features and cloud-native functionality to your agile cloud, and to bill/chargeback at higher rates in your legacy cloud (rates that are, at any rate, justified by the costs incurred to provide and support this option).
Whatever you ultimately decide, the most important thing to do is make sure you’ve planned it out appropriately, rather than just going with the flow, so to speak. If you need to, contact a vendor such as Mirantis; they can help you do your planning and get to production as quickly as possible.
Source: Mirantis

Six reasons OpenStack fails (Hint: it’s not the technology)

The post Six reasons OpenStack fails (Hint: it’s not the technology) appeared first on Mirantis | The Pure Play OpenStack Company.
We know OpenStack is hard. But why?
Earlier this month, Christian Carrasco gave a keynote address at the OpenStack Days Silicon Valley conference, discussing six factors behind OpenStack evaluation and deployment failure — and how to solve those problems. As Cloud Advisor at Tapjoy, Carrasco is architecting a 550-million-user cloud based on both private and public resources. He also has a history as CTO of a private cloud hardware company and two other startups focused on cloud technologies, so he brought a lot of insight into why deployments fail, pinpointing six primary points of failure.
Lesson 1: Leave the dogma at home
Dogmatic views, or beliefs accepted as fact without doubt, are blinders, he explained, and they can come from a variety of sources, including bad previous experiences with technology. For example, Carrasco’s experience with OpenStack five years earlier had been a negative experience because the platform just wasn’t ready. Fast-forward five years to 2016, and it’s now rock-solid for many applications, including his.
However, if dogma had prevailed, trying OpenStack again might have been out of the running. Keep in mind, though, that rebranding old technology as new, or new technology as old, or even rebranding fake technology as its legitimate counterpart can lead to a poor experience that gets associated with that real technology. (See Lesson 5.)
Lesson 2: Fear, doubt, uncertainty, and doom (aka FUDD) can cause problems
Remember when Linux was first launched? If you do, then you probably also remember a proliferation of scare tactics. Your world will end if you use Linux! Nothing will work! Licensing is too confusing! Cats and dogs, living together, total chaos!
OpenStack has seen the same kind of FUDD. Every year, independent publications, public entities, and skewed statistical reports predict the death of OpenStack. And yet, OpenStack keeps on keeping on, taking over the private cloud market.
Lesson 3: Find the right distribution
The third reason Carrasco covered was the “You picked the wrong trunk” scenario. The latest version of open source software such as OpenStack is called the “trunk”, a base repository of code. The thing about trunk is that it requires lots of tweaking, and the modules aren’t always in tune with each other. Community Linux trunks can have some configurations tweaked but not all, and it still requires a level of expertise, so deploying from trunk is not for less-experienced engineers.
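To give a sense of what deploying from trunk involves, the typical starting point for a test environment is DevStack, which builds a cloud from the latest upstream code. Even this minimal sketch assumes a dedicated, disposable machine, and leaves all the module tuning Carrasco describes to the operator:

```shell
# Clone DevStack, which installs OpenStack from upstream master ("trunk")
git clone https://git.openstack.org/openstack-dev/devstack
cd devstack

# Minimal configuration; a production-grade deployment needs far more tweaking
cat > local.conf <<EOF
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
EOF

./stack.sh
```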
Lesson 4: You are not a full-stack engineer
In today’s world, where personnel often have to fulfill multiple roles, many engineers are being told they have to be “full-stack engineers.”
Carrasco, who has worked the full stack and still says he’s not a full-stack engineer, believes full-stack engineers are myths, and he makes a great argument for his belief. It’s really hard to be a full-stack engineer, he says, because you have to be proficient in every realm of the stack — and it’s not just the software. Just being proficient in the software stack is difficult, but when you throw in the hardware side, as well as networking, security, and so on, being an expert in everything is a monumental, if not impossible, task.
Organizations need to be aware of the skillset of the people leading the OpenStack deployment and be sure they’ve got all of their bases covered.
Lesson 5: You thought OpenStack was a better buggy for your horse
OpenStack isn’t necessarily a better buggy, or a cheaper method of doing something, or the open source way of doing something. Carrasco says it’s more of a paradigm shift, a new methodology that is still evolving, in the way data centers operate. And the reality is that sometimes this methodology isn’t ideal for traditional businesses.
Lesson 6: You didn’t have a sufficient team
While the rumors that you need dozens of experts to successfully deploy OpenStack are an exaggeration, you’re likely to have problems if you try to deploy it alone, or with a very small team that isn’t ready to deploy data center technology.
If you need help with your OpenStack deployment, there are plenty of options available for design, architecture, and verification of your stack, from automated tools to semi- and fully-managed services.
Along the same lines, some companies aren’t really ready for OpenStack yet, and it may not be economically feasible for a small company to hire a cloud team, purchase hardware, and rack up costs.
On the other hand, some companies lend themselves well to deployment, such as companies that were born online, are making the move to online, or are ready to stop using buggies and commit to moving to the next generation of cloud.
OK, so what do I do about it?
Carrasco offers two major solutions to help prevent OpenStack deployment failure.
The first thing Carrasco asks companies he advises is “Where is your Cloud Officer?” If you’ve made a multi-million dollar investment in your cloud and it’s a side project of some other team in your company, that’s not a recipe for success. “What happens to clouds that become orphaned?” he asks. “They become security risks. They become a headache. Nobody wants to work with them, and they vanish.” There needs to be real ownership for your cloud to succeed, and a Cloud Officer will protect your cloud, prevent vendor lock-in, and bring the cloud in line with the organization’s initiatives.
The second solution he suggested is all about vendors. Despite the open nature and coopetition of OpenStack, according to Carrasco, the status quo consists of both public and private vendors fiercely guarding their territory and coming up with creative ways to lock users into their service or their cloud technology, etc., and few companies are creating ways to enable outside operability.
Carrasco’s ultimate vision for a solution is to adopt what he calls a hyper-converged cloud. In this architecture, you have your cloud and your assets powered by multiple vendors — whichever ones you choose to power your cloud. This structure has the added advantage of opening possibilities for niche providers of services not offered by private or public clouds.
The point is not about technology, but about people being able to own their assets. Carrasco is instituting this concept successfully now at Tapjoy, but for this concept to work, interoperability standards are key. Oh, and to those who’d say it’s too late for standards, Carrasco points to market research that shows cloud technology is still a tiny speck on the radar when compared to the market share of other tech industries.
So stop trying to make a better buggy, Carrasco says, and focus on making the next-generation cloud.
You can see the entire speech on the OpenStack Days Silicon Valley website.
Source: Mirantis

Six DevOps myths and the realities behind them

The post Six DevOps myths and the realities behind them appeared first on Mirantis | The Pure Play OpenStack Company.
At OpenStack Days Silicon Valley 2016, Puppet Founder and CEO Luke Kanies dispelled the six most common misconceptions he’s encountered that prevent organizations from adopting and benefiting from DevOps.

Over a five-year period, Puppet conducted market research of 25,000 people that shows the adoption of DevOps is critical to building a great software company. Unfortunately, however, many companies find that the costs of the cultural change are too high. The result is that these firms often fail to become great software companies — sometimes because even though they try to adopt the DevOps lifestyle, they do it in such a way that the change doesn’t deliver enough real value, because the changes don’t go deep enough.

You see, all companies are becoming software companies, Kanies explained, and surveys have shown that success requires optimization of end-to-end software production. Organizations that move past barriers to change and go from the old processes to the new way of using DevOps tools and practices will be able to make the people on their team happy, spend more time on creating value rather than on rework, and deliver software faster.

Key points in the 2016 State of DevOps Report survey show that high-performing teams deploy 200 times more frequently than average teams, with over 2,500 times shorter lead times, so the time between idea and production is minimal. Additionally, these teams see failure rates that are three times lower than their non-DevOps counterparts, and they recover 24 times faster. The five-year span of the survey has also shown that the distance between top performers and average performers is growing.

In other words, the cost of not adopting DevOps processes is also growing.

Despite these benefits, however, for every reason to adopt DevOps, there are plenty of myths and cultural obstacles that hold organizations back.
Myth 1: There’s no direct value to DevOps
The first myth Kanies discussed is that there’s no direct customer or business value for adopting DevOps practices. After all, how much good does it do customers to have teams deploying 200 times more frequently?

Quite a lot, as it happens. DevOps allows faster delivery of more reliable products and optimizes processes, which results in developing software faster. That means responding to customer problems more quickly, as well as drastically slashing time to market for new ideas and products. This increased velocity means more value for your business.
Myth 2: There’s no ROI for DevOps in the legacy world
The second myth, that there’s no return on investment in applying DevOps to legacy applications, is based on the idea that DevOps is only useful for new technology. The problem with this view, Kanies says, is that the majority of the world still runs in legacy environments, effectively ruling out most of the existing IT ecosystem.

There are really good reasons not to ignore this reality when planning your DevOps initiatives. The process of DevOps doesn’t have to be all-or-nothing; you can make small changes to your process and make a significant difference, removing manual steps, and slow, painful, and error-prone processes.

What’s more, in many cases, you can’t predict where returns will be seen, so there’s value in working across the entire organization. Kanies points out that it makes no sense to only utilize DevOps for the new, shiny stuff that no one is really using yet and neglect the production applications that users care about — thus leaving them operating slowly and poorly.
Myth 3: Only unicorns can wield DevOps
Myth number three is that DevOps only works with “unicorn” companies and not traditional enterprise. Traditional companies want assurances that DevOps solutions and benefits work for their very traditional needs, and not just for new, from-scratch companies.

Kanies points out that DevOps is the new normal, and no matter where organizations are in the maturity cycle, they need to be able to figure out how to optimize the entire end-to-end software production, in order to gain the benefits of DevOps: reduced time to market, lower mean time to recovery, and higher levels of employee engagement.
Myth 4: You don’t have enough time or people
The fourth myth is that improvement via DevOps requires spare time and people the organization doesn’t have. At the root of this myth are two realities: no matter what you do, software must be delivered faster and more often, and costs must be maintained or decreased. Organizations don’t see how to do this — especially if they take time to retool to a new methodology.

But DevOps is about time reclamation. First, it automates many tasks that computers can accomplish faster and more reliably than an overworked IT engineer. That much is obvious.

But there’s a second, less obvious way that DevOps enables you to reclaim time and money. Studies have shown that on average, SREs, sysadmins, and so on get interrupted every fifteen minutes — and that it takes about thirty minutes to fully recover from an interruption. This means many people have no time to spend hours on a single, hard problem because they constantly get interrupted. Recognizing this problem and removing the interruptions can free up time for more value-added activity and free up needed capacity in the organization.
Myth 5: DevOps doesn’t fit with regulations and compliance
Myth number five comes from companies subject to regulation and compliance who believe this precludes adoption of DevOps. However, with better software, faster recovery, faster deployments, and lower error rates, you can automate compliance as well. Organizations can integrate all of the elements of software development with auditing, security, and compliance to deliver higher value, and in fact, if these aren’t all done at once, companies are more than likely to experience a failure of some sort.
Myth 6: You don’t really need it
Kanies says he hasn’t heard the sixth myth often, but once in a while, a company concludes it doesn’t have any problems that adopting DevOps would fix. But DevOps is really about being good at getting better, moving faster, and eliminating the more frustrating parts of the work, he explains.

The benefits of adopting DevOps are clear from Kanies’ points and from the data presented by the survey. As he says, the choice is really about whether to invest in change or to let your competitors do it first. Because the top performers are pulling ahead faster and faster, Kanies says, and “organizations don’t have a lot of time to make a choice.”

You can hear the entire talk on the OpenStack Days Silicon Valley site.
Source: Mirantis

How does the world consume private clouds?

The post How does the world consume private clouds? appeared first on Mirantis | The Pure Play OpenStack Company.
In my previous blog, “Why the world needs private clouds,” we looked at ten reasons for considering a private cloud. The next logical question is how a company should go about building a private cloud.
In my view, there are four consumption models for OpenStack. Let’s look at each approach and then compare.

Approach 1: DIY
For the most sophisticated users, where OpenStack is super-strategic to the business, a do-it-yourself approach is appealing. Walmart, PayPal, and so on are examples of this approach.
In this approach, the user has to grab upstream OpenStack bits, package the right projects, fix bugs or add features as needed, then deploy and manage the OpenStack lifecycle. The user also has to “self-support” their internal IT/OPS team.
This approach requires recruiting and retaining a very strong engineering team that is adept at Python, OpenStack, and working with the upstream open source community. Because of this, I don’t think more than a handful of companies can or would want to pursue this approach. In fact, we know of several users who started out on this path, but had to switch to a different approach because they lost engineers to other companies. Net-net, the DIY approach is not for the faint of heart.
Approach 2: Distro
For large sophisticated users that plan to customize a cloud for their own use and have the skills to manage it, an OpenStack distribution is an attractive approach.
In this approach, no upstream engineering is required. Instead, the company is responsible for deploying a known good distribution from a vendor and managing its lifecycle.
Even though this is simpler than DIY, very few companies can manage a complex, distributed, and fast-moving piece of software such as OpenStack — a point made by Boris Renski in his recent blog, Infrastructure Software is Dead. Therefore, most customers end up utilizing extensive professional services from the distribution vendor.
Approach 3: Managed Services
For customers who don’t want to deal with the hassle of managing OpenStack, but want control over the hardware and datacenter (on-prem or colo), managed services may be a great option.
In this approach, the user is responsible for the hardware, the datacenter, and tenant management; but OpenStack is fully managed by the vendor. Ultimately this may be the most appealing model for a large set of customers.
Approach 4: Hosted Private Cloud
This approach is a variation of the Managed Services approach. In this option, not only is the cloud managed, it is also hosted by the vendor. In other words, the user does not even have to purchase any hardware or manage the datacenter. In terms of look and feel, this approach is analogous to purchasing a public cloud, but without the “noisy neighbor” problems that sometimes arise.
Which approach is best?
Each approach has its pros and cons, of course. For example, each approach has different requirements in terms of engineering resources:

Need upstream OpenStack engineering team? DIY: Yes. Distro: No. Managed Service: No. Hosted Private Cloud: No.
Need OpenStack IT architecture team? DIY: Yes. Distro: Yes. Managed Service: No. Hosted Private Cloud: No.
Need OpenStack IT/OPS team? DIY: Yes. Distro: Yes. Managed Service: No. Hosted Private Cloud: No.
Need hardware & datacenter team? DIY: Yes. Distro: Yes. Managed Service: Yes. Hosted Private Cloud: No.

Which approach you choose should also depend on factors such as the importance of the initiative, relative cost, and so on, such as:

How important is the private cloud to the company? DIY: The business depends on the private cloud. Distro: The cloud is extremely strategic to the business. Managed Service: The cloud is very strategic to the business. Hosted Private Cloud: The cloud is somewhat strategic to the business.
Ability to impact the community: DIY: Very direct. Distro: Somewhat direct. Managed Service: Indirect. Hosted Private Cloud: Minimal.
Cost (relative): DIY: Depends on skills & scale. Distro: Low. Managed Service: Medium. Hosted Private Cloud: High.
Ability to own OpenStack operations: DIY: Yes. Distro: Yes. Managed Service: Depends on whether the vendor offers a transfer option. Hosted Private Cloud: No.

So as a user of an OpenStack private cloud, you have four ways to consume the software. The cost and convenience of each approach vary, as these simplified charts show, and need to be traded off against your strategy and requirements.
OK, so we know why you need a private cloud, and how you can consume one. But there’s still one burning question: who needs it?
Source: Mirantis

DAY 1- OPENSTACK DAYS SILICON VALLEY 2016

The post DAY 1- OPENSTACK DAYS SILICON VALLEY 2016 appeared first on Mirantis | The Pure Play OpenStack Company.
THE UNLOCKED INFRASTRUCTURE CONFERENCE
By Catherine Kim

This year’s OpenStack Days Silicon Valley, held once again at the Computer History Museum, carried a theme of industry maturity; we’ve gone, as Mirantis CEO and co-founder Alex Freedland said in his introductory remarks, from wondering whether OpenStack was going to catch on, to wondering where containers fit into the landscape, to talking about production environments of both OpenStack and containers.

Here's a look at what you missed.
OpenStack: What Next?
OpenStack Foundation Executive Director Jonathan Bryce started the day off talking about the future of OpenStack. He's been traveling the globe visiting user groups and OpenStack Days events, watching as the technology takes hold in different parts of the world, but his predictions were less about what OpenStack could do and more about what people, and other projects, could do with it.

Standard frameworks, he said, give large numbers of developers the opportunity to create entirely new categories. For example, before the LAMP stack (Linux, Apache, MySQL and PHP), the web was largely made up of static pages, not the dynamic applications we have now. Android and iOS provided common frameworks that enabled developers to release millions of apps a year, supplanting purpose-built machinery with a simple smartphone.

To make that happen, though, the community had to do two things: collaborate and scale. Just as the components of LAMP worked together, OpenStack needed to collaborate with other projects, such as Kubernetes, to reach its potential.

As for scaling, Jonathan pointed out that historically, OpenStack has been difficult to set up, and that it's important to make success easier to duplicate. While there are incredible success stories out there, with some users running thousands of nodes, those users originally had to go through a lot of iterations and errors. Going forward, Jonathan felt it was important to share information about the errors made, so that others can learn from those mistakes, making OpenStack easier to use.

To that end, the OpenStack Foundation is continuing to produce content to help with specific needs, ranging from explaining the business benefits to a manager to more complex topics such as security. He also talked about the need to grow the talent pool, and about the ability for students to take the Certified OpenStack Administrator exam (or others like it) to prove their capabilities in the market.
User talks
One thing that was refreshing about OpenStack Days Silicon Valley was the number of user-given talks. On day one we heard from Walmart, SAP, and AT&T, all of which have significantly transformed their organizations through the use of OpenStack.

OpenStack, Sean Roberts explained, enabled Walmart to build applications that can heal themselves, with rules about how to recover from various failure scenarios. In particular, WalmartLabs, the online end of the company, had been making great strides with OpenStack, especially with a devops tool called OneOps. The tool makes it possible for them to manage their large number of nodes easily, and he suggested that it might do even better as an independent project under OpenStack.

Markus Riedinger talked about how SAP had introduced OpenStack. After making 23 acquisitions in a short period of time, the company was faced with a diverse infrastructure that didn't lend itself to collaboration. In the last few years it has begun to move toward cloud-based work, and in 2013 it started moving to OpenStack. Now the company has a container-based OpenStack structure based on Puppet, providing a clean separation of control and data, and a fully automatic system with embedded analytics and pre-manufactured PODs for capacity extension. This approach means that one or two people can take a data center from commissioned bare metal to an operational, scalable Kubernetes cluster running a fully configured OpenStack platform in less than a day.

Greg Stiegler discussed AT&T's cloud journey, and open source and OpenStack at AT&T. He said that rapid advancements in mobile data services have brought numerous benefits, which in turn have exploded network traffic, with traffic expected to grow 10 times by 2020. To accommodate this growth, AT&T needed a platform, with a goal of remaining as close to trunk as possible to reduce technical debt. The result is the AT&T Integrated Cloud. Sorabh Saxena spoke about it at the OpenStack Summit in Austin earlier this year, but new today was the notion that the community effort should have a unified roadmap leader, a strategy around containers that still needs to be fully developed, and a rock-solid core.

Greg finished by saying that while AT&T doesn't expect perfection, it does believe that OpenStack needs to be continually developed and strengthened. The company is grateful for what the community has provided, and has contributed a community team of its own. The moral of his story, Greg felt, was that by working together, community collaboration delivers solutions faster, while weeding out mistakes through the experiences of others.
What venture capitalists think about open source
Well, that got your attention, didn't it? It got the audience's attention too, as Martin Casado, a general partner at Andreessen Horowitz, started his talk by saying that the current prevailing wisdom is that infrastructure is dead. Why? Partly because people don't understand what the cloud is, and partly because they think that if the cloud is free, there's nothing left to invest in. Having looked into it, he thinks that view is dead wrong, and even believes that newcomers now have an unfair advantage.

Martin (who in a former life was the creator of the "software defined" movement through the co-founding of SDN maker Nicira) said that for this talk, something is "software defined" if you can implement it in software and distribute it in software. For example, in the consumer space, GPS devices have largely been replaced by software applications like Waze, which can be distributed to millions of phones, which themselves can run diverse apps to replace many functionalities that used to be "wrapped in sheet metal".

He argued that infrastructure is following the same pattern. It used to be that the only common interface was the internet, or IP, but we have since seen a maturation of software that allows you to insert core infrastructure as software. Martin said that right now is one of those few times when the market is sufficient for building a company with a product that consists entirely of software. (You still, however, need a sales team, sorry.)

The crux of the matter, though, is that the old model for open source has changed. The old model for open source companies was to be a support company; now, many companies use open source to reach customers and gain credibility, but the actual commercial offering they have is a service. Companies doing this, such as GitHub (which didn't even invent Git), have been enormously successful.
And now a word from our sponsors…
The morning included several very short "sponsor moments", two of which included very short tech talks.

The third was Michael Miller of SUSE, who was joined onstage by Boris Renski from Mirantis. Together they announced that Mirantis and SUSE would collaborate to provide support for SLES as both host and guest in Mirantis OpenStack, which already supports Ubuntu and Oracle Linux.

“At this point, there is only one conspicuous partner missing from this equation,” Renski said. Not to worry, he continued. SUSE has an expanded support offering, so in addition to supporting SUSE hosts, Mirantis/SUSE customers with CentOS and RHEL hosts can also get support through the new partnership. “Mirantis is now a one-stop shop for supporting OpenStack.”

Meanwhile, Sujal Das, SVP of Marketing for Netronome, discussed networking and security, and the many industry reports that highlight the importance of zero-trust security, in which each VM and application needs to be individually trusted. OpenStack enables centralized control and automation in these types of deployments, but there are some challenges when using OVS and connection tracking, which affect VMs and the efficiency of the server. Ideally, you would like line-rate performance, but Netronome ran tests showing that you do not get that performance with zero-trust security and OpenStack. Netronome is working on enhancements and adaptations to address this.

Finally, Evan Mouzakitis of Datadog gave a great explanation of how you can look more closely at events that happen when you are using OpenStack, to see not only what happened, but why. Evan explained that OpenStack uses RabbitMQ by default for message passing, and that once you can listen to that, you know a lot more about what's happening under the hood, and about the events that are occurring. (Hint: go to http://dtdg.co/nova-listen.)
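The listening technique Evan described can be sketched roughly as follows. The exchange name (`nova`), routing key (`notifications.info`), and envelope layout reflect common oslo.messaging defaults but should be treated as assumptions, and `parse_notification` is a hypothetical helper, not part of any Datadog tooling; Nova must also be configured to emit notifications for anything to arrive.

```python
import json

def parse_notification(body: bytes):
    """Pull the event type and timestamp out of a notification message.

    Assumes the oslo.messaging envelope, where the AMQP body is JSON
    containing an "oslo.message" field that is itself a JSON string.
    """
    envelope = json.loads(body)
    message = json.loads(envelope["oslo.message"])
    return message["event_type"], message["timestamp"]

if __name__ == "__main__":
    import pika  # third-party AMQP client, assumed installed

    # Bind a throwaway queue to the exchange Nova publishes to by default.
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    queue = channel.queue_declare(queue="", exclusive=True).method.queue
    channel.queue_bind(queue=queue, exchange="nova",
                       routing_key="notifications.info")

    def on_message(ch, method, properties, body):
        # Print each event as it happens, e.g. compute.instance.create.end
        event_type, timestamp = parse_notification(body)
        print(timestamp, event_type)

    channel.basic_consume(queue=queue, on_message_callback=on_message,
                          auto_ack=True)
    channel.start_consuming()
```

Binding a fresh exclusive queue, rather than consuming from Nova's own queues, means you observe the event stream without stealing messages from the services that need them.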
Containers, containers, containers
Of course, the main thrust was OpenStack and containers, and there was no shortage of content along those lines.
Craig McLuckie of Google and Brandon Philips of CoreOS sat down with Sumeet Singh of AppFormix to talk about the future of OpenStack, namely the integration of OpenStack and Kubernetes. Sumeet got the discussion moving quickly, asking Craig and Brandon, "If we have Kubernetes, why do we need OpenStack?"

Craig said that the enterprise needs hybrids of technologies, and that there is a lot of alignment between the two, so both can be useful for enterprises. Brandon added that there's a large installed base of virtual machine users, and they aren't going to go away.

There's a lot of integration work, but also a lot of other work to do as a community. Some of it is at the next level of abstraction: one example is rallying together to help software vendors arrive at a common standard for describing packages. Craig also believed that there's a good opportunity to think about brokering of services and lifecycle management.

Craig also mentioned that he felt we need to start thinking about how to bring the OpenStack and Cloud Native Computing foundations together, and how to create working groups that span the two foundations' boundaries.

In terms of using the two together, Craig said that in his experience, enterprises usually ask what it looks like to use them in combination. As people start to understand the different capabilities they shift toward it, but it's very new, so it's quite speculative right now.

Finally, Florian Leibert of Mesosphere, Andrew Randall of Tigera, Ken Robertson of Apcera, and Amir Levy of Gigaspaces sat down with Jesse Proudman of IBM to discuss "The Next Container Standard".

Jesse started off the discussion by talking about how rapidly OpenStack has developed, and how in two short years containers have penetrated the marketplace. He questioned why that might be.

Some of the participants suggested that a big reason for their uptake is that containers drive adoption by removing inefficiencies, so customers can easily see how well this dynamic field serves their requirements.

A number of participants felt that containers are another wonderful tool for getting the job done, and that we'll see more innovations down the road. Florian pointed out that containers were around before Docker; what Docker has done is allow individuals to use containers for their own projects. Containers are just part of an evolution.

As for Cloud Foundry vs. Mesos or Kubernetes, most of the participants agreed that standard orchestration has allowed us to take a step higher in the model, and that with an understanding of the underlying tools, they can be used together, as long as you use the right models. Amir argued that there is no need to take one specific technology's corner; there will always be new technologies around the corner, and whatever we see today will be different tomorrow.

Of course, there's the question of whether these technologies are complementary or competitive. Florian argued that it comes down to religion, and that over time companies often evolve to be very similar to one another. But if it is a religious decision, who is making it?

The panel agreed that it is often the developers themselves who make these decisions, but that eventually companies will either deliberately use multiple platforms or decide to use just one.

Finally, Jesse asked the panel how companies' desire for strong ROI affects OpenStack, leading to a discussion about the importance of really strong use cases, and of showing customers how OpenStack can improve speed and flexibility.
Coming up
So now we head into day 2 of the conference, where it's all about thought leadership, community, and user stories. Look for commentary from users such as Tapjoy, and thought leadership from voices such as James Staten of Microsoft, Luke Kanies of Puppet, and Adrian Cockcroft of Battery Ventures.

 

The post DAY 1- OPENSTACK DAYS SILICON VALLEY 2016 appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis

Unexpected continuous location tracking/energy change in Android?

So this is really weird, but I have found what seems to be unexpected continuous location tracking that is causing noticeable battery drain on Android 6.0. Right now, it's looking like a change in an automatically updated component, so it is probably due to a closed-source service or app. This is in the style of the work from Vern Paxson's group on characterizing the observed behavior of third-party software.
Has anybody else running Android 6.0 noticed a particularly large increase in power drain, with the GPS icon displayed continuously? I will be running additional tests in the coming days, but I wanted to report the unusual behavior and see if other researchers have noticed it as well, or want to investigate it while it lasts.
Background

I've been profiling power drain under various regimes as part of understanding the power/accuracy tradeoffs for my travel-pattern tracking project. I basically install apps with different data collection regimes on multiple test phones of the same make, model, and OS version, and carry all of them around for comparison.

Since last Thu/Fri/Sat, it looks like the power drain behavior on Android has changed dramatically. In particular, it looks like some system component has GPS location turned on continuously and is draining the battery rapidly. See details below.
This is a Nexus 6 running stock Android (v6.0.1, patch level March 1, 2016), with no non-OEM apps installed other than mine, and with Google Maps location history turned off, so this must be due to unexpected background access by either the OS or some stock Google app. And since I didn't update the OS, my guess is that it is a closed-source component such as Google Play services or Google Maps that is automatically updated/patched.
Details
Here are the graphs for power drain on Sat vs. Tue vs. Thu vs. Fri. I think the change happened sometime during the day on Thursday, because I know that the GPS icon was off on phones 2 and 4 on Thursday morning and was displayed on Thursday night. It was gone again when I rebooted on Thursday, but came back sometime on Friday, and it has been on ever since, even after rebooting.

Battery levels when tracking was off on the same phone (note the higher drain on Thu and the big change on Fri + Sat)

Before we compare levels across phones, we need to understand the data collection regimes for each of them.

|  | Phone 1 | Phone 2 | Phone 3 | Phone 4 |
|---|---|---|---|---|
| Sat | tracking off | tracking off | tracking off | tracking off |
| Tue | high, 1 sec | med, 1 sec | high, 15-30 sec | med, 15-30 sec |
| Thu | high, 1 sec | med, 1 sec | high, 30 sec | tracking off |
| Fri + Sat | high, 1 sec | med, 1 sec | high, 30 sec | tracking off |
| Next Tue | high, 1 sec | med, 1 sec | high, 30 sec | tracking off |

 

Battery levels across phones (note the abrupt phase change that happens on Fri+Sat, and how the change is staggered across phones, consistent with an automatically updated component)

It is clear that on Tuesday, phones 2 and 3 are fairly close to each other, and both are very different from phone 1. This is consistent with intuition, and with the results from before Thursday as well.
On Thursday, the difference between phone 1 and phone 2 is much less pronounced, while the difference between phone 2 and phone 3 is much larger. On Friday and this Tuesday, there is essentially no difference between high and medium accuracy at the fast sampling rate (phone 1 and phone 2), and no difference between slow sampling and no tracking (phone 3 and phone 4).
I also note that the GPS icon is constantly on, even on the phone where tracking is stopped.
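For anyone who wants to replicate this comparison, the per-phone battery logs can be reduced to an average drain rate. Here is a minimal sketch; the timestamp format and the shape of the samples are my assumptions, standing in for whatever logging the experiment actually uses.

```python
from datetime import datetime

def drain_rate(samples):
    """Average battery drain in percentage points per hour.

    `samples` is a chronological list of (timestamp, battery_percent)
    pairs; timestamps are assumed to look like "2016-08-09 08:00".
    """
    first_t, first_level = samples[0]
    last_t, last_level = samples[-1]
    fmt = "%Y-%m-%d %H:%M"
    hours = (datetime.strptime(last_t, fmt)
             - datetime.strptime(first_t, fmt)).total_seconds() / 3600
    return (first_level - last_level) / hours
```

Computing one number per phone per day makes the staggered phase change easy to see in a table, without eyeballing the graphs.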

Of course, this could be a bug in my code, but:

I didn't really change the code between Tue and Thu,
I don't get the notifications about activity changes on the phone where it is turned off, and
my app does not show up in the location or battery drain screens.

Next steps
In the next few days, I plan to poke around at this some more to see if I can figure out what's going on. Some thoughts:

uninstall my app. This is very annoying because then I have to record the battery level manually, but I can suck it up for a day.
uninstall potential culprits: Google Play services, Maps, ??? It turns out that most of these are system services that cannot be uninstalled, but I can try disabling them.
--> your suggestion here <-- If you have any thoughts on things to try, let me know!
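For the "disable the culprits" step, system packages can be disabled per-user over adb with `pm disable-user`. The sketch below just builds the command lines; the package names listed are examples of likely suspects, not a confirmed list, and which ones exist varies by device.

```python
def disable_commands(packages, user=0):
    """Build `adb shell pm disable-user` command lines for suspect packages.

    Disabling (rather than uninstalling) is what works for system
    components such as Google Play services.
    """
    return [["adb", "shell", "pm", "disable-user", "--user", str(user), pkg]
            for pkg in packages]

# Example suspects only; verify package names on your own device.
SUSPECTS = ["com.google.android.gms", "com.google.android.apps.maps"]
```

Re-enabling afterwards is the mirror image (`pm enable`), so each suspect can be toggled for a day of measurement at a time.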

We can do this together
This is complicated because we are trying to treat the phone like a natural phenomenon that we cannot control but can try to understand through observation. I'd love to hear from other members of the community so that we can figure out whether Google really is continuously tracking us without letting us know, and killing our battery while doing so.

Source: Amplab Berkeley