This Guy Was Arrested After He Smashed Up All The iPhones In An Apple Store

Apparently he was protesting "consumer rights."

Wearing sunglasses, dangling iPhone headphones, and a thick glove, the man went phone by phone, crushing them with a large iron ball.

Twitter: @Quentin_IOS

In a video filmed by a bystander, the man yells that his rights as a consumer had been violated by Apple.

Apple "violated my rights and refused to refund me in accordance with European consumer protection law," the man shouts.

Apple "violated my rights and refused to refund me in accordance to the European consumer protection law,” the man shouts.

Twitter: @Quentin_IOS


View Entire List ›

Quelle: BuzzFeed

Collaboration is king at Cloud Foundry Summit EU

When businesses collaborate on open technology projects, everyone wins. That was the prevailing message throughout the Cloud Foundry Summit in Frankfurt, Germany.
Operators, developers, users and cloud providers gathered to share best practices and reflect on the state of this growing community. In the two years since the Cloud Foundry Foundation was launched, the community has grown tremendously, as these highlights show:

More than 31,000 code commits
2,400-plus code contributors
More than 130 core contributors
65 member companies
17 new member companies in 2016
195 user groups
53,050 individuals
Contributors from 132 cities

Cloud Foundry Foundation CEO Sam Ramji called open source collaboration “a positive-sum game,” meaning that just by participating, members inherently benefit. “The more people who play, the more we win,” he said. “The more you give, the more that is available to everyone.”
Ramji also said that this is “the beginning of a 20-year revolution around what cloud platforms can be.”
It’s ultimately up to the community and its wide stakeholder base to ensure that the revolution is a productive one.
IBM Bluemix continues to grow
IBM offers the world’s largest Cloud Foundry environment with its IBM Bluemix platform. It was on full display during the conference in breakout sessions and even on the mainstage.
Michael “dr.max” Maximilien, a scientist, architect and engineer with the IBM Bluemix team, joined Simon Moser, an IBM senior technical staff member, during the opening keynote to provide an overview of some of the lessons they’ve learned from working in a Cloud Foundry environment.

"Embrace the weirdness." @mosersd & @maximilien share lessons learned from @IBMBluemix at Summit. pic.twitter.com/kJXklTivQX
— IBM Cloud (@IBMcloud) September 27, 2016

The conversation continued with a number of breakout sessions highlighting the emergence of serverless technology, particularly OpenWhisk, an open source, serverless offering from IBM. Maximilien told the crowd in his breakout session that OpenWhisk is a continuation of the IBM tradition of launching exciting, new open tech projects.
“We want to help lead the serverless movement,” he said. “Think of OpenWhisk as a push in that direction.”
Kim Bannerman, who leads the Technical Advocacy and Community team inside the Office of the CTO at IBM Blue Box, hosted a panel on serverless technology that featured Ruben Orduz and Tyler Britten, both technical advocates for IBM Blue Box, along with Casey West and Kenny Bastani of Pivotal.
It was clear that we’re still in early days for this technology, as much of the conversation revolved around the question, “What is serverless?” It will be some time before we start to see real-world use cases and more enterprises adopting it. Still, its potential is clear.
A few of the highlights from that session:

Is it Functions as a Service? Event-driven computing? At CloudFoundry Summit, the serverless discussion goes beyond buzzword. pic.twitter.com/pMpR4DeBZB
— IBM Cloud (@IBMcloud) September 27, 2016

Closing the gender gap
One noteworthy topic strung throughout the conference was the gender gap across the IT profession. While the industry is doing a better job of welcoming women into what’s been a traditionally male-dominated sector, there’s still a long way to go in hiring more female developers, ensuring equal pay and seeing more women at the executive level.
On Wednesday, Ursula Morgenstern, global head of consulting and systems integration at Atos, took to the mainstage to deliver a hopeful message that could represent the catalyst that brings more women into the field.

Problems exist at all levels: entering IT, being stuck in the middle and not getting to the top #CloudFoundry @u_morgen pic.twitter.com/uVi2bgOqhC
— Paula Kennedy (@PaulaLKennedy) September 28, 2016

“It’s not just about gender. Ethnically diverse companies outperform their competitors by 35%” – @u_morgen #CloudFoundry pic.twitter.com/37c5YJQroF
— Cloud Foundry (@cloudfoundry) September 28, 2016

Later that day, IBM sponsored a diversity luncheon, which brought together Cloud Foundry community members to discuss issues and potential solutions for advocating for a more inclusive IT industry.
Moving forward
As the Cloud Foundry community looks toward the future, three of its leaders (Jason McGee, VP and CTO of IBM Cloud Platform; Duncan Johnston-Watt, CEO of Cloudsoft; and Stormy Peters, VP of Developer Relations at Cloud Foundry) explained what members must do to advance the cause and promote more interoperability and cooperation between foundations.

The post Collaboration is king at Cloud Foundry Summit EU appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

What’s the big deal about running OpenStack in containers?

The post What's the big deal about running OpenStack in containers? appeared first on Mirantis | The Pure Play OpenStack Company.
Ever since containers began their meteoric rise in the technical consciousness, people have been wondering what it would mean for OpenStack. Some of the predictions were dire (that OpenStack would cease to be relevant), some were more practical (that containers are not mini VMs, and anyway, they need resources to run on, and OpenStack still existed to manage those resources).
But there were a few people who realized that there was yet another possibility: that containers could actually save OpenStack.
Look, it's no secret that deploying and managing OpenStack is difficult at best, and frustratingly impossible at worst. So what if I told you that using Kubernetes and containers could make it easy?
Mirantis has been experimenting with container-based OpenStack for the past several years – since before it was "cool" – and lately we've decided on an architecture that enables us to take advantage of the management capabilities and scalability that come with the Kubernetes container orchestration engine. (You might have seen the news that we've also acquired TCP Cloud, which will help us jump our R&D forward about 9 months.)
Specifically, using Kubernetes as an OpenStack underlay lets us turn a monolithic software package into discrete services with well-defined APIs that can be freely distributed, orchestrated, recovered, upgraded and replaced – often automatically, based on configured business logic.
That said, it's more than just dropping OpenStack into containers, and talk is cheap. It's one thing for me to say that Kubernetes makes it easy to deploy OpenStack services. And frankly, almost anything would be easier than deploying, say, a new controller with today's systems.
But what if I told you you could turn an empty bare metal node into an OpenStack controller just by adding a couple of tags to it?
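To make that concrete, here's a rough, hypothetical sketch of what "a couple of tags" could look like if those tags are Kubernetes node labels; the label key and node name below are made up for illustration and are not necessarily what Mirantis actually uses:

# Mark a bare metal node already registered with Kubernetes for control-plane duty
# (label key and node name are illustrative only)
kubectl label node baremetal-07 openstack-control-plane=enabled

# An orchestration layer watching for that label can then schedule the containerized
# OpenStack controller services onto the node; verify which nodes carry the label:
kubectl get nodes -l openstack-control-plane=enabled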
Have a look at this video (you'll have to drop your information in the form, but it just takes a second):
Containerizing the OpenStack Control Plane on Kubernetes: auto-scaling OpenStack services
I know, right? Are you as excited about this as I am?
The post What's the big deal about running OpenStack in containers? appeared first on Mirantis | The Pure Play OpenStack Company.
Quelle: Mirantis

Let’s meet in Barcelona at the OpenStack Summit!

The post Let’s meet in Barcelona at the OpenStack Summit! appeared first on Mirantis | The Pure Play OpenStack Company.

As we count down the days to the OpenStack Summit in Barcelona on October 24-28, we’re getting ready to share memorable experiences, knowledge, and fun!

Come to booth C27 to see what we've built with OpenStack, and join in an "Easter Egg Hunt" that will test your observational skills and knowledge of OpenStack, Containers, and Mirantis swag from prior summits. If you find enough Easter eggs, you're entered in our prize drawing for a $300 Visa gift card or an OpenStack certification exam from our OpenStack Training team ($400 value). And as always, we're giving away more of the awesome swag you've come to expect from us.

If you'd like to set up some time at the summit to talk with our team, simply contact us and we'll schedule a meeting.

REQUEST A MEETING

 
Free Training
Mirantis is also providing two FREE training courses based on our standard industry-leading curriculum. If you're interested in attending, please follow the links below to register:

Tuesday, October 25th: OpenStack Fundamentals
Wednesday, October 26th: Introduction to Kubernetes & Docker

 
Mirantis Presentations
Here's where you can find us during the summit.
TUESDAY OCTOBER 25

Tuesday, 12:15pm-12:55pm
Level: Intermediate
Chasing 1000 nodes scale
(Dina Belova and Alex Shaposhnikov, Mirantis; Inria)

Tuesday, 12:15pm-12:55pm
Level: Intermediate
OpenStack: you can take it to the bank!
(Ivan Krovyakov, Mirantis; Sberbank)

Tuesday, 3:05pm-3:45pm
Level: Intermediate
Live From Oslo
(Oleksii Zamiatin, Mirantis; EasyStack, Red Hat, HP)

Tuesday, 3:55pm-4:35pm
Level: Intermediate
Is your cloud forecast a bit foggy?
(Oleksii Zamiatin, Mirantis; EasyStack, Red Hat, HP)

Tuesday, 5:05pm-5:45pm
Level: Intermediate
Kerberos and Health Checks and Bare Metal, Oh My! Updates to OpenStack Sahara in Newton.
(Nikita Konovalov and Vitaly Gridnev, Mirantis; Red Hat)

WEDNESDAY OCTOBER 26

Wednesday, 11:25am-12:05pm
Level: Intermediate
The race conditions of Neutron L3 HA's scheduler under scale performance
(Ann Taraday and Kevin Benton, Mirantis; Red Hat)

Wednesday, 11:25am-12:05pm
Level: Advanced
The race conditions of Neutron L3 HA's scheduler under scale performance
(Florin Stingaciu and Shaun O'Meara, Mirantis)

Wednesday, 12:15pm-12:55pm
Level: Beginner
The Good, Bad and Ugly: OpenStack Consumption Models
(Amar Kapadia, Mirantis; IDC, EMC, Canonical)

Wednesday, 12:15pm-12:55pm
Level: Intermediate
OpenStack Journey in Tieto Elastic Cloud
(Jakub Pavlík, Mirantis TCP Cloud; Tieto)

Wednesday, 2:15pm-3:45pm
Level: Intermediate
User Committee Session
(Hana Sulcova, Mirantis TCP Cloud; Comcast, Workday, MIT)

Wednesday, 3:55pm-4:35pm
Level: Beginner
Lessons from the Community: What I've Learned As An OpenStack Day Organizer
(Hana Sulcova, Mirantis TCP Cloud; Tesora, GigaSpaces, CloudDon, Intel, Huawei)

Wednesday, 3:05pm-3:45pm
Level: Beginner
Glare – a unified binary repository for OpenStack
(Mike Fedosin and Kairat Kushaev, Mirantis)

Wednesday, 3:55pm-4:30pm
Level: Intermediate
OpenStack Requirements: What we are doing, what to expect and what's next
(Davanum Srinivas, Mirantis; RedHat)

Wednesday, 3:55pm-4:35pm
Level: Intermediate
Is OpenStack Neutron production ready for large scale deployments?
(Oleg Bondarev, Satish Salagame and Elena Ezhova, Mirantis)

Wednesday, 5:05pm-5:45pm
Level: Beginner
How Four Superusers Measure the Business Value of their OpenStack Cloud
(Kamesh Pemmaraju and Amar Kapadia, Mirantis)

THURSDAY OCTOBER 27

Thursday, 9:00am-9:40am
Level: Intermediate
Sleep Better at Night: OpenStack Cloud Auto-Healing
(Mykyta Gubenko and Alexander Sakhnov, Mirantis)

Thursday, 11:00am-11:40am
Level: Advanced
OpenStack on Kubernetes – Lessons learned
(Sergey Lukjanova, Mirantis; Intel, CoreOS)

Thursday, 11:00am-11:40am
Level: Intermediate
Unified networking for VMs and containers for Openstack and k8s using Calico and OVS
(Vladimir Eremin, Mirantis; Intel)

Thursday, 11:50am-12:30pm
Level: Intermediate
Kubernetes SDN Performance and Architecture Evaluation at Scale
(Jakub Pavlík and Marek Celoud, Mirantis TCP Cloud)

Thursday, 3:30pm-4:10pm
Level: Advanced
Ironic Grenade: Blowing up our upgrades.
(Vasyl Saienko, Mirantis; Intel)

Thursday, 3:30pm-4:10pm
Level: Beginner
Application Catalogs: understanding Glare, Murano and Community App Catalog
(Alexander Tivelkov and Kirill Zaitsev, Mirantis)

Thursday, 5:30pm-6:10pm
Level: Beginner
What's new in OpenStack File Share Services (Manila)
(Gregory Elkinbard, Mirantis; NetApp)
The post Let’s meet in Barcelona at the OpenStack Summit! appeared first on Mirantis | The Pure Play OpenStack Company.
Quelle: Mirantis

Build and run your first Docker Windows Server container

Today, Microsoft announced the general availability of Windows Server 2016, and with it, the Docker Engine running containers natively on Windows. This blog post describes how to get set up to run Docker Windows containers on Windows 10 or using a Windows Server 2016 VM. Check out the companion blog posts on the technical improvements that have made Docker containers on Windows possible and the post announcing the Docker Inc. and Microsoft partnership.
Before getting started, it's important to understand that Windows containers run Windows executables compiled for the Windows Server kernel and userland (either windowsservercore or nanoserver). To build and run Windows containers, you need a Windows system with container support.
Windows 10 with Anniversary Update
For developers, Windows 10 is a great place to run Docker Windows containers: containerization support was added to the Windows 10 kernel with the Anniversary Update (note that container images can only be based on Windows Server Core and Nanoserver, not Windows 10). All that's missing is the Windows-native Docker Engine and some image base layers.
The simplest way to get a Windows Docker Engine is by installing the Docker for Windows public beta (direct download link). Docker for Windows used to only set up a Linux-based Docker development environment (slightly confusing, we know), but the public beta version now sets up both Linux and Windows Docker development environments, and we're working on improving Windows container support and Linux/Windows container interoperability.
With the public beta installed, the Docker for Windows tray icon has an option to switch between Linux and Windows container development. For details on this new feature, check out Stefan Scherer's blog post.
Switch to Windows containers and skip the next section.

Windows Server 2016
Windows Server 2016 is where Docker Windows containers should be deployed for production. For developers planning to do lots of Docker Windows container development, it may be worth setting up a Windows Server 2016 dev system (in a VM, for example), at least until Windows 10 and Docker for Windows support for Windows containers matures.
For Microsoft Ignite 2016 conference attendees, USB flash drives with Windows Server 2016 preloaded are available at the expo. Not at Ignite? Download a free evaluation version and install it on bare metal or in a VM running on Hyper-V, VirtualBox or similar. Running a VM with Windows Server 2016 is also a great way to do Docker Windows container development on macOS and older Windows versions.
Once Windows Server 2016 is running, log in and install the Windows-native Docker Engine directly (that is, not using "Docker for Windows"). Run the following in an Administrative PowerShell prompt:
# Add the containers feature and restart
Install-WindowsFeature containers
Restart-Computer -Force

# Download, install and configure Docker Engine
Invoke-WebRequest "https://download.docker.com/components/engine/windows-server/cs-1.12/docker.zip" -OutFile "$env:TEMP\docker.zip" -UseBasicParsing

Expand-Archive -Path "$env:TEMP\docker.zip" -DestinationPath $env:ProgramFiles

# For quick use, does not require shell to be restarted.
$env:path += ";C:\Program Files\Docker"

# For persistent use, will apply even after a reboot.
[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\Program Files\Docker", [EnvironmentVariableTarget]::Machine)

# You have to start a new PowerShell prompt at this point
dockerd --register-service
Start-Service docker
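Before moving on, a quick optional sanity check (not part of the original script) confirms that the service registered and started:

# Confirm the Windows service exists and is running
Get-Service docker

# Confirm the client can reach the engine over the default named pipe
docker version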
Docker Engine is now running as a Windows service, listening on the default Docker named pipe. For development VMs running (for example) in a Hyper-V VM on Windows 10, it might be advantageous to make the Docker Engine running in the Windows Server 2016 VM available to the Windows 10 host:
# Open firewall port 2375
netsh advfirewall firewall add rule name="docker engine" dir=in action=allow protocol=TCP localport=2375

# Configure Docker daemon to listen on both pipe and TCP (replaces dockerd --register-service invocation above)
dockerd.exe -H npipe:////./pipe/docker_engine -H 0.0.0.0:2375 --register-service
The Windows Server 2016 Docker engine can now be used from the VM host by setting DOCKER_HOST:
$env:DOCKER_HOST = "<ip-address-of-vm>:2375"
See the Microsoft documentation for more comprehensive instructions.
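If you'd rather not set an environment variable, a single command can also be pointed at the remote engine with the -H flag (the address below is a placeholder for your VM's IP):

# One-off commands against the remote engine
docker -H tcp://<ip-address-of-vm>:2375 version
docker -H tcp://<ip-address-of-vm>:2375 info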
Running Windows containers
First, make sure the Docker installation is working:
> docker version
Client:
Version:      1.12.1
API version:  1.24
Go version:   go1.6.3
Git commit:   23cf638
Built:        Thu Aug 18 17:32:24 2016
OS/Arch:      windows/amd64
Experimental: true

Server:
Version:      1.12.2-cs2-ws-beta-rc1
API version:  1.25
Go version:   go1.7.1
Git commit:   62d9ff9
Built:        Fri Sep 23 20:50:29 2016
OS/Arch:      windows/amd64
Next, pull a base image that's compatible with the evaluation build, re-tag it, and do a test run:
docker pull microsoft/windowsservercore:10.0.14393.206
docker tag microsoft/windowsservercore:10.0.14393.206 microsoft/windowsservercore
docker run microsoft/windowsservercore hostname
69c7de26ea48
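As an optional extra check (not part of the original walkthrough), you can also start an interactive PowerShell session inside a Windows Server Core container and look around before exiting:

# Start an interactive container running PowerShell
docker run -it microsoft/windowsservercore powershell

# Inside the container, list the filesystem root, then leave
Get-ChildItem C:\
exit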
Building and pushing Windows container images
Pushing images to Docker Cloud requires a free Docker ID. Storing images on Docker Cloud is a great way to save build artifacts for later use, to share base images with co-workers or to create build pipelines that move apps from development to production with Docker.
Docker images are typically built with docker build from a Dockerfile recipe, but for this example, we’re going to just create an image on the fly in PowerShell.
"FROM microsoft/windowsservercore `n CMD echo Hello World!" | docker build -t <docker-id>/windows-test-image -
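If you prefer the conventional Dockerfile route, an equivalent sketch (the same two instructions, written to a file first) looks like this:

# Write the same instructions to a Dockerfile in the current directory
Set-Content -Path Dockerfile -Value "FROM microsoft/windowsservercore`nCMD echo Hello World!"

# Build from that Dockerfile
docker build -t <docker-id>/windows-test-image .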
Test the image:
docker run <docker-id>/windows-test-image
Hello World!
Log in with docker login and then push the image:
docker push <docker-id>/windows-test-image
Images stored on Docker Cloud are available in the web interface, and public images can be pulled by other Docker users.
Using docker-compose on Windows
Docker Compose is a great way to develop complex multi-container applications consisting of databases, queues and web frontends. Compose support for Windows is still a little patchy and only works on Windows Server 2016 at the time of writing (i.e., not on Windows 10).
To try out Compose on Windows, you can clone a variant of the ASP.NET Core MVC MusicStore app, backed by a SQL Server Express 2016 database. If running this sample on Windows Server 2016 directly, first grab a Compose executable and make sure it is on your path. A correctly tagged microsoft/windowsservercore image is required before starting. Also note that building the SQL Server image will take a while.
git clone https://github.com/friism/Musicstore

cd Musicstore
docker build -t sqlserver:2016 -f .\docker\mssql-server-2016-express\Dockerfile .\docker\mssql-server-2016-express\.

docker-compose -f .\src\MusicStore\docker-compose.yml up

Start a browser and open http://<ip-of-vm-running-docker>:5000/ to see the running app.
Summary
This post described how to get set up to build and run native Docker Windows containers on both Windows 10 and the recently published Windows Server 2016 evaluation release. To see more example Windows Dockerfiles, check out the Golang, MongoDB and Python Docker Library images.
Please share any Windows Dockerfiles or Docker Compose examples you build with @docker on Twitter using the tag #windows. And don't hesitate to reach out on the Docker Forums if you have questions.
More Resources:

Sign up to be notified of GA and the Docker Datacenter for Windows Beta
Register for a webinar: Docker for Windows Server
Learn more about the Docker and Microsoft partnership

The post Build and run your first Docker Windows Server container appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Announcing the new Docs Repo on GitHub!

By John Mulhausen
The documentation team at Docker is excited to announce that we are consolidating all of our documentation into a single GitHub Pages-based repository on GitHub.
When is this happening?

The new repo is public now at https://github.com/docker/docker.github.io.
During the week of Monday, September 26th, any existing docs PRs need to be migrated over or merged.
We’ll do one last “pull” from the various docs repos on Wednesday, September 28th, at which time the docs/ folders in the various repos will be emptied.
Between the 28th and full cutover, the docs team will be testing the new repo and making sure all is well across every page.
Full cutover (production is drawing from the new repo, new docs work is pointed at the new repo, dissolution of old docs/ folders) is complete on Monday, October 3rd.

The problem with the status quo

Up to now, the docs have been all inside the various project repos, inside folders named "docs/" – and to see the docs running on your local machine was a pain.
The docs were built around Hugo, which is not natively supported by GitHub, and took minutes to build, and even longer for us to deploy.
Even worse than all that, having the docs siloed by product meant that cross-product documentation was rarely worked on, and things like reusable partials (includes) weren’t being taken advantage of. It was difficult to have visibility into what constituted “docs activity” when pull requests pertained to both code and docs alike.

Why this solution will get us to a much better place

All of the documentation for all of Docker’s projects will now be open source!
It will be easier than ever to contribute to and stage the docs. You can use GitHub Pages’ *.github.io spaces, install Jekyll and run our docs, or just run a Docker command:
git clone https://github.com/docker/docker.github.io.git docs
cd docs
docker run -ti --rm -v "$PWD":/docs -p 4000:4000 docs/docstage
Doc releases can be done with milestone tags and branches that are super easy to reference, instead of cherry-picked pull requests (PRs) from several repos. If you want to use a particular version of the docs, in perpetuity, it will be easier than ever to retrieve them, and we can offer far more granularity (see the sketch below).
Any workflows that require users to use multiple products can be modeled and authored easily, as authors will only have to deal with a single point of reference.
The ability to have “includes” (such as reusable instructions, widgets that enable docs functionality, etc) will be possible for the first time.
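
As a purely illustrative sketch of that release workflow (the tag name below is hypothetical; the post doesn't specify the actual naming scheme), pinning a local checkout to a particular docs version might look like this:

git clone https://github.com/docker/docker.github.io.git docs
cd docs
git checkout docs-v1.12   # hypothetical release tag or branch name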

What does this mean for open source contributors?
Open source contributors will need to create both a code PR and a docs PR, instead of having all of the work live in one PR. We’re going to work to mitigate any inconvenience:

Continuous integration tests will eventually be able to spot when a code PR is missing docs and provide in-context, useful instructions at the right time that guide contributors on how to spin up a docs PR and link it to the code PR.
We are not going to enforce that a docs PR has to be merged before a code PR is merged, just that a docs PR exists. That means we should be able to merge your code PR just as quickly, if not more so, than in the past.
We will leave README instructions in the repos under their respective docs/ folders that point people to the correct docs repo.
We are adding “edit this page” buttons to every page on the docs so it will be easier than ever to locate what needs to be updated and fix it, right in the browser on GitHub.

We welcome contributors to get their feet wet, start looking at our new repo, and propose changes. We’re making it easier than ever to edit our documentation!
The post Announcing the new Docs Repo on GitHub! appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Visit Docker @ Microsoft Ignite – Booth #758

 

Next week Microsoft will host over 20,000 IT executives, architects, engineers, partners and thought-leaders from around the world at Microsoft Ignite, September 25th-30th at the Georgia World Congress Center in Atlanta, Georgia.
Visit the Docker booth to learn how developers and IT pros can build, ship, and run any application, anywhere, across both Windows and Linux operating systems with Docker. By transforming modern application architectures for Linux and Windows applications, Docker allows business to benefit from a more agile development environment with a single journey for all their applications.
Don’t miss out! Docker experts will be on hand for in-booth demos to help you:

Deploy your first Docker Windows container
Learn about Docker containers on Windows Server 2016
Manage your container environment with Docker Datacenter on Windows

Calling all Microsoft MVPs!

Attend our daily in-booth theater session "Docker Containers for Linux and Windows" with Docker evangelist Mike Coleman in the Docker booth @ 2PM every day. Session attendees will receive exclusive Docker and Microsoft swag.
To learn more about how Docker powers Windows containers, add these key Docker sessions to your Ignite agenda:
GS05: Reinvent IT infrastructure for business agility
Microsoft’s strategy centers on empowering you – the IT professionals – to generate business value within your organizations. With Microsoft Azure and Azure Stack, you can leverage the power of cloud to drive business agility and developer productivity. With the launch of Windows Server 2016 and Microsoft System Center 2016, you can accomplish more than ever before in your existing datacenters. And with Operations Management Suite, you can securely manage all of your on-premises and cloud infrastructure from one place. Microsoft Corporate VP Jason Zander discusses in-depth the latest technology innovations across all of these areas that help you reinvent your IT infrastructure and be a hero within your organizations.
Speaker: Jason Zander, Microsoft
 
BRK3146: Dive into the new world of Windows Server and Hyper-V Containers
Applications need to be always available, globally accessible, scalable and secure in today’s 24/7 economy. Businesses must be able to deploy rapid updates and revisions at a lower cost with fewer resources than ever before to be competitive. Containers are an amazingly powerful technology for building, deploying and hosting applications that has been proven to reduce costs, improve efficiency and reduce deployment times, making it a hot new feature in Windows Server 2016. We dive into the architecture features of the new container technology, talk about development and deployment experiences and best practices, along with some of the new Windows innovations such as Hyper-V Containers and Active Directory-backed container identity.
Speakers: Taylor Brown, Microsoft & Patrick Lang, Microsoft
Thursday, September 29, 9:00am – 10:15am, Room A1
BRK3147: Accelerate application delivery with Docker Containers and Windows Server 2016
Applications are changing and Docker is driving the containerization movement to deliver new microservices applications or provide a new construct to package legacy applications. Attend this session to learn how the combination of Docker, Linux, Microsoft Windows Server and Microsoft Azure technologies together deliver an application platform for hybrid cloud apps. Accelerate your app delivery and gain freedom to use any stack across a secure software supply chain.
Speakers: Mike Coleman, Docker & Taylor Brown, Microsoft
Thursday, September 29, 12:30pm – 1:45pm, Room A411 – A412
BRK3319: The Path to Containerization – transforming workloads into containers
Containers, microservices and Docker are all the rage, but what workloads are they used for? And how can you take advantage of these transformative new technologies? In this session you will hear from a user that has succeeded in taking their existing .NET application and migrating it into Windows containers, providing them the agility and flexibility to further transform the application. But where do I start with containers? We will further cover concepts and best practices for identifying and migrating applications from existing deployments into containers and how to start down the path to microservice architectures.
Speakers: Taylor Brown, Microsoft & Matthew Roberts, Microsoft
To get ready for Ignite and to learn more about Docker, read the eBook Containers for the Virtualization Admin by Docker Technical Evangelist Mike Coleman.
More resources

Learn more about Docker for the Enterprise
Read the white paper: Docker for the Virtualization Admin
See all the integrations between Docker and Microsoft
Learn more about Docker Datacenter

The post Visit Docker @ Microsoft Ignite – Booth #758 appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

One cloud to rule them all — or is it?

The post One cloud to rule them all – or is it? appeared first on Mirantis | The Pure Play OpenStack Company.
So you’ve sold your organization on private cloud.  Wonderful!  But to get that ROI you’re looking for, you need to scale quickly and get paying customers from your organization to fund your growing cloud offerings.
It’s the typical Catch-22 situation when trying to do something on the scale of private cloud: You can’t afford to build it without paying customers, but you can’t get paying customers without a functional offering.
In the rush to break the cycle, you onboard more and more customers.  You want to reach critical mass and become the de-facto choice within your organization.  Maybe you even have some competition within your organization you have to edge out.  Before long you end up taking anyone with money.  
And who has money?  In the enterprise, more often than not it's the bread and butter of the organization: the legacy workloads.
Promises are made.  Assurances are given.  Anything to onboard the customer.  “Sure, come as you are, you won’t have to rewrite your application; there will be no/minimal impact to your legacy workloads!”
But there's a problem here. Legacy workloads – that is, those large, vertically scaled behemoths that don't lend themselves to "cloud native" principles – present both a risk and an opportunity when growing your private cloud, depending on how they are handled.
(Note: Just because a workload has been virtualized does not make it "cloud native". In fact, many virtualized workloads, even those implemented using SOA (service-oriented architecture), will not be cloud native. We'll talk more about classifying, categorizing and onboarding different workloads in a future article.)
"Legacy" cloud vs "Agile" cloud
The term "legacy cloud" may seem like a bit of an oxymoron, but hear me out. For years, surveys asking people about their cloud use have had to include responses from people who considered vSphere a cloud, because the line between cloud and virtualization is largely irrelevant to most people.
Or at least it was, when there wasn't anything else.
But now there's a clear difference. Legacy cloud is geared towards these legacy workloads, while agile cloud is geared toward more "cloud native" workloads.
Let's consider some example distinctions between a "Legacy Cloud" and an "Agile Cloud". The comparison below shows some of the design trade-offs between environments built to support legacy workloads versus those built without those restrictions:

Legacy Cloud: No new features/updates (platform stability emphasis), or only very infrequent, limited and controlled updates
Agile Cloud: Regular/continuous deployment of the latest and greatest features (platform agility emphasis)

Legacy Cloud: Live Migration support (redundancy in the platform instead of in the app); DRS (in the case of ESXi hypervisors managed by VMware)
Agile Cloud: Highly scalable and performant local storage, plus the ability to support other performance-enhancing features like huge pages; no live migration security and operational burdens

Legacy Cloud: VRRP for Neutron L3 router redundancy
Agile Cloud: DVR for network performance and scalability; apps built to handle failure of individual nodes

Legacy Cloud: LACP bonding for compute node network redundancy
Agile Cloud: SR-IOV for network performance; apps built to handle failure of individual nodes

Legacy Cloud: Bring your own (specific) hardware
Agile Cloud: Shared, standard hardware (white boxes) defrayed with tenant chargeback policies

Legacy Cloud: ESXi hypervisor or bare metal as a service (Ironic) to insulate the data plane, and/or separate controllers to insulate the control plane
Agile Cloud: OpenStack reference KVM deployment

A common theme here is features that force you to choose between designing for performance and scalability (such as Neutron DVR) and designing for HA and resiliency (such as VRRP for Neutron L3 agents).
It’s one or the other, so introducing legacy workloads into your existing cloud can conflict with other objectives, such as increasing development velocity.
So what do you do about it?
If you find yourself in this situation, you basically have three choices:

Onboard tenants with legacy workloads and force them to potentially rewrite their entire application stack for cloud
Onboard tenants with legacy workloads into the cloud and hope everything works
Decline to onboard tenants/applications that are not cloud-ready

None of these are great options.  You want workloads to run reliably, but you also want to make the onboarding process easy without imposing large barriers of entry to tenants applications.
Fortunately, there's one more option: split your cloud infrastructure according to the types of workloads, and engineer a platform offering for each. Now, that doesn't necessarily mean a separate cloud.
The main idea is to architect your cloud so that you can provide a legacy-type environment for legacy workloads without compromising your vision for cloud-aware applications. There are two ways to do that:

Set up a separate cloud with an entirely new control plane for associated compute capacity.  This option offers a complete decoupling between workloads, and allows for changes/updates/upgrades to be isolated to other environments without exposing legacy workloads to this risk.
Use compute nodes such as ESXi hypervisor or bare metal (e.g., Ironic) for legacy workloads. This option maintains a single OpenStack control plane while still helping isolate workloads from OpenStack upgrades, disruptions, and maintenance activities in your cloud.  For example, ESXi networking is separate from Neutron, and bare metal is your ticket out of being the bad guy for rebooting hypervisors to apply kernel security updates.

Keep in mind that these aren’t mutually exclusive options; it is possible to do both.  
Of course, each option comes with its own downsides as well; an additional control plane involves additional overhead (to build and operate), and running a mixed hypervisor environment has its own set of engineering challenges, complications, and limitations.  Both options also add overhead when it comes to repurposing hardware.
There's no instant transition
Many organizations get caught up in the “One Cloud To Rule Them All” mentality, trying to make everything the same and work with a single architecture to achieve the needed economies of scale, but ultimately the final decision should be made according to your situation.
It's important to remember that no matter what you do, you will have to deal with a transition period, which means you need to provide a viable path for your legacy tenants/apps to gradually make the switch.  But first, assess your situation:

If your workloads are all of the same type, then there’s not a strong case to offer separate platforms out of the gate.  Or, if you’re just getting started with cloud in your organization, it may be premature to do so; you may not yet have the required scale, or you may be happy with onboarding only those applications which are cloud ready.
When you have different types of workloads with different needs – for example, Telco/NFV vs. Enterprise/IT vs. BigData/IoT workloads – you may want to think about different availability zones inside the same cloud, so the specific nuances of each type can be addressed inside its own zone while maintaining a single cloud from a configuration, life cycle management and service assurance perspective, including having similar hardware. (Having similar hardware makes it easier to keep spares on hand; a minimal sketch of this zoning approach appears after this list.)
If you find yourself in a situation where you want to innovate with your cloud platform, but you still need to deal with legacy workloads with conflicting requirements, then workload segmentation is highly advisable. In this case, you'll probably want to break from the "One Cloud" mentality in favor of the flexibility of multiple clouds. If you try to satisfy both your "innovation" mindset and your legacy workload holders on one cloud, you'll likely disappoint both.

After making this choice, you may then plan your transition path accordingly.
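To make the availability-zone idea above a little more concrete, here is a hedged sketch using OpenStack host aggregates; every name in it is illustrative rather than prescriptive:

# Create a host aggregate exposed as its own availability zone for legacy workloads
openstack aggregate create --zone legacy-az legacy-aggregate
openstack aggregate add host legacy-aggregate compute-legacy-01

# Tenants can then target that zone explicitly when launching legacy instances
openstack server create --availability-zone legacy-az --flavor m1.large --image centos7 legacy-app-01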
Moving forward
Even if you do create a separate legacy cloud, you probably don't want to maintain it in perpetuity.  Think about your transition strategy; a basic and effective carrot and stick approach is to limit new features and cloud-native functionality to your agile cloud, and to bill/chargeback at higher rates in your legacy cloud (which are, at any rate, justified by the costs incurred to provide and support this option).
Whatever you ultimately decide, the most important thing to do is make sure you've planned it out appropriately, rather than just going with the flow, so to speak. If you need to, contact a vendor such as Mirantis; they can help you do your planning and get to production as quickly as possible.
The post One cloud to rule them all – or is it? appeared first on Mirantis | The Pure Play OpenStack Company.
Quelle: Mirantis

Red Hat Confirms Over 40+ Accepted Sessions at OpenStack Summit Barcelona

This Fall's 2016 OpenStack Summit in Barcelona, Spain will be an exciting event. After a challenging issue with the voting system this time around (which somehow prevented direct URLs to each session), the Foundation has posted the final session agenda, detailing the entire week's schedule of sessions and events. Once again, I am excited to see that, based on community voting, Red Hat will be sharing over 40 sessions of technology overviews and deep dives around OpenStack services for containers, storage, networking, compute, network functions virtualization (NFV), and much more.
Red Hat is a Premier sponsor in Barcelona this Fall, and we are looking forward to sharing all of our general sessions, workshops, and a full-day breakout track. To learn more about Red Hat's accepted sessions, have a look at the details below. Be sure to visit us at each session you can make, come by our booth in the Marketplace (which opens Monday evening during the booth crawl, 6-7:30pm), or contact your Red Hat sales representative to meet with any of our executive, engineering, or product leaders face-to-face while in Barcelona. Either way, we look forward to seeing you all again in Spain in October!
For more details on each session, click on the title below:

Tuesday October 25th
General sessions

Deploying and Operating a Production Application Cloud with OpenStack
Chris Wright, Pere Monclus (PLUMgrid), Sandra O'Boyle (Heavy Reading), Marcel Haerry (Swisscom)
11:25am-12:05pm

Delivering Composable NFV Services for Business, Residential & Mobile Edge
 Azhar Sayeed, Sharad Ashlawat (PLUMgrid)
12:15pm-12:55pm

I found a security bug, what happens next?
 Tristan de Cacqueray and Matthew Booth
2:15pm-2:55pm

Failed OpenStack Update?! Now What?
Roger Lopez
2:15pm-2:55pm

OpenStack Scale and Performance Testing with Browbeat
Will Foster, Sai Sindhur Malleni, Alex Krzos
2:15pm-2:55pm

OpenStack and the Orchestration Options for Telecom / NFV
Chris Wright, Tobias Ford (AT&T), Hui Deng (China Mobile), Diego Lopez Garcia (Telefonica)
3:05pm-3:45pm

How to Work Upstream with OpenStack
Julien Danjou, Ashiq Khan (NTT), Ryota Mibu (NEC)
3:05pm-3:45pm

Live From Oslo
Kenneth Giusti, Joshua Harlow (Go Daddy), Oleksii Zamiatin (Mirantis), ChangBo Guo (EasyStack), Alexis Lee (HPE)
3:05pm-3:45pm

OpenStack and Ansible: Automation born in the Cloud
Keith Tenzer
3:05pm-3:45pm

Message Routing: a next-generation alternative to RabbitMQ
Kenneth Giusti, Andrew Smith
3:05pm-3:45pm

Pushing your QA upstream
Rodrigo Duarte Sousa
3:55pm-4:35pm

TryStack: The Free OpenStack Community Sandbox
Will Foster, Kambiz Aghaiepour
3:55pm-4:35pm

Kerberos and Health Checks and Bare Metal, Oh My! Updates to OpenStack Sahara in Newton
Elise Gafford, Nikita Konovalov (Mirantis), Vitaly Gridnev (Mirantis)
5:05pm-5:45pm

Wednesday October 26th

Feeling a bit deprecated? We are too. Let's work together to embrace the OpenStack Unified CLI.
 Darin Sorrentino, Chris Janiszewski
11:25am-12:55pm

The race conditions of Neutron L3 HA's scheduler under scale performance
John Schwarz, Ann Taraday (Mirantis), Kevin Benton (Mirantis)
11:25am-12:55pm

Barbican Workshop – Securing the Cloud
Ade Lee, Douglas Mendizabel (Rackspace), Elvin Tubillara (IBM), Kaitlin Farr (John Hopkins University), Fernando Diaz (IBM)
11:25am-12:55pm

Cinder Always On – Reliability And Scalability Guide
Gorka Eguileor, Michal Dulko (Intel)
12:15pm-12:55pm

OpenStack is an Application! Deploy and Manage Your Stack with Kolla-Kubernetes
Ryan Hallisey, Ken Wronkiewicz (Cisco), Michal Jastrzebski (Intel)
2:15pm-2:55pm

OpenStack Requirements: What we are doing, what to expect and what's next?
 Swapnil Kulkarni and Davanum Srinivas
3:55pm-4:35pm

Stewardship: bringing more leadership and vision to OpenStack
 Monty Taylor, Amrith Kumar (Tesora), Colette Alexander (Intel), Thierry Carrez (OpenStack Foundation)
3:55pm-4:35pm

Using OpenStack Swift to empower Turkcell's public cloud services
 Christian Schwede, Orhan Biyiklioglu (Turkcell) & Doruk Aksoy (Turkcell)
5:05pm-5:45pm

Lessons Learned from a Large-Scale Telco OSP+SDN Deployment
Guil Barros, Cyril Lopez, Vicken Krissian
5:05pm-5:45pm

KVM and QEMU Internals: Understanding the IO Subsystem
Kyle Bader
5:05pm-5:45pm

Effective Code Review
Dougal Matthews
5:55pm-6:35pm

Thursday October 27th

 Anatomy Of OpenStack Through The Eagle Eyes Of Troubleshooters
 Sadique Puthen
9:00am-9:40am

The Ceph Power Show :: Hands-on Lab to learn Ceph, "The most popular Cinder backend"
Brent Compton, Karan Singh
9:00am-9:40am

 Building self-healing applications with Aodh, Zaqar and Mistral
Zane Bitter, Lingxian Kong (Catalyst IT), Fei Long Wang (Catalyst IT)
9:00am-9:40am

 Writing A New Puppet OpenStack Module Like A Rockstar
Emilien Macchi
9:50am-10:30am

 Ambassador Community Report
Erwan Gallen, Kavit Munshi (Aptira), Jaesuk Ahn (SKT), Marton Kiss (Aptira), Akihiro Hasegawa (Bit-isle Equinix, Inc)
9:50am-10:30am

 VPP: the ultimate NFV vSwitch (and more!)?
Franck Baudin, Uri Elzur (Intel)
9:50am-10:30am

 Zuul v3: OpenStack and Ansible Native CI/CD
James Blair
11:00am-11:40am

 Container Defense in Depth
Thomas Cameron, Scott McCarty
11:50am-12:30pm

 Analyzing Performance in the Cloud : solving an elastic problem with a scientific approach
Alex Krzos, Nicholas Wakou (Dell)
11:50am-12:30pm

 One-stop-shop for OpenStack tools
Ruchika Kharwar
1:50pm-2:30pm

 OpenStack troubleshooting: So simple even your kids can do it
Vinny Valdez, Jonathan Jozwiak
1:50pm-2:30pm

 Solving Distributed NFV Puzzle with OpenStack and SDN
Rimma Iontel, Fernando Oliveira (VZ), Rajneesh Bajpai (BigSwitch)
2:40pm-3:20pm

 Ceph, now and later: our plan for open unified cloud storage
Sage Weil
2:40pm-3:20pm

How to configure your cloud to be able to charge your users using official OpenStack components!
Julien Danjou, Stephane Albert (Objectif Libre), Christophe Sauthier (Objectif Libre)
2:40pm-3:20pm

 A dice with several faces: Coordinators, mentors and interns on OpenStack Outreachy internships
Victoria Martinez de la Cruz, Nisha Yadav (Delhi Tech Universty), Samuel de Medeiros Queiroz (HPE)
3:30pm-4:10pm

 Yo dawg I herd you like Containers, so we put OpenStack and Ceph in Containers
 Sean Cohen, Sebastien Han, Federico Lucifredi
3:30pm-4:10pm

 Picking an OpenStack Networking solution
Russell Bryant, Gal Sagie (Huawei), Kyle Mestery (IBM)
4:40pm-5:20pm

Forget everything you knew about Swift Rings – here's everything you need to know about Swift Rings
Christian Schwede, Clay Gerrard (Swiftstack)
5:30pm-6:10pm

Quelle: RedHat Stack

How hybrid cloud management accelerates business

Global CEOs recognize cloud as critical to their business and understand that it is not always easy to manage their cloud infrastructures.
Organizations should be able to jump in and react quickly to changing demands, scale resources on the fly and accelerate performance across diverse resources. Automation and orchestration solutions are critical technologies to address this need. One of the biggest gaps that remains in many orchestration solutions is taking the cloud applications into production, which requires connecting to existing enterprise tools and adhering to existing policies.
"IT organizations need orchestration solutions that can consistently implement service models, governance and policies across complex, heterogeneous environments — including cloud, virtual and legacy infrastructure," according to a 2013 IDC report on orchestration. This is where IBM Cloud Orchestrator can help organizations add value in the cloud. IBM Cloud Orchestrator provides cloud management for your IT services, allowing you to accelerate the delivery of software and infrastructure. It reduces the number of steps to manage public, private and hybrid clouds with an easy-to-use interface based on open standards. It also gives you access to ready-to-use content packs.
Here are some examples of how our clients have benefited from using IBM Cloud Orchestrator:
Fédération Française de Tennis (FFT), which manages and promotes the French Open, wanted to boost the global visibility of the tournament with a secure and cost-effective digital environment. It was looking for a flexible, reliable and high-performance IT infrastructure to manage unpredictable spikes in demand. With IBM Cloud Orchestrator software, FFT was able to automatically optimize workloads, dynamically create and allocate resources in real time, and deliver transparent and real-time access across resources. Jeremy Bottom, General Manager of FFT, said, “Partnering with IBM, we have demonstrated our ability to add to the excitement of tennis fans whether they are in the stands or at home.”
HBL, Pakistan’s largest bank, was looking to provide its enterprise customers the ability to execute high-volume transactions through a cash management portal. IBM Cloud Orchestrator software was used by the application team to manage HBL’s IT environment, orchestrate workloads across the PureApplication System, and oversee hardware, storage, networking and applications across the IT environment. The IBM platform delivered an immediate, up-front savings of approximately $500,000 in storage and hardware-related costs, as compared to conventional hardware and software procurement.
IBM Cloud Orchestrator can help customers rapidly implement more scalable and cost-effective data center management solutions across diverse, heterogeneous applications and infrastructure.
Join us at IBM Edge 2016 to learn how you can simplify and automate your IT infrastructure across the hybrid cloud. Visit our booth to get a brief demo on how you can put this into practice in your organization. Register today.
The post How hybrid cloud management accelerates business appeared first on Cloud computing news.
Quelle: Thoughts on Cloud