Rackspace said to be close to private equity buyout

The post Rackspace said to be close to private equity buyout appeared first on Mirantis | The Pure Play OpenStack Company.
Rackspace, one of the founding companies behind OpenStack, is said to be close to a deal with Apollo Global Management to take the company private at a valuation of between $3.4 billion and $4 billion. Rumors swirled around the company in 2014, but it reportedly couldn’t get the price it was looking for, said to be in the neighborhood of $6 billion.
Since then, Rackspace has changed its market strategy, exiting the commodity cloud business and focusing on “managed services,” in which customers pay for resources and for the “fanatical support” the company is known for. That “fanatical support” is now also offered for AWS and Azure. This week Rackspace also sold its Cloud Sites premium hosting business, which is separate from its cloud services and involves sites that start at $150/month, to Liquid Web for an undisclosed sum.
Resources

Is Cloud Provider Rackspace Going Private?
Is the Sale of Rackspace a Done Deal?
Rackspace Nears a Private Equity Buyout, Report says
Rackspace nears buyout; Going private could boost cloud managed services effort
Rackspace on the verge of private equity buyout | SiliconANGLE
Apollo Is Negotiating a Deal to Buy Cloud Company Rackspace
Rackspace warns of hit to UK business in H2 | TechMarketView
Rackspace Q2 report; Cloud Sites business sold to Liquid Web, no other strategic news
Rackspace sells Cloud Sites business to Liquid Web
Rackspace expands its managed security services to Microsoft’s Azure cloud
Rackspace manages security across clouds
Come Hear What We’ve Learned at OpenStack Days: Silicon Valley
4 Winners and 3 Losers in Gartner’s Magic Quadrant for IaaS
Rackspace Reaches OpenStack Leadership Milestone, Six Years and One Billion Server Hours
Why OpenStack is Best as a Service

Quelle: Mirantis

Snapchat Removes Filter After "Yellowface" Criticism

The company said it was supposed to be “anime”.

Everyone knows the best part about Snapchat is the filters. But a new filter was added that many people found offensive. “Yellowface” is when a non-Asian person dresses up or uses makeup (or in this case, a filter) to create a cartoonish exaggeration of stereotypical Asian features – it’s considered offensive in the same way blackface is.

Snapchat told The Verge that the filter was “inspired by anime and meant to be playful.” The filter has been taken out of rotation due to the response.


View Entire List ›

Quelle: BuzzFeed

DAY 1- OPENSTACK DAYS SILICON VALLEY 2016

The post DAY 1- OPENSTACK DAYS SILICON VALLEY 2016 appeared first on Mirantis | The Pure Play OpenStack Company.
THE UNLOCKED INFRASTRUCTURE CONFERENCE
By Catherine Kim

This year’s OpenStack Days Silicon Valley, held once again at the Computer History Museum, carried a theme of industry maturity. As Mirantis CEO and co-founder Alex Freedland said in his introductory remarks, we’ve gone from wondering whether OpenStack would catch on, to wondering where containers fit into the landscape, to talking about production environments of both OpenStack and containers.

Here’s a look at what you missed.
OpenStack: What Next?
OpenStack Foundation Executive Director Jonathan Bryce started the day off talking about the future of OpenStack. He’s been traveling the globe visiting user groups and OpenStack Days events, watching as the technology takes hold in different parts of the world, but his predictions were less about what OpenStack could do and more about what people – and other projects – could do with it.

Standard frameworks, he said, give large numbers of developers the opportunity to create entirely new categories. For example, before the LAMP stack (Linux, Apache, MySQL and PHP), the web was largely made up of static pages, not the dynamic applications we have now. Android and iOS provided common frameworks that enable developers to release millions of apps a year, supplanting purpose-built machinery with a simple smartphone.

To make that happen, though, the community had to do two things: collaborate and scale. Just as the components of LAMP worked together, OpenStack needed to collaborate with other projects, such as Kubernetes, to reach its potential.

As for scaling, Jonathan pointed out that historically, OpenStack has been difficult to set up, and it’s important to make success easier to duplicate. While there are incredible success stories out there, with some users running thousands of nodes, those users originally had to go through a lot of iterations and errors. Going forward, Jonathan felt it was important to share information about the errors that were made, so that others can learn from those mistakes, making OpenStack easier to use.

To that end, the OpenStack Foundation is continuing to produce content to help with specific needs, ranging from explaining the business benefits to a manager to more complex topics such as security. He also talked about the need to grow the talent pool, and about the ability for students to take the Certified OpenStack Administrator exam (or others like it) to prove their capabilities in the market.
User talks
One thing that was refreshing about OpenStack Days Silicon Valley was the number of user-given talks. On day one we heard from Walmart, SAP, and AT&T, all of which have significantly transformed their organizations through the use of OpenStack.

OpenStack, Sean Roberts explained, enabled Walmart to build applications that can heal themselves, with failure scenarios that include rules for how to recover from those failures. In particular, WalmartLabs, the online arm of the company, has been making great strides with OpenStack, and in particular with a devops tool called OneOps. The tool makes it possible for them to manage their large number of nodes easily, and he suggested that it might do even better as an independent project under OpenStack.

Markus Riedinger talked about SAP and how it had introduced OpenStack. After making 23 acquisitions in a short period of time, the company was faced with a diverse infrastructure that didn’t lend itself to collaboration. In the last few years it has begun to move towards cloud-based work, and in 2013 it started to move towards using OpenStack. Now the company has a container-based OpenStack structure based on Puppet, providing a clean separation of control and data, and a fully automated system with embedded analytics and pre-manufactured PODs for capacity extension. Their approach means that 1-2 people can take a data center from commissioned bare metal to an operational, scalable Kubernetes cluster running a fully configured OpenStack platform in less than a day.

Greg Stiegler discussed AT&T’s cloud journey, and open source and OpenStack at AT&T. He said that the rapid advancements in mobile data services have resulted in numerous benefits, and in turn this has exploded network traffic, which is expected to grow 10 times by 2020. To facilitate this growth, AT&T needed a platform, with a goal of remaining as close to trunk as possible to reduce technical debt. The result is the AT&T Integrated Cloud. Sorabh Saxena spoke about it at the OpenStack Summit in Austin earlier this year, but new today was the notion that the community effort should have a unified roadmap leader, a strategy around containers that still needs to be fully developed, and a rock-solid core tent.

Greg finished up by saying that while AT&T doesn’t expect perfection, it does believe that OpenStack needs to be continually developed and strengthened. The company is grateful for what the community has always provided, and in return AT&T has contributed a dedicated community team. Greg felt that the moral of his story was that community collaboration brings solutions at a faster rate, while weeding out mistakes through the experiences of others.
What venture capitalists think about open source
Well, that got your attention, didn’t it? It got the audience’s attention too, as Martin Casado, a general partner at Andreessen Horowitz, started his talk by saying that the current prevailing wisdom is that infrastructure is dead. Why? Partly because people don’t understand what the cloud is, and partly because they think that if the cloud is free, there’s nothing left to invest in. Having looked into it, he thinks that view is dead wrong, and even believes that newcomers now have an unfair advantage.

Martin (who in a former life was the creator of the “software defined” movement through the co-founding of SDN maker Nicira) said that for this talk, something is “software defined” if you can implement it in software and distribute it in software. For example, in the consumer space, dedicated GPS devices have largely been replaced by software applications like Waze, which can be distributed to millions of phones, which themselves can run diverse apps to replace many functions that used to be “wrapped in sheet metal”.

He argued that infrastructure is following the same pattern. It used to be that the only common interface was the internet, or IP, but we have since seen a maturation of software that allows you to insert core infrastructure as software. Martin said that right now is one of those rare times when the market is sufficient to build a company around a product that consists entirely of software. (You still, however, need a sales team, sorry.)

The crux of the matter, though, is that the old model for open source has changed. The old model for open source companies was to be a support company; now, many companies use open source to access customers and gain credibility, while the actual commercial offering they have is a service. Companies doing this, such as GitHub (which didn’t even invent Git), have been enormously successful.
And now a word from our sponsors…
The morning included several very short “sponsor moments,” two of which included very short tech talks.

The third was Michael Miller of SUSE, who was joined onstage by Boris Renski from Mirantis. Together they announced that Mirantis and SUSE would collaborate to provide support for SLES as both hosts and guests in Mirantis OpenStack, which already supports Ubuntu and Oracle Linux.

“At this point, there is only one conspicuous partner missing from this equation,” Renski said. Not to worry, he continued. SUSE has an expanded support offering, so in addition to supporting SUSE hosts, through the new partnership, Mirantis/SUSE customers with CentOS and RHEL hosts can also get support. “Mirantis is now a one-stop shop for supporting OpenStack.”

Meanwhile, Sujal Das, SVP of Marketing for Netronome, discussed networking and security, and the many industry reports that highlight the importance of zero-trust defense security, in which each VM and application needs to be trusted. OpenStack enables centralized control and automation in these types of deployments, but there are some challenges when using OVS and connection tracking, which affect VMs and the efficiency of the server. Ideally, you would like line-rate performance, but Netronome ran tests showing that you do not get that performance with zero-trust security and OpenStack. Netronome is working on enhancements and adaptations to assist with this.

Finally, Evan Mouzakitis of Datadog gave a great explanation of how you can look more closely at events that happen when you are using OpenStack, to see not only what happened, but why. Evan explained that OpenStack uses RabbitMQ by default for message passing, and that once you can listen to that, you know a lot more about what’s happening under the hood, and a lot more about the events that are occurring. (Hint: go to http://dtdg.co/nova-listen.)
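To make the idea concrete, here is a minimal sketch (my own illustration, not Evan's code) of listening to OpenStack notifications on RabbitMQ. The broker URL, the `nova` exchange name, and the `notifications.info` routing key are assumptions that vary by deployment, and the `pika` client library is assumed to be installed.

```python
# Sketch: eavesdropping on OpenStack's RabbitMQ notifications.
# Assumptions (adjust to your deployment): broker on localhost,
# topic exchange 'nova', routing key 'notifications.info'.
import json

def summarize_notification(body):
    """Pull out the fields that answer 'what happened, and when?'."""
    msg = json.loads(body)
    envelope = msg.get("oslo.message")
    if envelope:  # oslo.messaging wraps the real message in an envelope
        msg = json.loads(envelope)
    return {
        "event": msg.get("event_type"),
        "service": msg.get("publisher_id"),
        "when": msg.get("timestamp"),
    }

def consume(rabbit_url="amqp://guest:guest@localhost:5672/%2F"):
    import pika  # third-party: pip install pika
    conn = pika.BlockingConnection(pika.URLParameters(rabbit_url))
    ch = conn.channel()
    # Bind a throwaway queue to the notifications topic
    q = ch.queue_declare(queue="", exclusive=True).method.queue
    ch.queue_bind(exchange="nova", queue=q, routing_key="notifications.info")
    ch.basic_consume(q, lambda c, m, p, body: print(summarize_notification(body)),
                     auto_ack=True)
    ch.start_consuming()

# Demo on a hand-made sample payload (no broker needed):
sample = json.dumps({"event_type": "compute.instance.create.end",
                     "publisher_id": "compute.node1",
                     "timestamp": "2016-08-10 12:00:00"})
print(summarize_notification(sample)["event"])
```

Running `consume()` against a live control plane would then print one summary per Nova event as instances are created, resized, or deleted.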
Containers, containers, containers
Of course, the main thrust was OpenStack and containers, and there was no shortage of content along those lines.
Craig McLuckie of Google and Brandon Philips of CoreOS sat down with Sumeet Singh of AppFormix to talk about the future of OpenStack, namely the integration of OpenStack and Kubernetes. Sumeet started this discussion swiftly, asking Craig and Brandon “If we have Kubernetes, why do we need OpenStack?”

Craig said that enterprise needs hybrids of technologies, and that there is a lot of alignment between the two technologies, so  both can be useful for enterprises. Brandon also said that there’s a large incumbent of virtual machine users and they aren’t going to go away.

There’s a lot of integration work, but also a lot of other work to do as a community. Some of it is the next level of abstraction – one of those things is rallying together to help software vendors arrive at a set of common standards for describing packages. Craig also believed that there’s a good opportunity to think about brokering of services and lifecycle management.

Craig also mentioned that he felt we need to start thinking about how to bring the OpenStack and Cloud Native Computing foundations together, and how to create working groups that span the two foundations’ boundaries.

In terms of using the two together, Craig said that in his experience, enterprises usually ask what it looks like to use the two. As people start to understand the different capabilities, they shift towards them, but it’s all very new, so it’s quite speculative right now.

Finally, Florian Leibert of Mesosphere, Andrew Randall of Tigera, Ken Robertson of Apcera, and Amir Levy of Gigaspaces sat down with Jesse Proudman of IBM to discuss “The Next Container Standard”.

Jesse started off the discussion by talking about how rapidly OpenStack has developed, and how in two short years containers have penetrated the marketplace. He questioned why that might be.

Some of the participants suggested that a big reason for their uptake is that containers are easy to adopt and help eliminate inefficiencies, so customers can easily see how well this fast-moving field serves their requirements.

A number of participants felt that containers are another wonderful tool for getting the job done, and that they’ll see more innovations down the road. Florian pointed out that containers were around before Docker; what Docker has done is put containers within reach of individual developers. Containers are just one part of an evolution.

As for Cloud Foundry vs. Mesos or Kubernetes, most of the participants agreed that standard orchestration has allowed us to step higher up the stack, and that the underlying tools can be used together – as long as you use the right models. Amir argued that there is no need to side with one specific technology: there will always be new technologies around the corner, and whatever we see today will be different tomorrow.

Of course there’s the question of whether these technologies are complementary or competitive. Florian argued that it came down to religion, and that over time companies will often evolve to be very similar to one another. But if it is a religious decision, then who is making that decision?

The panel agreed that it is often the developers themselves who make decisions, but that eventually companies will choose to deliberately use multiple platforms or they will make a decision to use just one.

Finally, Jesse asked the panel about how the wishes of companies for a strong ROI affects OpenStack, leading to a discussion about the importance of really strong use cases, and showing customers how OpenStack can improve speed or flexibility.
Coming up
So now we head into day 2 of the conference, where it’s all about thought leadership, community, and user stories. Look for commentary from users such as Tapjoy and thought leadership from voices such as James Staten of Microsoft, Luke Kanies of Puppet, and Adrian Cockcroft of Battery Ventures.

 

Quelle: Mirantis

Mirantis and SUSE: Creating a One-Stop Shop for OpenStack Support on Major Linux Distros

The post Mirantis and SUSE: Creating a One-Stop Shop for OpenStack Support on Major Linux Distros appeared first on Mirantis | The Pure Play OpenStack Company.
This week, at OpenStack Silicon Valley, Mirantis and SUSE announced a partnership that will make Mirantis a “one-stop shop” for Mirantis OpenStack supported on all major Linux distributions. Mirantis does not ship a Linux distribution, but rather works with Linux distribution vendors on support of the underlying Linux operating systems. This partnership positions SUSE as Mirantis’ strategic enterprise Linux partner, providing Mirantis OpenStack customers with enterprise-grade SLAs for SUSE Linux Enterprise Server (SLES), Red Hat Enterprise Linux (RHEL), and CentOS. For ongoing support of Mirantis OpenStack running on Ubuntu, Mirantis and Canonical have had a collaborative relationship for several years, jointly supporting large customers like AT&T.
As a pure-play OpenStack provider committed to freedom of choice, Mirantis will leverage this partnership to provide more flexibility to customers, reducing lock-in. While “one-stop shop” used to mean that a single technology vendor offered all the components of the solution, services, and support, Mirantis’ one-stop shop is about being a single source of support, services, and expertise to help customers in their cloud transformation journey using a wide range of certified, best-of-breed technology selections. This approach is hugely valuable to large customers who may be broadly committed to a Linux distribution, but don’t want to be locked into that choice, or limited in choosing other best-of-breed data center technologies to work with OpenStack.
Mirantis and SUSE will begin engineering collaboration upstream to fine-tune Mirantis OpenStack on SUSE Linux Enterprise, leading to a certified, supported solution for customers. Additional upstream and downstream engineering and support collaboration will accelerate Mirantis taking on front-line L1 and L2 support for the entire solution, while SUSE provides L3 support for SLES, RHEL and CentOS.
Being free to run OpenStack on a preferred Linux distro is a big deal for enterprises, touching on every aspect of reliability, security, performance, usability, interoperability, and cost. In the past, such freedom has been hard to come by in the OpenStack space, because supporting production OpenStack on multiple spins requires both broad and specialized expertise. In some cases, vendors such as Red Hat have touted the value of Linux and OpenStack being “co-engineered,” effectively promoting lock-in. Mirantis has historically taken the opposite approach: as a pure-play OpenStack provider, we think of OpenStack as an application that should run on any host OS (or in containers, as our recent announcement about Kubernetes makes clear). This new partnership will help us deliver that kind of freedom of choice and reassurance to OpenStack customers in the real world.
Quelle: Mirantis

Your Docker Agenda in August

From webinars to workshops to conference talks, check out our list of events coming up in August!

North America | South America | Europe | Asia | Oceania | Africa | Official Docker Training Courses
 


Official Docker Training Courses
View the full schedule of instructor-led training courses here! Descriptions of the courses are below.

Docker Datacenter Training Series
Introduction to Docker
Docker Administration and Operations
Advanced Docker Operations
Managing Container Services with Universal Control Plane
Deploying Docker Datacenter
User Management and Troubleshooting UCP

North America
 
Aug 3rd: Docker Meetup at Docker HQ – San Francisco, CA
Come and join us at Docker HQ on Wednesday for our 47th meetup! Ben Bonnefoy, a member of the Docker technical staff, will give an insight into Docker for Mac and Docker for Windows, and then Nishant Totla, a software engineer on the core open source team, will give some updates on Docker 1.12. This will be followed by a talk by Neil Gehani, a Sr. Product Manager at HPE, on in-cluster testing. It will be a fun evening of learning, exchanging ideas and networking, with pizza, beer and plenty of Docker stickers for everyone.
RSVP
Aug 3rd: Docker Meetup at Meltmedia – Tempe, AZ
This meetup will focus on Docker for AWS, specifically running distributed apps from localhost to AWS.
RSVP
Aug 4th: Docker Meetup at Rackspace – Austin, TX
A discussion about Docker Tips and Tricks.
RSVP
Aug 9th: Docker Meetup at CA Technologies – Denver, CO
A talk about moving from SaaS to on-premise with Docker: in particular, how Docker made it possible to deploy a SaaS web application into firewalled networks, and a journey of orchestrating a microservice architecture, from raw bash scripts to Replicated.
RSVP
Aug 11th: Docker Meetup at Full Sail Campus – Orlando, FL
Docker Ecosystem and Use Case talks, followed by networking.
RSVP
Aug 11th: Docker Meetup at Braintree – Chicago, IL
Ken Sipe will take the group through the anatomy of a container, including control groups (cgroups) and namespaces. Then there will be a discussion of Java’s memory management and GC characteristics, and how JRE characteristics change based on core count.
RSVP
Aug 16th: Docker Meetup at AEEC Innovation Lab – Alexandria, VA
Docker Captain, Phil Estes, will present.
RSVP
Aug 16th: Docker Meetup at Datastax – Santa Clara, CA
Databases, Image Management, In-cluster and Chaos Testing talks by Baruch Sadogursky, Ben Bromhead and Neil Gehani.
RSVP
Aug 16th: Docker Meetup at Impact Hub – Santa Barbara, CA
This meetup will be about leveraging Docker + Compose for a real world dev environment. James Brown from Invoca will discuss how the move to Docker has benefited their development process.
RSVP
Aug 18th: Docker Meetup at CirrusMio – Lexington, KY
Come and learn how others are using Docker! There will be two demos/talks scheduled for this meetup. The first will be about using Jenkins to build containers and the second will be about Docker in production.
RSVP
Aug 18th: Docker Meetup in Minneapolis – Minneapolis, MN
The Container Summit City Series comes to Minneapolis on August 18th to continue the conversation surrounding containers in production! Bryan Cantrill, CTO of Joyent, will be joined by other expert users from companies that have been running containers in production for years and have experience with what solution stacks work best and what pitfalls to avoid.
RSVP
Aug 22nd: Docker Meetup at Issuetrak – Virginia Beach, VA
Bret Fisher will tell all about DockerCon 2016 and what’s in store for Docker 1.12.
Aug 22nd – 24th: LinuxCon/ContainerCon – Toronto, Canada
There are plenty of us at LinuxCon/ContainerCon this year! Come see us at our booth to meet the Docker speakers and pick up your swag.
Aug 23rd: Docker and NATS Cloud Native Meetup During LinuxCon – Toronto, Canada
The Docker Toronto meetup group and the Toronto NATS Cloud Native and IoT meetup group are joining forces to bring you a mega-meetup during LinuxCon! Riyaz Faizullabhoy from Docker will present on ‘The Update Framework’ and Diogo Monteiro will discuss implementing microservices with NATS. Raffi Der Haroutiounian will give an overview of NATS, Docker and microservices.
Aug 23rd: Docker Meetup at the Iron Yard – Houston, TX
Join us for our next meetup event!
RSVP
Aug 24th: Docker Meetup at CodeGuard – Atlanta, GA
Talk by Eldon Stegall entitled ‘Abusing The Bridge: Booting a baremetal cluster from a docker container.’
RSVP
Aug 28th – 31st: VMworld 2016 US – Las Vegas, NV
Docker returns to VMworld this year, in Las Vegas! We’re launching our newest and biggest booth yet, so be sure to catch us at our booth. Yes, there will be swag given away.
Aug 31st: Docker Meetup in Salt Lake City – Salt Lake City, UT
Come for a tutorial on new Docker 1.12 features and a review of DockerCon 2016 by Ryan Walls.
RSVP

South America
 
Aug 4th: Docker Meetup at Globant – Córdoba, Argentina
Come for a talk on Docker for AWS. Talks by Florencia Caro, Ruben Dopazo, Carlos Santiago Moreno and Luis Barrueco.
RSVP
Aug 6th: Docker Meetup at Universidad Interamericana de Panamá – Panamá, Panama
An introduction to Docker and Docker Cluster.
RSVP
Aug 9th: Docker Meetup at VivaReal – Sao Paulo, Brazil
RSVP
Aug 13th: Docker Meetup at Microsoft Peru – Lima, Peru
Join us for a DockerCon recap.
RSVP
Aug 20th: Docker Meetup at Auditório-Unijorge Campus Comércio – Salvador, Brazil
This is the beginning of the Docker Tour: the Docker Salvador meetup group’s initiative to spread Docker technology among IT students in Salvador. This event will have two lectures for beginners, where they can install the tool and learn Docker at ease in a friendly environment.
RSVP
Aug 23rd: Docker Meetup at Auditório Tecnopuc – Porto Alegre, Brazil
A meetup to discuss PHP and Docker.
RSVP

Europe
 
Aug 3rd: Docker HandsOn – Meet-Repeat C#+1 – Hamburg, Germany
Aug 4th: Docker Meetup at SkyScanner Glasgow – Glasgow, United Kingdom
What’s new in Docker Land (@rawkode and @GJTempleton). Guy and I will be walking you through all the latest developments in Docker Land, including Docker Engine 1.12, Docker Compose 1.8, and Docker for Mac and Windows. As well as these Docker updates, we’ll be providing a quick review of DockerCon 2016 and highlighting some of the best talks for you to watch in your own time.
RSVP
Aug 8th: Docker Talk at the Golang UK Conference – London, UK
Speaker: Docker Captain Tiffany Jernigan
Aug 9th: IoT RpiCar and ASP.NET Core + Docker – Bucharest, Romania
Aug 10th: Docker Meetup at KWORKS – Istanbul, Turkey
Dockerizing a Complex Application Stack [w/Istanbul DevOps]
Aug 24th: Docker Meetup at Pipedrive – Tallinn, Estonia
Let’s share and discuss our experiences with the Docker ecosystem. More details on the content coming up!
RSVP
Aug 24th: Docker Meetup at Elastx – Stockholm, Sweden
Continuously Deploying Containers to a Docker Swarm Cluster. Speaker: Viktor Farcic, Docker Captain & Senior Consultant, CloudBees. Abstract: Many of us have already experimented with Docker – for example, running one of the pre-built images from Docker Hub. It is possible that your team might have recognized the benefits that Docker, in conjunction with experimentation, provides in building microservices, and the advantages the technology could bring to development, testing, integration, and, ultimately, production.
RSVP
Aug 25th: Day of Containers – Stockholm, Sweden
Andrey Devyatkin and Viktor Farcic (Docker Captain) will give a talk, “Docker 101.” If you are new to Docker, this session is for you! In this session you will learn all the basics of Docker and its main components. We will go through the concept of containers, writing your own Dockerfiles, connecting data volumes, and basic orchestration with Compose and Swarm. Bring your laptops!
Aug 28th: Docker Meetup at Praqma – Copenhagen, Denmark
Continuously Deploying Containers to a Docker Swarm Cluster. Speaker: Viktor Farcic, Docker Captain & Senior Consultant, CloudBees. Abstract: Many of us have already experimented with Docker – for example, running one of the pre-built images from Docker Hub. It is possible that your team might have recognized the benefits that Docker, in conjunction with experimentation, provides in building microservices, and the advantages the technology could bring to development, testing, integration, and, ultimately, production.
RSVP
Aug 28th: Docker Talk at Agile Peterborough – Peterborough, UK
Speaker: Docker Captain Alex Ellis
Aug 28th: Docker Pre-Conference Meetup – Praqma, Copenhagen
Speaker: Docker Captain Viktor Farcic
Aug 29th: Docker Meetup at Praqma – Copenhagen, Denmark
Laura Frank (Docker Captain) – “Stop being lazy and test your software.” Testing software is necessary, no matter the size or status of your company. Introducing Docker to your development workflow can help you write and run your testing frameworks more efficiently, so that you can always deliver your best product to your customers, and there are no excuses for not writing tests anymore. Jan Krag – “Docker 101.” If you are new to Docker, this session is for you! In this session you will learn all the basics of Docker and its main components.
Viktor Farcic (Docker Captain)

Aug 31st: Docker Meetup at INCUBA – Aarhus, Denmark
Rohde & Schwarz will give a talk about how they use Docker for development and test. HLTV.org will give a talk about how they use Docker to easily deploy microservices as part of their web platform.
RSVP
Aug 31st – Sep 2nd: Software Circus – Amsterdam, Netherlands
In Amsterdam for Software Circus? So is Docker! Speaking from Docker: Ben Firshman

Asia
 
Aug 20th: Docker Meetup at Red Hat India Pvt. Ltd – Bangalore, India
Docker for AWS and Azure – Neependra Khare (Docker Captain), CloudYuga. Service Discovery and Load Balancing with Docker Swarm – Ajeet Singh Raina (Docker Captain), Dell. Docker Application Bundle Overview – Thomas Chacko. Logging as a Service Using Docker – Manoj Goyal, Cisco. SDN-Like App Delivery Controller Using Docker Swarm – Prasad Rao, Avi Networks.
RSVP

Oceania 
Aug 1st: Docker Meetup in Auckland – Auckland, New Zealand
Learn about all the new Docker features and offerings announced at DockerCon16 in Seattle!
RSVP
Aug 8th: Docker Meetup at Commbank – Sydney, Australia
The Big Debate: AWS vs Azure vs Google Cloud vs EMC Hybrid Cloud. One of the questions will help bring to light each platform’s integration with the Docker ecosystem.
RSVP

Africa
Aug 6th: Docker Meetup at LakeHub – Kisumu, Kenya
Please join us to learn about all the exciting announcements from DockerCon! Talk 1: What’s New in Docker 1.12, by William Ondenge. In this presentation, William will describe Docker 1.12’s new features and help you get your hands on the latest builds of Docker to try them on your own.
RSVP
Quelle: https://blog.docker.com/feed/

In cloud’s ‘second wave,’ hybrid is the innovator’s choice

In the cloud business, there’s plenty of “tech talk” about APIs, containers, object storage and any number of other IT topics. I don’t discount its value, but my view of cloud is a little different, because my job begins and ends with IBM clients’ success in adopting cloud, nothing more or less. As a result, I […]
The post In cloud’s ‘second wave,’ hybrid is the innovator’s choice appeared first on Thoughts On Cloud.
Quelle: Thoughts on Cloud

AMPLab postdoc Julian Shun wins the ACM Doctoral Dissertation Award

I am very pleased to announce that Julian Shun has been awarded the ACM’s Doctoral Dissertation Award for his 2015 CMU doctoral thesis “Shared-Memory Parallelism Can Be Simple, Fast, and Scalable,” which also won that year’s CMU SCS distinguished dissertation award.
 
Julian currently works with me as a postdoc both in the Department of Statistics and in the AMP Lab in the EECS Department and is supported by a Miller Fellowship. His research focuses on fundamental theoretical and practical questions at the interface between computer science and statistics for large-scale data analysis. He is particularly interested in all aspects of parallel computing, especially parallel graph processing frameworks, algorithms, data structures and tools for deterministic parallel programming; and he has developed Ligra, a lightweight graph processing framework for shared memory.
 
More details can be found in the official ACM announcement.
 
Join me in congratulating Julian!
Source: Amplab Berkeley

CACM Article on Randomized Linear Algebra

Each month the Communications of the ACM publishes an invited "Review Article" paper chosen from across the field of Computer Science. These papers are intended to describe new developments of broad significance to the computing field, offer a high-level perspective on a technical area, and highlight unresolved questions and future directions. The June 2016 issue of CACM contains a paper by AMPLab researcher Michael Mahoney and his colleague Petros Drineas (of RPI, soon to be at Purdue). The paper, "RandNLA: Randomized Numerical Linear Algebra," describes how randomization offers new benefits for large-scale linear algebra computations such as those that underlie much of the machine learning developed in the AMPLab and elsewhere.
Randomized Numerical Linear Algebra (RandNLA), a.k.a. Randomized Linear Algebra (RLA), is an interdisciplinary research area that exploits randomization as a computational resource to develop improved algorithms for large-scale linear algebra problems. From a foundational perspective, RandNLA has its roots in theoretical computer science, with deep connections to mathematics (convex analysis, probability theory, metric embedding theory), applied mathematics (scientific computing, signal processing, numerical linear algebra), and theoretical statistics. From an applied "big data" or "data science" perspective, RandNLA is a vital new tool for machine learning, statistics, and data analysis. In addition, well-engineered implementations of RandNLA algorithms, e.g., Blendenpik, have already outperformed highly optimized software libraries for ubiquitous problems such as least-squares.
Other RandNLA algorithms have good scalability in parallel and distributed environments. In particular, AMPLab postdoc Alex Gittens has led an effort, in collaboration with researchers at Lawrence Berkeley National Laboratory and Cray, Inc., to explore the trade-offs of performing linear algebra computations such as RandNLA low-rank CX/CUR/PCA/NMF approximations at scale using Apache Spark, compared to traditional C and MPI implementations on HPC platforms, on LBNL's supercomputers versus distributed data center computations. As Alex describes in more detail in his recent AMPLab blog post, this project outlines Spark's performance on some of the largest scientific data analysis workloads ever attempted with Spark, including using more than 48,000 cores on a supercomputing platform to compute the principal components of a 16TB atmospheric humidity data set.
Here are the key highlights from the CACM article on RandNLA.
1. Randomization isn't just used to model noise in data; it can be a powerful computational resource for developing algorithms with improved running times and stability properties, as well as algorithms that are more interpretable in downstream data science applications.
2. To achieve the best results, random sampling of elements or columns/rows must be done carefully; but random projections can be used to transform or rotate the input data to a random basis where simple uniform random sampling of elements or rows/columns can be successfully applied.
3. Random sketches can be used directly to get low-precision solutions to data science applications, or they can be used indirectly to construct preconditioners for traditional iterative numerical algorithms to get high-precision solutions in scientific computing applications.
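To make the third highlight concrete, here is a minimal NumPy sketch of the textbook sketch-and-solve idea for least-squares (my own illustration, not code from the article): a Gaussian random projection compresses a tall problem into a much smaller one whose solution approximates the exact answer to low precision.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tall least-squares problem: minimize ||A x - b||_2.
m, n = 10_000, 20
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + 0.01 * rng.standard_normal(m)

# Exact solution, for reference.
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)

# Sketch-and-solve: apply a random Gaussian projection S with s << m rows,
# then solve the much smaller sketched problem min ||S A x - S b||_2.
s = 40 * n  # sketch size; a modest multiple of n suffices in practice
S = rng.standard_normal((s, m)) / np.sqrt(s)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

# Low-precision but cheap: the sketched solve touches an s x n matrix
# instead of the original m x n one.
rel_err = np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact)
print(rel_err)
```

Used as a preconditioner for an iterative solver instead (as Blendenpik does), the same sketch yields high-precision solutions.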
More details on RandNLA can be found by clicking here for the full article and the associated video interview.
Source: Amplab Berkeley

Scientific Matrix Factorizations In Spark at Scale

The canonical example of matrix decompositions, the Principal Components Analysis (PCA), is ubiquitous, with applications in many scientific fields including neuroscience, genomics, climatology, and economics. Increasingly, the data sets available to scientists are in the range of hundreds of gigabytes or terabytes, and their analyses are bottlenecked by the computation of the PCA or related low-rank matrix decompositions like the Non-negative Matrix Factorization (NMF). The sheer size of these data sets necessitates distributed analyses, and Spark is a natural candidate for implementing them. Together with my collaborators at Berkeley (Aditya Devarakonda, Michael Mahoney, and James Demmel) and with teams at Cray, Inc. and NERSC's Data Analytic Services group, I have been investigating the performance of Spark at computing scientific matrix decompositions.
We used MLlib and ml-matrix to implement three scientifically useful decompositions: PCA, NMF, and a randomized CX decomposition for column subset selection. After implementing the same algorithms in MPI using standard linear algebra libraries, we characterized the runtime gaps between Spark and MPI. We found that the Spark implementations are 2x to 26x slower than the MPI implementations of the same algorithms. This highlights that there are still opportunities to improve the current support for large-scale linear algebra in Spark.
While there are well-engineered, high-quality HPC codes for computing the classical matrix decompositions in a distributed fashion, these codes are often difficult to deploy without specialized knowledge and skills. In some subfields of scientific computation, like numerical partial differential equations, such knowledge and skills are de rigueur, but in others, the majority of scientists lack the expertise to apply these tools to their problems. As an example, despite the availability of the CFSR data set (a collection of samples of three-dimensional climate variables collected at 3- to 6-hour intervals over the course of 30+ years), climatologists have largely limited their analyses to 2D slices because of the difficulties involved in loading the entire data set. The PCA decompositions we computed in the course of our investigations are the first time that three-dimensional principal components have been extracted from a terabyte-scale subset of this data set.
What do we mean by "scientific" matrix decompositions? There is a large body of work on using Spark and similar frameworks to compute low-precision stochastic matrix decompositions that are appropriate for machine learning and general statistical analyses. The communication patterns and precision requirements of these decompositions can differ from those of the classical matrix decompositions used in scientific analyses, like the PCA, which is typically computed to high precision. We note that randomized algorithms like CX and CUR are also of scientific value, as they can be used to extract interpretable low-rank decompositions with provable approximation guarantees.
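To illustrate why CX is interpretable, here is a small NumPy sketch of a standard leverage-score formulation of CX (an assumption for illustration, not our production code): columns are sampled with probability proportional to their leverage scores, and the approximation A ≈ CX is built from actual columns of A, which retain their original physical meaning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Data matrix with an underlying rank-k structure plus small noise.
m, n, k = 500, 100, 5
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n)) \
    + 0.01 * rng.standard_normal((m, n))

# Column leverage scores from the top-k right singular vectors; they sum to k.
_, _, Vt = np.linalg.svd(A, full_matrices=False)
lev = np.sum(Vt[:k, :] ** 2, axis=0)

# Sample c columns with probability proportional to leverage.
c = 20
cols = rng.choice(n, size=c, replace=False, p=lev / lev.sum())
C = A[:, cols]  # actual, interpretable columns of A

# X = C^+ A completes the approximation A ≈ C X.
X = np.linalg.pinv(C) @ A
rel_err = np.linalg.norm(A - C @ X) / np.linalg.norm(A)
print(rel_err)
```

Because C consists of real columns (e.g., real spectral channels in the MSI data), a domain scientist can inspect which columns the leverage scores deemed influential.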
There are multiple advantages to implementing these decompositions in Spark. The first, and most attractive, is Spark's accessibility to scientists who do not have prior experience with distributed computing. Just as importantly, unlike traditional MPI-based codes, which assume that the data is already present on the computational nodes and require manual checkpointing, Spark provides an end-to-end system with sophisticated IO support and automatic fault tolerance. However, because of Spark's bulk synchronous parallel programming model, we know that the performance of Spark-based matrix decomposition codes will lag behind that of MPI-based codes, even when implementing the same algorithm.
To better understand the trade-offs inherent in using Spark to compute scientific matrix decompositions, we focused on three matrix decompositions motivated by three particular scientific use-cases: PCA, for the analysis of the aforementioned climatic data sets; CX (a randomized column subset selection method), for an interpretable analysis of a mass spectrometry imaging (MSI) data set; and NMF, for the analysis of a collection of sensor readings from the Daya Bay high energy physics experiment. The datasets are described in Table 1.
Table 1: Descriptions of the data sets used in our experiments.
We implemented the same PCA, NMF, and CX algorithms in C+MPI and in Spark. Our data sets are all "tall-and-skinny," highly rectangular matrices. We used H5Spark, a Spark interface to the HDF5 library developed at NERSC, to load the HDF5 files. The end-to-end (including IO) compute times for the Spark and MPI codes are summarized in Table 2 for different levels of concurrency.
Table 2: summary of Spark and MPI run-times.
The MPI codes range from 2 to 26 times faster than the Spark codes for PCA and NMF. The performance gap is lowest for the NMF algorithm at the highest level of concurrency, and highest for the PCA algorithm at the highest level of concurrency. Briefly, this difference is due to the fact that our NMF algorithm makes one pass over the dataset, so IO is the dominant cost, and this cost decreases as concurrency increases. On the other hand, the PCA algorithm is an iterative algorithm (Lanczos), which makes multiple passes over the dataset, so the dominant cost is due to synchronization and scheduling; these costs increase with the level of concurrency.
Figure 1: Spark overheads.
Figure 1 summarizes some sources of Spark overhead during each task. Here, "task start delay" measures the time from the start of the stage to the time the task reaches the executor; "scheduler delay" is the time between a task being received and being deserialized, plus the time between result serialization and the driver receiving the task's completion message; "task overhead time" measures the time spent waiting on fetches, executor deserialization, shuffle writing, and serializing results; and "time waiting until stage end" is the time spent waiting for all other tasks in the stage to finish.
Figure 2: Run times for rank 20 PCA decomposition of the 2.2TB ocean temperature data set on varying number of nodes.
To compute the PCA, we use an iterative algorithm that computes a series of distributed matrix-vector products until convergence to the decomposition is achieved. As the rank of the desired decomposition rises, the number of required matrix-vector products increases. Figure 2 shows the run times for the MPI and Spark codes when computing a rank-20 PCA of the 2.2TB ocean temperature data set on NERSC's Cori supercomputer, as the level of parallelism increases from 100 nodes (3,200 cores) to 500 nodes (16,000 cores). The overheads are reported as the sum over all stages of the mean overheads for the tasks in each stage. The remaining buckets are computational stages common to both the Spark and MPI implementations. We can see that the dominant cost of computing the PCA here is the cost of synchronizing and scheduling the distributed matrix-vector products, and these costs increase with the level of concurrency. Comparing just the compute phases, Spark is less than 4 times slower than MPI at all levels of concurrency, with the gap decreasing as concurrency increases.
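To see why this algorithm is pass-intensive, here is a single-node NumPy sketch of block power (subspace) iteration, a simpler relative of the Lanczos method we used (the synthetic matrix and iteration count are assumptions for illustration): each iteration is one round trip over the data, which in the distributed setting means one more round of synchronization and scheduling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic matrix with a decaying spectrum (an assumption so that the
# iteration converges quickly in this toy example).
m, n, k = 1000, 200, 10
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(0.9 ** np.arange(n)) @ V.T

# Each iteration needs A @ Q and then A.T @ (that product): one full pass
# over the data per step, mirroring the distributed matrix-vector products.
Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
for _ in range(100):
    Q, _ = np.linalg.qr(A.T @ (A @ Q))  # re-orthonormalize each step

# Rayleigh-Ritz step: project A onto the converged subspace, then SVD.
_, s, _ = np.linalg.svd(A @ Q, full_matrices=False)

# Compare the leading singular value against a direct SVD.
s_exact = np.linalg.svd(A, compute_uv=False)
rel_err = abs(s[0] - s_exact[0]) / s_exact[0]
print(rel_err)
```

Requesting a higher rank k, or a matrix with a flatter spectrum, demands more iterations, and on Spark each extra iteration pays the scheduling and synchronization overheads shown in Figure 1.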
Figure 3: Run times for computing a rank 20 decomposition of the 16TB atmospheric humidity data set on 1522/1600 nodes.
Figure 3 shows the run times for the MPI and Spark codes at a finer level of granularity for the largest PCA run: computing a rank-20 decomposition of a 16TB atmospheric humidity matrix on 1,600 of the 1,630 nodes of the Cori supercomputer at NERSC. MPI scaled successfully to all 51,200 cores, while Spark managed to launch 1,522 of the requested executors, scaling to 48,704 cores. The running time for the Spark implementation is 1.2 hours, while that for the MPI implementation is 2.7 minutes. Thus, for this PCA algorithm, Spark's performance is not comparable to MPI's; we note that the algorithm we implemented is the same as the most scalable PCA algorithm currently available in MLlib.
Figure 4: Run times for NMF decomposition of the 1.6TB Daya Bay data set on a varying number of nodes.
By way of comparison, the Spark NMF implementation scales much better. Figure 4 gives the run time breakdowns for NMF run on 50, 100, and 300 nodes of Cori (1,600, 3,200, and 9,600 cores, respectively). This implementation uses a slightly modified version of the tall-skinny QR algorithm (TSQR) available in ml-matrix to reduce the dimensionality of the input matrix, and computes the NMF on the resulting much smaller matrix locally on the driver. The TSQR is computed in one pass over the matrix, so the synchronization and scheduling overheads are minimized and the dominant cost is the IO. The large task start delay in the 50-node case is due to the fact that the cluster is not large enough to hold the entire matrix in memory.
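The one-pass structure of TSQR can be sketched in NumPy (a single-level reduction tree under assumed toy dimensions, not the ml-matrix code): each partition computes a local QR, and only the small R factors are combined, so the tall matrix is read exactly once.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tall-and-skinny matrix, stored as row blocks (as an RDD would partition it).
m, n, n_blocks = 4000, 30, 8
A = rng.standard_normal((m, n))
blocks = np.array_split(A, n_blocks, axis=0)

# TSQR, single reduction level: QR each block locally, then QR the stacked
# n x n R factors. Only the small R's ever cross the network.
local_Rs = [np.linalg.qr(blk)[1] for blk in blocks]
_, R = np.linalg.qr(np.vstack(local_Rs))

# R satisfies R^T R = A^T A, so the tiny n x n factor carries everything a
# driver-local solve (e.g., the small NMF step) needs about the column space.
print(np.allclose(R.T @ R, A.T @ A))
```

Because the reduction combines only n x n matrices, the pattern is IO-bound rather than synchronization-bound, which matches the NMF scaling behavior in Figure 4.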
The performance of the CX decomposition falls somewhere in between that of the PCA and the NMF, because the algorithm involves only a few passes (5) over the matrix. Our investigation into the breakdown of the Spark overheads (and how they might be mitigated with more carefully designed algorithms) is ongoing. We are also collaborating with climate scientists at LBL on the analysis of the three-dimensional climate trends extracted from the CFSR data.
Source: Amplab Berkeley