Azure N-Series preview availability

Today we’re delighted to announce that Azure N-Series Virtual Machines, the fastest GPUs in the public cloud, are now available in preview. N-Series instances are enabled with NVIDIA’s cutting-edge GPUs, allowing you to run GPU-accelerated workloads and visualize them. These powerful sizes come with the agility you have come to expect from Azure, with per-minute billing.

Our N-Series VMs are split into two categories. With the NC-Series (compute-focused GPUs), you will be able to run compute-intensive HPC workloads using CUDA or OpenCL. This SKU is powered by Tesla K80 GPUs and offers the fastest computational GPU available in the public cloud. Furthermore, unlike other providers, these new SKUs expose the GPUs through discrete device assignment (DDA), which results in close to bare-metal performance. You can now crunch through data much faster with CUDA across many scenarios, including energy exploration applications, crash simulations, ray-traced rendering, deep learning and more. The Tesla K80 delivers 4992 CUDA cores with a dual-GPU design, up to 2.91 teraflops of double-precision and up to 8.93 teraflops of single-precision performance. Following are the Tesla K80 GPU sizes available:

 

|        | NC6                             | NC12                           | NC24                            |
| ------ | ------------------------------- | ------------------------------ | ------------------------------- |
| Cores  | 6 (E5-2690v3)                   | 12 (E5-2690v3)                 | 24 (E5-2690v3)                  |
| GPU    | 1 x K80 GPU (1/2 Physical Card) | 2 x K80 GPU (1 Physical Card)  | 4 x K80 GPU (2 Physical Cards)  |
| Memory | 56 GB                           | 112 GB                         | 224 GB                          |
| Disk   | 380 GB SSD                      | 680 GB SSD                     | 1.44 TB SSD                     |
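As a quick way to confirm that the GPUs are visible from inside an NC VM once the NVIDIA driver is installed, you can query the CUDA driver API directly. The following Python sketch is purely illustrative (it assumes Python is available on the VM and loads libcuda via ctypes, so no extra packages are needed):

```python
import ctypes

# Load the CUDA driver library that ships with the NVIDIA driver.
cuda = ctypes.CDLL("libcuda.so.1")

# cuInit must be called before any other driver API function.
assert cuda.cuInit(0) == 0, "CUDA driver initialization failed"

# Count the GPUs exposed to this VM (1, 2, or 4 K80 GPUs on NC6/NC12/NC24).
count = ctypes.c_int()
assert cuda.cuDeviceGetCount(ctypes.byref(count)) == 0

name = ctypes.create_string_buffer(100)
for dev in range(count.value):
    # Query the marketing name of each device, e.g. "Tesla K80".
    cuda.cuDeviceGetName(name, len(name), dev)
    print("GPU %d: %s" % (dev, name.value.decode()))
```

On an NC24, for example, this should report four devices, since DDA passes both physical K80 cards straight through to the VM.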

In addition to the NC-Series, focused on compute, the NV-Series is focused more on visualization. Data movement has traditionally been a challenge with HPC scenarios using large datasets produced in the cloud. With the Azure NV-Series, you’ll be able to use Tesla M60 GPUs and NVIDIA GRID in Azure for desktop-accelerated applications and virtual desktops. With these powerful visualization GPUs in Azure, you will be able to run graphics-intensive workflows with superior graphics capability and run single-precision workloads such as encoding and rendering. The Tesla M60 delivers 4096 CUDA cores in a dual-GPU design with up to 36 streams of 1080p H.264. Following are the Tesla M60 GPU sizes available:

 

|        | NV6                             | NV12                           | NV24                            |
| ------ | ------------------------------- | ------------------------------ | ------------------------------- |
| Cores  | 6 (E5-2690v3)                   | 12 (E5-2690v3)                 | 24 (E5-2690v3)                  |
| GPU    | 1 x M60 GPU (1/2 Physical Card) | 2 x M60 GPU (1 Physical Card)  | 4 x M60 GPU (2 Physical Cards)  |
| Memory | 56 GB                           | 112 GB                         | 224 GB                          |
| Disk   | 380 GB SSD                      | 680 GB SSD                     | 1.44 TB SSD                     |

We’ve partnered with NVIDIA to deliver these capabilities in Azure, including making sure the virtual machines are optimized to deliver the highest possible performance.

“Azure N-Series now makes GPU computing accessible for modern day Da Vincis, Curies and Einsteins to solve the world’s hardest problems. With over 400 industry applications accelerated by NVIDIA GPUs, Microsoft and NVIDIA are serving the world’s most demanding users and powering amazing experiences in simulation, artificial intelligence and professional visualization.” – Ian Buck, VP of accelerated computing, NVIDIA

This preview offers the first public cloud support for this bleeding-edge specialized hardware. One of our release partners, Teradici, has been validating its PCoIP technology on these instances with fantastic results. In fact, with this preview, you will be able to utilize Teradici’s cutting-edge protocol for VDI-type scenarios, including running desktop-accelerated applications such as AutoCAD and Adobe Premiere Pro.

“We’re thrilled that Microsoft has chosen our trusted, industry-leading Teradici PCoIP technology for its new N-Series instances in the Azure cloud,” said Dan Cordingley, CEO, Teradici. “The combination of PCoIP and Azure N-Series VMs is a winning formula that’s been optimized to deliver flawless user experiences from the cloud, complete with lossless imaging and 100 percent color accuracy. Now, as the visualization layer for Azure N-Series, PCoIP is helping customers further their digital transformation by enabling unprecedented collaboration on cloud datasets and graphically rich applications across multiple geographic regions, complete with the battle-hardened security of a fully encrypted pixel stream.”

Additionally, the technology will enable you to visualize your simulations in real time in Azure, including the ability to modify models and re-run simulations in real time. This really closes the loop on “true HPC in the cloud” and allows customers to run their entire infrastructure and pipeline in Azure.

We’ve been working with various early adopters, including the BAFTA and Emmy award-winning London studio Jellyfish Pictures. They have over 100 artists in London and additional artists across various geographies. They currently face challenges such as giving remote artists access to desktop-accelerated applications whenever they’re not in the studio. Additionally, they’re currently rendering on CPU-based virtual machines in Azure, but with 4K and 8K on the horizon they need GPU-based rendering engines, not only to cut their rendering times many times over but also to get better results.

With Azure N-Series, Jellyfish Pictures’ artists are able to utilize the NV instances to run GPU- and GRID-accelerated applications such as Autodesk Maya. In addition, they’re able to cut their rendering times fivefold by utilizing the NC instances for CUDA-based ray-tracing engines such as Octane and Redshift. This allows the studio to be more efficient and productive by focusing on its business value: producing and delivering cutting-edge visual effects and simulations.

“By using Azure, we can scale dynamically to our production needs on demand. When combined with our on-premises infrastructure, we can run in a seamless hybrid model, allowing us to fill in the gap when our current infrastructure isn’t sufficient. What makes this a really exciting offering is that this applies to both CPU- and GPU-based solutions. The result is that production deadlines are met without ever over-provisioning.” – Jeremy Smith, CTO, Jellyfish Pictures

These customer scenarios are exactly what we hope to address with this new offering. The preview will start in the South Central US region and will expand to additional regions over the next couple of months as we approach general availability before the end of the year.

We encourage you to register your interest in the preview. To learn more about the technology and use cases for N-Series, check out our recent Channel 9 video. Pricing information can be found on the Virtual Machines pricing page.

We’re extremely excited to get the newest members of our virtual machine family into your hands, and we cannot wait to see what new use cases and scenarios you’re able to address with GPUs in Azure.
Source: Azure

The Bet on Kubernetes, a Red Hat Perspective

Editor’s note: Today’s guest post is from Kubernetes contributor Clayton Coleman, architect on OpenShift at Red Hat, sharing Red Hat’s adoption of the project from its beginnings.

Two years ago, Red Hat made a big bet on Kubernetes. We bet on a simple idea: that an open source community is the best place to build the future of application orchestration, and that only an open source community could successfully integrate the diverse range of capabilities necessary to succeed. As a Red Hatter, that idea is not far-fetched – we’ve seen it successfully applied in many communities, but we’ve also seen it fail, especially when a broad reach is not supported by solid foundations. On the one-year anniversary of Kubernetes 1.0, two years after the first open-source commit to the Kubernetes project, it’s worth asking the question:

Was Kubernetes the right bet?

The success of software is measured by the successes of its users – whether that software enables new opportunities or efficiencies for them. In that regard, Kubernetes has succeeded beyond our wildest dreams. We know of hundreds of real production deployments of Kubernetes: in the enterprise through Red Hat’s multi-tenant enabled OpenShift distribution, on Google Container Engine (GKE), in heavily customized versions run by some of the world’s largest software companies, and through the education, entertainment, startup, and do-it-yourself communities. Those deployers report improved time to delivery, standardized application lifecycles, improved resource utilization, and more resilient and robust applications. And that’s just from customers or contributors to the community – I would not be surprised if there were now thousands of installations of Kubernetes managing tens of thousands of real applications out in the wild.

I believe that reach to be a validation of the vision underlying Kubernetes: to build a platform for all applications by providing tools for each of the core patterns in distributed computing. Those patterns are:

simple replicated web software
distributed load balancing and service discovery
immutable images run in containers
co-location of related software into pods
simplified consumption of network attached storage
flexible and powerful resource scheduling
running batch and scheduled jobs alongside service workloads
managing and maintaining clustered software like databases and message queues

Together, these patterns allow developers and operators to move to the next scale of abstraction, just like they have enabled Google and others in the tech ecosystem to scale to datacenter computers and beyond. From Kubernetes 1.0 to 1.3 we have continually improved the power and flexibility of the platform while ALSO improving performance, scalability, reliability, and usability. The explosion of integrations and tools that run on top of Kubernetes further validates core architectural decisions to be composable, to expose open and flexible APIs, and to deliberately limit the core platform and encourage extension.

Today Kubernetes has one of the largest and most vibrant communities in the open source ecosystem, with almost a thousand contributors, one of the highest human-generated commit rates of any single-repository project on GitHub, over a thousand projects based around Kubernetes, and correspondingly active Stack Overflow and Slack channels. Red Hat is proud to be part of this ecosystem as the largest contributor to Kubernetes after Google, and every day more companies and individuals join us.
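To make the first of the patterns listed above – simple replicated web software – concrete, here is a minimal sketch using the Kubernetes Python client. This is our illustration, not code from the original post; it assumes the kubernetes package is installed, a kubeconfig is available, and uses hypothetical names ("web", nginx):

```python
from kubernetes import client, config

config.load_kube_config()  # use the cluster credentials from ~/.kube/config

# Three identical nginx replicas behind one label; the Deployment controller
# keeps the observed state converged on this desired state.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="nginx",
                    image="nginx:1.11",
                    ports=[client.V1ContainerPort(container_port=80)],
                )
            ]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Deleting any one of the three pods causes the controller to recreate it, which is exactly the self-healing, declarative behavior the post describes.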
The idea of Kubernetes found fertile ground, and you, the community, provided the excitement and commitment that made it grow.

So, did we bet correctly? For all the reasons above, and hundreds more: Yes.

What’s next?

Happy as we are with the success of Kubernetes, this is no time to rest! While there are many more features and improvements we want to build into Kubernetes, I think there is a general consensus that we want to focus on the only long-term goal that matters – a healthy, successful, and thriving technical community around Kubernetes. As John F. Kennedy probably said:

> Ask not what your community can do for you, but what you can do for your community

In a recent post to the kubernetes-dev list, Brian Grant laid out a great set of near-term goals – goals that help grow the community, refine how we execute, and enable future expansion. In each of the Kubernetes Special Interest Groups we are trying to build sustainable teams that can execute across companies and communities, and we are actively working to ensure each of these SIGs is able to contribute, coordinate, and deliver across a diverse range of interests under one vision for the project.

Of special interest to us is the story of extension – how the core of Kubernetes can become the beating heart of the datacenter operating system, and enable even more patterns for application management to build on top of Kubernetes, not just into it. Work done in the 1.2 and 1.3 releases around third-party APIs, API discovery, flexible scheduler policy, and external authorization and authentication (beyond those built into Kubernetes) is just the start. When someone has a need, we want them to easily find a solution, and we also want it to be easy for others to consume and contribute to that solution. Likewise, the best way to prove ideas is to prototype them against real needs and to iterate against real problems, which should be easy and natural.

By Kubernetes’ second birthday, I hope to reflect back on a long year of refinement, user success, and community participation. It has been a privilege and an honor to contribute to Kubernetes, and it still feels like we are just getting started. Thank you, and I hope you come along for the ride!

– Clayton Coleman, Contributor and Architect on Kubernetes and OpenShift at Red Hat. Follow him on Twitter and GitHub: @smarterclayton
Source: kubernetes

Happy Birthday Kubernetes. Oh, the places you’ll go!

Editor’s note: Today’s guest post is from an independent Kubernetes contributor, Justin Santa Barbara, sharing his reflections on the growth of the project from inception to its future.

Dear K8s,

It’s hard to believe you’re only one – you’ve grown up so fast. On the occasion of your first birthday, I thought I would write a little note about why I was so excited when you were born, why I feel fortunate to be part of the group that is raising you, and why I’m eager to watch you continue to grow up!

– Justin

You started with an excellent foundation – good declarative functionality, built around a solid API with a well-defined schema and the machinery so that we could evolve going forwards. And sure enough, over your first year you grew so fast: autoscaling, HTTP load-balancing support (Ingress), support for persistent workloads including clustered databases (PetSets). You’ve made friends with more clouds (welcome Azure & OpenStack to the family), and even started to span zones and clusters (Federation). And these are just some of the most visible changes – there’s so much happening inside that brain of yours!

I think it’s wonderful you’ve remained so open in all that you do – you seem to write down everything on GitHub – for better or worse. I think we’ve all learned a lot about that on the way, like the perils of having engineers make scaling statements that are then weighed against claims made without quite the same framework of precision and rigor. But I’m proud that you chose not to lower your standards, but rose to the challenge and just ran faster instead – it might not be the most realistic approach, but it is the only way to move mountains!

And yet, somehow, you’ve managed to avoid a lot of the common dead-ends that other open source software has fallen into, particularly as those projects got bigger and the developers end up working on it more than they use it directly. How did you do that? There’s a probably-apocryphal story of an employee at IBM that makes a huge mistake, and is summoned to meet with the big boss, expecting to be fired, only to be told “We just spent several million dollars training you. Why would we want to fire you?”. Despite all the investment Google is pouring into you (along with Red Hat and others), I sometimes wonder if the mistakes we are avoiding could be worth even more. There is a very open development process, yet there’s also an “oracle” that will sometimes course-correct by telling us what happens two years down the road if we make a particular design decision. This is a parent you should probably listen to!

And so although you’re only a year old, you really have an old soul. I’m just one of the many people raising you, but it’s a wonderful learning experience for me to be able to work with the people that have built these incredible systems and have all this domain knowledge. Yet because we started from scratch (rather than taking the existing Borg code) we’re at the same level and can still have genuine discussions about how to raise you. Well, at least as close to the same level as we could ever be, but it’s to their credit that they are all far too nice ever to mention it!

If I had to pick just two of the wise decisions those brilliant people made:

Labels & selectors give us declarative “pointers”, so we can say “why” we want things, rather than listing the things directly. It’s the secret to how you can scale to great heights; not by naming each step, but saying “a thousand more steps just like that first one”.

Controllers are state-synchronizers: we specify the goals, and your controllers will indefatigably work to bring the system to that state. They work through that strongly-typed API foundation, and are used throughout the code, so Kubernetes is more of a set of a hundred small programs than one big one. It’s not enough to scale to thousands of nodes technically; the project also has to scale to thousands of developers and features, and controllers help us get there.
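To make that state-synchronizer idea concrete, here is a toy reconcile loop in Python – purely our illustration, with hypothetical in-memory stand-ins (list_pods, create_pod, delete_pod) for the real API calls, not Kubernetes code:

```python
import itertools
import time

# Hypothetical in-memory stand-ins for the real Kubernetes API calls.
_pods = []
_counter = itertools.count()

def list_pods():
    """Observe: return the pods that currently exist."""
    return list(_pods)

def create_pod():
    """Act: create one pod."""
    _pods.append("pod-%d" % next(_counter))

def delete_pod(pod):
    """Act: delete one pod."""
    _pods.remove(pod)

def reconcile(desired_replicas):
    """One controller pass: observe the current state, diff it against
    the declared goal, then act to converge."""
    pods = list_pods()
    for _ in range(desired_replicas - len(pods)):
        create_pod()                      # converge upward
    for pod in pods[desired_replicas:]:
        delete_pod(pod)                   # converge downward

# A controller never assumes its last action succeeded; it simply
# re-observes and re-converges, which is what makes the system self-healing.
for _ in range(3):
    reconcile(3)
    print(list_pods())
    time.sleep(1)
```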
And so on we will go! We’ll be replacing those controllers and building more, and the API foundation lets us build anything we can express in that way – with most things just a label or annotation away! But your thoughts will not be defined by language: with third party resources you can express anything you choose. Now we can build Kubernetes without building in Kubernetes, creating things that feel as much a part of Kubernetes as anything else. Many of the recent additions, like Ingress, DNS integration, autoscaling and network policies were done or could be done in this way. Eventually it will be hard to imagine you before these things, but tomorrow’s standard functionality can start today, with no obstacles or gatekeeper, maybe even for an audience of one.

So I’m looking forward to seeing more and more growth happen further and further from the core of Kubernetes. We had to work our way through those phases; starting with things that needed to happen in the kernel of Kubernetes – like replacing replication controllers with deployments. Now we’re starting to build things that don’t require core changes. But we’re still talking about infrastructure separately from applications. It’s what comes next that gets really interesting: when we start building applications that rely on the Kubernetes APIs. We’ve always had the Cassandra example that uses the Kubernetes API to self-assemble, but we haven’t really even started to explore this more widely yet. In the same way that the S3 APIs changed how we build things that remember, I think the k8s APIs are going to change how we build things that think.

So I’m looking forward to your second birthday: I can try to predict what you’ll look like then, but I know you’ll surpass even the most audacious things I can imagine. Oh, the places you’ll go!

– Justin Santa Barbara, Independent Kubernetes Contributor
Source: kubernetes

A Very Happy Birthday Kubernetes

Last year at OSCON, I got to reconnect with a bunch of friends and see what they have been working on. That turned out to be the Kubernetes 1.0 launch event. Even that day, it was clear the project was supported by a broad community – a group that showed an ambitious vision for distributed computing.

Today, on the first anniversary of the Kubernetes 1.0 launch, it’s amazing to see what a community of dedicated individuals can do. Kubernauts have collectively put in 237 person-years of coding effort since launch to bring forward our most recent release, 1.3. However, the community is much more than simply coding effort. It is made up of people – individuals that have given their expertise and energy to make this project flourish. With more than 830 diverse contributors, from independents to the largest companies in the world, it’s their work that makes Kubernetes stand out. Here are stories from a couple of early contributors reflecting back on the project:

Sam Ghods, services architect and co-founder at Box
Justin Santa Barbara, independent Kubernetes contributor
Clayton Coleman, contributor and architect on Kubernetes on OpenShift at Red Hat

The community is also more than online GitHub and Slack conversation; year one saw the launch of KubeCon, the Kubernetes user conference, which started as a grassroots effort that brought together 1,000 individuals between two events in San Francisco and London. The advocacy continues with users globally. There are more than 130 Meetup groups that mention Kubernetes, many of which are helping celebrate Kubernetes’ birthday. To join the celebration, participate at one of the 20 parties worldwide: Austin, Bangalore, Beijing, Boston, Cape Town, Charlotte, Cologne, Geneva, Karlsruhe, Kisumu, Montreal, Portland, Raleigh, Research Triangle, San Francisco, Seattle, Singapore, SF Bay Area, or Washington DC.

The Kubernetes community continues to work to make our project more welcoming and open to our kollaborators. This spring, Kubernetes and KubeCon moved to the Cloud Native Computing Foundation (CNCF), a Linux Foundation project, to accelerate the collaborative vision outlined only a year ago at OSCON … lifting a glass to another great year.

– Sarah Novotny, Kubernetes Community Wonk
Source: kubernetes

Why OpenStack's embrace of Kubernetes is great for both communities

Today, Mirantis, the leading contributor to OpenStack, announced that it will re-write its private cloud platform to use Kubernetes as its underlying orchestration engine. We think this is a great step forward for both the OpenStack and Kubernetes communities. With Kubernetes under the hood, OpenStack users will benefit from the tremendous efficiency, manageability and resiliency that Kubernetes brings to the table, while positioning their applications to use more cloud-native patterns. The Kubernetes community, meanwhile, can feel confident in their choice of orchestration framework, while gaining the ability to manage both container- and VM-based applications from a single platform.

The Path to Cloud Native

Google spent over ten years developing, applying and refining the principles of cloud native computing. A cloud-native application is:

Container-packaged. Applications are composed of hermetically sealed, reusable units across diverse environments;
Dynamically scheduled, for increased infrastructure efficiency and decreased operational overhead; and
Microservices-based. Loosely coupled components significantly increase the overall agility, resilience and maintainability of applications.

These principles have enabled us to build the largest, most efficient, most powerful cloud infrastructure in the world, which anyone can access via Google Cloud Platform. They are the same principles responsible for the recent surge in popularity of Linux containers. Two years ago, we open-sourced Kubernetes to spur adoption of containers and scalable, microservices-based applications, and the recently released Kubernetes version 1.3 introduces a number of features to bridge enterprise and cloud native workloads. We expect that adoption of cloud-native principles will drive the same benefits within the OpenStack community, as well as smoothing the path between OpenStack and the public cloud providers that embrace them.

Making OpenStack better

We hear from enterprise customers that they want to move towards cloud-native infrastructure and application patterns. Thus, it is hardly surprising that OpenStack would also move in this direction [1], with large OpenStack users such as eBay and GoDaddy adopting Kubernetes as key components of their stack. Kubernetes and cloud-native patterns will improve OpenStack lifecycle management by enabling rolling updates, versioning, and canary deployments of new components and features (a brief sketch of a rolling update follows below). In addition, OpenStack users will benefit from self-healing infrastructure, making OpenStack easier to manage and more resilient to the failure of core services and individual compute nodes. Finally, OpenStack users will realize the developer and resource efficiencies that come with a container-based infrastructure.
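As a sketch of what such lifecycle management looks like from the Kubernetes side, a rolling update is just a patch to the desired state. This is our illustration, assuming the kubernetes Python client and a hypothetical existing Deployment named "web" with a container named "nginx":

```python
from kubernetes import client, config

config.load_kube_config()

# Patching the pod template's image triggers a rolling update: Kubernetes
# replaces pods incrementally, keeping the component available throughout.
client.AppsV1Api().patch_namespaced_deployment(
    name="web",
    namespace="default",
    body={"spec": {"template": {"spec": {
        "containers": [{"name": "nginx", "image": "nginx:1.12"}]
    }}}},
)
```

Rolling the change back is the same operation with the previous image, which is what makes canary-style deployments of individual components practical.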
OpenStack is a great tool for Kubernetes users

Conversely, incorporating Kubernetes into OpenStack will give Kubernetes users access to a robust framework for deploying and managing applications built on virtual machines. As users move to the cloud-native model, they will be faced with the challenge of managing hybrid application architectures that contain some mix of virtual machines and Linux containers. The combination of Kubernetes and OpenStack means that they can do so on the same platform using a common set of tools.

We are excited by the ever-increasing momentum of the cloud-native movement as embodied by Kubernetes and related projects, and look forward to working with Mirantis, its partner Intel, and others within the OpenStack community to bring the benefits of cloud-native to their applications and infrastructure.

– Martin Buhr, Product Manager, Strategic Initiatives, Google

[1] Check out the announcement of the Kubernetes-OpenStack Special Interest Group here, and a great talk about OpenStack on Kubernetes by CoreOS CEO Alex Polvi at the most recent OpenStack summit here.
Source: kubernetes

Challenges of a Remotely Managed, On-Premises, Bare-Metal Kubernetes Cluster

Today’s post is written by Bich Le, chief architect at Platform9, describing how their engineering team overcame challenges in remotely managing bare-metal Kubernetes clusters.

Introduction

The recently announced Platform9 Managed Kubernetes (PMK) is an on-premises enterprise Kubernetes solution with an unusual twist: while clusters run on a user’s internal hardware, their provisioning, monitoring, troubleshooting and overall life cycle is managed remotely from the Platform9 SaaS application. While users love the intuitive experience and ease of use of this deployment model, this approach poses interesting technical challenges. In this article, we will first describe the motivation and deployment architecture of PMK, and then present an overview of the technical challenges we faced and how our engineering team addressed them.

Multi-OS bootstrap model

Like its predecessor, Managed OpenStack, PMK aims to make it as easy as possible for an enterprise customer to deploy and operate a “private cloud”, which, in the current context, means one or more Kubernetes clusters. To accommodate customers who standardize on a specific Linux distro, our installation process uses a “bare OS” or “bring your own OS” model, which means that an administrator deploys PMK to existing Linux nodes by installing a simple RPM or Deb package on their favorite OS (Ubuntu-14, CentOS-7, or RHEL-7). The package, which the administrator downloads from their Platform9 SaaS portal, starts an agent which is preconfigured with all the information and credentials needed to securely connect to and register itself with the customer’s Platform9 SaaS controller running on the WAN.

Node management

The first challenge was configuring Kubernetes nodes in the absence of a bare-metal cloud API and SSH access into nodes. We solved it using the node pool concept and configuration management techniques. Every node running the agent automatically shows up in the SaaS portal, which allows the user to authorize the node for use with Kubernetes. A newly authorized node automatically enters a node pool, indicating that it is available but not used in any clusters. Independently, the administrator can create one or more Kubernetes clusters, which start out empty. At any later time, he or she can request one or more nodes to be attached to any cluster. PMK fulfills the request by transferring the specified number of nodes from the pool to the cluster. When a node is authorized, its agent becomes a configuration management agent, polling for instructions from a CM server running in the SaaS application and capable of downloading and configuring software.

Cluster creation and node attach/detach operations are exposed to administrators via a REST API, a CLI utility named qb, and the SaaS-based Web UI. The following screenshot shows the Web UI displaying one 3-node cluster named clus100, one empty cluster clus101, and the three nodes.

Cluster initialization

The first time one or more nodes are attached to a cluster, PMK configures the nodes to form a complete Kubernetes cluster. Currently, PMK automatically decides the number and placement of Master and Worker nodes. In the future, PMK will give administrators an “advanced mode” option allowing them to override and customize those decisions. Through the CM server, PMK then sends to each node a configuration and a set of scripts to initialize each node according to the configuration.
This includes installing or upgrading Docker to the required version; starting two Docker daemons (bootstrap and main); creating the etcd K/V store; establishing the flannel network layer; starting kubelet; and running the Kubernetes components appropriate for the node’s role (master vs. worker). The following diagram shows the component layout of a fully formed cluster.

Containerized kubelet?

Another hurdle we encountered resulted from our original decision to run kubelet in a container, as recommended by the Multi-node Docker Deployment Guide. We discovered that this approach introduces complexities that led to many difficult-to-troubleshoot bugs that were sensitive to the combined versions of Kubernetes, Docker, and the node OS. One example: kubelet’s need to mount directories containing secrets into containers to support the Service Accounts mechanism. It turns out that doing this from inside of a container is tricky, and requires a complex sequence of steps that turned out to be fragile. After fixing a continuing stream of issues, we finally decided to run kubelet as a native program on the host OS, resulting in significantly better stability.

Overcoming networking hurdles

The Beta release of PMK currently uses flannel with the UDP back-end for the network layer. In a Kubernetes cluster, many infrastructure services need to communicate across nodes using a variety of ports (443, 4001, etc.) and protocols (TCP and UDP). Often, customer nodes intentionally or unintentionally block some or all of the traffic, or run existing services that conflict with the required ports, resulting in non-obvious failures. To address this, we try to detect configuration problems early and inform the administrator immediately. PMK runs a “preflight” check on all nodes participating in a cluster before installing the Kubernetes software. This means running small test programs on each node to verify that (1) the required ports are available for binding and listening; and (2) nodes can connect to each other using all required ports and protocols. These checks run in parallel and take less than a couple of seconds before cluster initialization.
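A preflight check of this kind can be as simple as a few socket probes. The following Python sketch is our illustration of the idea, not Platform9’s actual code, and the port list is an assumed subset:

```python
import socket

REQUIRED_TCP_PORTS = [443, 4001]  # illustrative subset of the required ports

def can_bind(port):
    """Verify the port is free for a local service to listen on."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    try:
        s.bind(("", port))
        return True
    except OSError:
        return False
    finally:
        s.close()

def can_connect(host, port, timeout=2.0):
    """Verify a peer node accepts connections on the port."""
    try:
        socket.create_connection((host, port), timeout).close()
        return True
    except OSError:
        return False

for port in REQUIRED_TCP_PORTS:
    print("port %d bindable locally: %s" % (port, can_bind(port)))
```

Running checks like these on every node, in parallel, surfaces firewall and port-conflict problems before any Kubernetes software is installed, rather than as mysterious failures afterward.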
Monitoring

One of the values of a SaaS-managed private cloud is constant monitoring and early detection of problems by the SaaS team. Issues that can be addressed without intervention by the customer are handled automatically, while others trigger proactive communication with the customer via UI alerts, email, or real-time channels. Kubernetes monitoring is a huge topic worthy of its own blog post, so we’ll just briefly touch upon it. We broadly classify the problem into layers: (1) hardware & OS, (2) Kubernetes core (e.g. API server, controllers and kubelets), (3) add-ons (e.g. SkyDNS & ServiceLoadbalancer) and (4) applications. We are currently focused on layers 1–3. A major source of issues we’ve seen is add-on failures. If either DNS or the ServiceLoadbalancer reverse HTTP proxy (soon to be upgraded to an Ingress Controller) fails, application services will start failing. One way we detect such failures is by monitoring the components using the Kubernetes API itself, which is proxied into the SaaS controller, allowing the PMK support team to monitor any cluster resource. To detect service failure, one metric we pay attention to is pod restarts. A high restart count indicates that a service is continually failing.

Future topics

We faced complex challenges in other areas that deserve their own posts: (1) authentication and authorization with Keystone, the identity manager used by Platform9 products; (2) software upgrades, i.e. how to make them brief and non-disruptive to applications; and (3) integration with customers’ external load-balancers (in the absence of good automation APIs).

Conclusion

Platform9 Managed Kubernetes uses a SaaS-managed model to try to hide the complexity of deploying, operating and maintaining bare-metal Kubernetes clusters in customers’ data centers. These requirements led to the development of a unique cluster deployment and management architecture, which in turn led to unique technical challenges. This article described an overview of some of those challenges and how we solved them. For more information on the motivation behind PMK, feel free to view Madhura Maskasky’s blog post.

– Bich Le, Chief Architect, Platform9
Source: kubernetes

Weekly Roundup: Top 5 Most Popular Posts

This week, our readers enjoyed some big news, including the great milestone of making Docker 1.12, Docker for Mac and Docker for Windows generally available for production environments, answers to the ten most frequently asked Docker questions, and more. As we begin a new week, let’s recap our top 5 most-read stories for the week of July 24, 2016:

1. Docker 1.12 Goes GA: Docker 1.12 adds the largest and most sophisticated set of features into a single release since the beginning of the Docker project.
2. Docker for Mac and Windows: Native development environment using hypervisors built into each operating system. (No more VirtualBox!)
3. Docker Questions: The ten most common Docker questions (and answers) asked by IT Admins.
4. Function as a Service: The Function-as-a-Service model and how to generate a function from an image on Docker Hub, via Chanwit Kaewkasi
5. 12 Factor Method: Using the twelve-factor app methodology to Dockerize apps, via Rafael Gomes


Source: https://blog.docker.com/feed/

Announcing Docker 1.12 Hackathon winners

The judges have deliberated, our community has voted, and the results are in! We are happy to announce the top 5 submissions of the Docker 1.12 Hackathon.
In case you missed it, the theme of the hackathon was to build, ship, and run a distributed software application using a release candidate of Docker 1.12. We encouraged participants to hack on the new features included in Docker 1.12, such as Swarm Mode, cryptographic node identity, the service API, and the built-in routing mesh.

A big thank you to our judges (below) for taking the time to review all the submissions! 

Phil Estes – Docker Captain and Senior Technical Staff Member at IBM Cloud Open Technologies
Arun Gupta – Docker Captain and VP of Developer Relations at Couchbase
Laura Frank – Docker Captain and Senior Engineer at Codeship
Mano Marks – Director of Developer Relations at Docker
Mike Coleman – Technical Evangelist at Docker

 

Each of the judges rated the submissions from 1 to 5 stars based on the following criteria:

Fit – Does Docker improve the project or fundamentally enable it?
Efficiency – Is this implementation small in size, easy to transport, quick to start up and run?
Integration – Does the project fit well into other systems, or is it sufficiently complex itself to be its own system?
Transparency – Can other people easily recreate your project now that you’ve shown how? Is your code open source?
Presentation – How well did you present your project in the video? Does the video convey your hack clearly, and do you cover all the important points?
Usefulness – Popular vote on how many people would use your hack. So keep your audience in mind!
Longevity – Can the project be improved or built upon?
Bonus point! – If the submission included contributions and bug fixes to the Docker 1.12 GitHub repository, the final score was allotted one extra point.

The Winners:
1st Place
Authors: Marcos Nils, Jonathan Leibiusky and Jimena Tapia
Title: Whaleprint
What does it do?: Whaleprint makes it possible to use your current DAB files as swarm mode blueprints and shows you in detail exactly which services will be deployed or removed, and how. At the same time, it also handles service update diffs, describing precisely what will change and what the new values will be.
Features:

Preview and apply your DAB changesets
Extend the current DAB format to support MOAR features.
Manage multiple stacks simultaneously
Fetch DABs from a URL
Remove and deploy service stacks entirely
Apply a specific service update through the --target option
Outputs relevant computed stack information like Published ports
Alternatively print complete plan detail instead of changesets only

Built with: Golang, APIs, SwarmKit

Prize: MakerBot Replicator Mini, as well as five Docker Hub private repositories and one extra Docker Cloud node.

 
2nd Place
Authors: Weston McNamee and Stephen Bunn
Title: Swarm CI
What does it do?: SwarmCI is a CI extension as well as (in the future) a stand-alone CI/CD platform. You can use SwarmCI to extend an existing CI system (Bamboo, TeamCity, Jenkins, etc.) in a few steps:
1. Set up a Docker Swarm.
2. Convert existing build tasks, stages, and jobs to a single .swarmci file.
3. Configure a single task in your build to run a single command to delegate work to your Docker Swarm: python -m swarm
Features: SwarmCI solves many of the problems found in conventional CI/CD platforms.

Build tasks run within a Docker container, the image for which is provided by the user, completely isolated from the build agent and the rest of the host. In addition, the starting state of a build does not change between builds, as containers aren’t reused.
Because the Docker image in which tasks run is provided by the user, it can be completely customized. Never wait for an idle agent that has your requirements, or for the ops team to configure an agent with new requirements for you.
Resource waste is decreased because builds run on the Swarm. As long as a node on the Swarm has the available CPU/memory for your build, your build runs. Scale your Swarm up or down to keep waste to a minimum.
With SwarmCI as an extension, you can get isolated, distributed, parallel build tasks without sacrificing the use of additional build agents.

Prize: Apple Watch
 

 
3rd Place
Authors: Chris Crone, Quentin Hocquet and Matthieu Nottale
Title: Infinit
What does it do?: Infinit aggregates the local storage of each Docker Swarm node and provides a shared volume that can be mounted by any Docker service to make it stateful. As nodes are added or removed, the storage rebalances and scales automatically, ensuring availability and increased capacity.
 

Prize: Oculus Rift

4th Place
Authors:  Shrikrishna Holla and Akram Rameez
Title: dq – Task Scheduler for Docker Functions
What does it do?: dq is a task scheduler for Docker functions. It is an amalgamation of the best features of a job scheduler like Resque and a serverless platform like AWS Lambda. It is meant to be self-hosted and called via REST APIs. All tasks run in Docker containers defined in a spec similar to Compose, called dq.yml.
Features:

Call a background task with arguments [Ex: sending email]
Schedule a recurring task [Ex: backing up db]
Run a background service that handles HTTP requests and auto-scales based on load [voting, spam checking]

Built with: JavaScript

 

 
5th Place
Author: Wendy-Anne Daniel
Title: nginx webservices
What does it do?: It provides security testing tools via the Metasploit framework, Scapy, Sulley, and more, on multiple Linux OSes across multiple containers, as well as attack surfaces for trying out exploits in a self-contained environment. With Docker 1.12, these are turned into simple and complex services with inter-service communication.

Prize: Docker swag
Congratulations to our winners and thank you to everybody who participated. You can congratulate the winners and check out the awesome submissions in the Docker 1.12 Hackathon gallery.  
Don’t stop coding just because the competition is over! Update your portfolio to inform your followers about new projects and get feedback from fellow hackers.


Source: https://blog.docker.com/feed/

Introducing Dockercast – the Docker Podcast

Today, we’re thrilled to introduce the official Docker podcast, Dockercast. The Docker and container ecosystem is moving fast, and it can be hard to keep up with the latest projects and features. Podcasts are an efficient medium for getting up to speed with the latest news from the ecosystem on demand. Now you can catch up wherever you are by playing or downloading Docker podcast episodes directly to your phone, laptop or tablet.

In case you missed it, all the DockerCon sessions were published on our YouTube channel last month. However, we realize that not everyone has time to watch dozens of hours of video content, and as a result we’re happy to announce that all DockerCon sessions are now available as podcast episodes!

Going forward, Dockercast will cover a wide range of topics, including products, projects and contributions from active community members and partners, with our host, Docker’s very own John Willis. John is very familiar with how to run podcasts, given that he started his own DevOps Café podcast with Damon Edwards back in 2010!
John Willis (@botchagalupe) is the Director of Ecosystem Development for Docker, which he joined after the company he co-founded (SocketPlane, which focused on SDN for containers) was acquired by Docker in March 2015.
You can find the latest Dockercast episodes on the iTunes Store or via the SoundCloud RSS feed.


Source: https://blog.docker.com/feed/

Weekly Docker Roundup: 5 Stories You Don’t Want to Miss

This week, we take a look at Docker 1.12’s native Swarm Mode integration, debug Dockerized .NET Core apps with VS Code, and learn why Docker CEO Ben Golub is leading the charge as the #1 IT disruptor. As we begin a new week, let’s recap our top 5 most-read stories for the week of July 31, 2016:

 
 
1. Swarm Mode Topology: In this video, Andrea Luzzardi, a software engineer for Docker, goes deep into the topology of Swarm and talks about the background of how Docker accomplished it.
2. Top 25 Disrupters: From CRN’s Top 100 Executives of 2016, the executives leading the charge beyond traditional IT challenges and revolutionizing the market, by CRN.
3. Debug Dockerized .NET Core Apps: A step-by-step tutorial and walkthrough of how to generate a sample .NET Core app and then add Docker debugging support by Chris Myers.  
4. Dockercast: A new podcast focused on a range of Docker topics, including products, projects and contributions from the community.
5. Docker Hackathon Winners: The top 5 winning submissions of the Docker 1.12 Hackathon. Hacked features include Swarm Mode, cryptographic node identity, the service API, and the built-in routing mesh.


Source: https://blog.docker.com/feed/