Announcing Docker Birthday #4: Spreading the Docker Love!

Community is at the heart of Docker. Thanks to the hard work of thousands of maintainers, contributors, Captains, mentors, organizers, and the entire Docker community, the Docker platform is now used in production by companies of all sizes and industries.
To show our love and gratitude, it has become a tradition for Docker and our awesome network of meetup organizers to host Docker Birthday meetup celebrations all over the world. This year the celebrations will take place during the week of March 13-19, 2017. Come learn, mentor, celebrate, eat cake, and take an epic selfie!
Docker Love
We wanted to hear from the community about why they love Docker!
Wellington Silva, Docker São Paulo meetup organizer, said, “Docker changed my life. I used to spend days compiling and configuring environments. Then I used to spend hours setting up using VMs. Nowadays I set up an environment in minutes, sometimes in seconds.”

Love the new organization of commands in Docker 1.13!
— Kaslin Fields (@kaslinfields) January 25, 2017
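For context on that tweet: Docker 1.13 reorganized the CLI into management command groups while keeping the older top-level commands as aliases. A few illustrative pairs (a sketch; the placeholder names are your own image or container IDs):

```shell
# Docker 1.13 grouped subcommands under management commands;
# the older forms continue to work as aliases.
docker container ls        # previously: docker ps
docker image ls            # previously: docker images
docker image rm <name>     # previously: docker rmi <name>
docker system prune        # new in 1.13: remove unused data
```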

Docker Santo Domingo organizer Victor Recio said, “Docker has increased my effectiveness at work; I can now deploy software to a production environment without worrying that it will not work when the delivery takes place. I love Docker and I'm very grateful for it. Whenever I can share my knowledge about Docker with the young people of the communities of my country, I do, and I am proud that there are already startups that have reached a Silicon Valley level.”

We love docker here at @Harvard for our screening platform. https://t.co/zpp8Wpqvk5
— Alan Aspuru-Guzik (@A_Aspuru_Guzik) January 12, 2017

Docker Birthday Labs
At the local birthday 4 meetups, there will be Docker labs and challenges to help attendees at all levels and welcome new members into the community. We’re partnering with CS schools, non-profit organizations, and local meetup groups to throw a series of events around the world. While the courses and labs are geared towards newcomers and intermediate level users, advanced and expert community members are invited to join as mentors to help attendees work through the materials.
Find a Birthday meetup near you!
There are already 44 Docker Birthday 4 celebrations scheduled around the world with more on the way! Check back as more events are announced.

Thursday, March 9th

Fulda, Germany

Saturday, March 11th

Madurai, India

Sunday, March 12th

Mumbai, India

Monday, March 13th

Dallas, TX
Grenoble, France
Liège, Belgium
Luxembourg, Luxembourg

Tuesday, March 14th

Austin, TX
Berlin, Germany
Las Vegas, NV
Malmö, Sweden
Miami, FL

Wednesday, March 15th

Columbus, OH
Istanbul, Turkey
Nantes, France
Phoenix, AZ
Prague, Czech Republic
San Francisco, CA
Santa Barbara, CA
Singapore, Singapore

Thursday, March 16th

Brussels, Belgium
Budapest, Hungary
Dhahran, Saudi Arabia
Dortmund, Germany
Iráklion, Greece
Montreal, Canada
Nice, France
Saint Louis, MO
Stuttgart, Germany
Tokyo, Japan
Washington, DC

Saturday, March 18th

Delhi, India
Hermosillo, Mexico
Kanpur, India
Kisumu, Kenya
Novosibirsk, Russia
Porto, Portugal
Rio de Janeiro, Brazil
Thanh Pho Ho Chi Minh, Vietnam

Monday, March 20th

London, United Kingdom
Milan, Italy

Thursday, March 23rd

Dublin, Ireland

Wednesday, March 29th

Colorado Springs, CO
Ottawa, Canada

Want to help us organize a Docker Birthday celebration in your city? Email us at meetups@docker.com for more information!
Are you an advanced Docker user? Join us as a mentor!
We are recruiting a network of mentors to attend the local events and help guide attendees through the Docker Birthday labs. Mentors should have experience working with Docker Engine, Docker Networking, Docker Hub, Docker Machine, Docker Orchestration and Docker Compose. Click here to sign up as a mentor.

Excited to LearnDocker at the 4th Docker Birthday! Join your local edition: http://dockr.ly/2jXcwz8 Click To Tweet

The post Announcing Docker Birthday 4: Spreading the Docker Love! appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Introduction to Salt and SaltStack

The amazing world of configuration management software is really well populated these days. You may already have looked at Puppet, Chef or Ansible, but today we focus on SaltStack. Simplicity is at its core, without any compromise on speed or scalability; in fact, some users have 10,000 minions or more. In this article, we're going to give you a look at what Salt is and how it works.
Salt architecture
Salt remote execution is built on top of an event bus, which makes it unique. It uses a server-agent communication model in which the server is called the salt master and the agents are called salt minions.
Salt minions receive commands simultaneously from the master and contain everything required to execute commands locally and report back to the salt master. Communication between master and minions happens over a high-performance data pipe that uses ZeroMQ or raw TCP, and messages are serialized using MessagePack to enable fast and light network traffic. Salt uses public keys for authentication with the master daemon, then uses faster AES encryption for payload communication.
States are described using YAML, remote execution is available from the CLI, and programming or extending Salt isn't required.
Salt is heavily pluggable; each function can be replaced by a plugin implemented as a Python module. For example, you can replace the data store, the file server, authentication mechanism, even the state representation. So when I said state representation is done using YAML, I’m talking about the Salt default, which can be replaced by JSON, Jinja, Wempy, Mako, or Py Objects. But don’t freak out. Salt comes with default options for all these things, which enables you to jumpstart the system and customize it when the need arises.
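To make the default renderer concrete, here is a minimal sketch of a state file: the Jinja expression is evaluated first, and the result is then parsed as YAML (the file name and contents are illustrative):

```yaml
# /srv/salt/motd.sls -- Jinja is rendered first, then parsed as YAML
/etc/motd:
  file.managed:
    - contents: "Welcome to {{ grains['id'] }}"   # grain lookup via Jinja
    - mode: 644
```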
Terminology
It's easy to be overwhelmed by the obscure vocabulary that Salt introduces, so here are the main Salt concepts that make it unique.

salt master – sends commands to minions
salt minions – receive commands from the master
execution modules – ad hoc commands
grains – static information about minions
pillar – secure user-defined variables stored on the master and assigned to minions (equivalent to data bags in Chef or Hiera in Puppet)
formulas (states) – representation of a system configuration; a grouping of one or more state files, possibly with pillar data and configuration files or anything else that defines a neat package for a particular application
mine – area on the master where results from minion-executed commands can be stored, such as the IP address of a backend webserver, which can then be used to configure a load balancer
top file – matches formulas and pillar data to minions
runners – modules executed on the master
returners – components that inject minion data into another system
renderers – components that run templates to produce valid state or configuration files. The default renderer uses Jinja2 syntax and outputs YAML files.
reactor – component that triggers reactions on events
thorium – a new kind of reactor, which is still experimental
beacons – small pieces of code on the minion that listen for events such as server failure or file changes. When a beacon registers one of these events, it informs the master. Beacons combined with reactors are often used for self-healing.
proxy minions – components that translate the Salt language into device-specific instructions in order to bring a device to the desired state using its API, or over SSH
salt cloud – command to bootstrap cloud nodes
salt ssh – command to run commands on systems without minions

You’ll find a great overview of all of this on the official docs.
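To tie a few of these concepts together, a minimal tree under /srv/salt might pair a state file with a top file that assigns it to minions (the state and minion names below are illustrative):

```yaml
# /srv/salt/top.sls -- the top file maps minions to formulas/states
base:
  '*':                  # glob: every minion
    - common
  'saltstack-m01':      # a specific minion id
    - webserver

# /srv/salt/common.sls -- a trivial state applied to all minions
tmux:
  pkg.installed
```

Applying the top file to all targeted minions is then a single command: `salt '*' state.apply`.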
Installation
Salt is built on top of lots of Python modules: msgpack, YAML, Jinja2, MarkupSafe, ZeroMQ, Tornado, PyCrypto and M2Crypto are all required. To keep your system clean and easily upgradable, and to avoid conflicts, the easiest installation workflow is to use system packages.
Salt packaging is operating-system specific; in the examples in this article, I'll be using Ubuntu 16.04 (Xenial Xerus); for other operating systems, consult the salt repo page. For simplicity's sake, you can install the master and the minion on a single machine, and that's what we'll be doing here. Later, we'll talk about how you can add additional minions.

To install the master and the minion, execute the following commands:
$ sudo su
# apt-get update
# apt-get upgrade
# apt-get install curl wget
# echo "deb [arch=amd64] http://apt.tcpcloud.eu/nightly xenial tcp-salt" > /etc/apt/sources.list
# wget -O - http://apt.tcpcloud.eu/public.gpg | sudo apt-key add -
# apt-get clean
# apt-get update
# apt-get install -y salt-master salt-minion reclass

Finally, create the directory where you’ll store your state files.
# mkdir -p /srv/salt

You should now have Salt installed on your system, so check to see if everything looks good:
# salt --version
You should see a result something like this:
salt 2016.3.4 (Boron)

Alternative installations
If you can’t find packages for your distribution, you can rely on Salt Bootstrap, which is an alternative installation method; see the Salt Bootstrap documentation for further details.
Configuration
To finish your configuration, you'll need to execute a few more steps:

If you have firewalls in the way, make sure you open both port 4505 (the publish port) and port 4506 (the return port) on the Salt master so the minions can talk to it.
Now you need to configure your minion to connect to your master. Edit the file /etc/salt/minion.d/minion.conf and change the following lines as indicated below:

# Set the location of the salt master server. If the master server cannot be
# resolved, then the minion will fail to start.
master: localhost

# If multiple masters are specified in the 'master' setting, the default behavior
# is to always try to connect to them in the order they are listed. If random_master is
# set to True, the order will be randomized instead. This can be helpful in distributing
# the load of many minions across many masters.

# Explicitly declare the id for this minion to use, if left commented the id
# will be the hostname as returned by the python call: socket.getfqdn()
# Since salt uses detached ids it is possible to run multiple minions on the
# same machine but with different ids, this can be useful for salt compute
# clusters.
id: saltstack-m01

# Append a domain to a hostname in the event that it does not exist.  This is
# useful for systems where socket.getfqdn() does not actually result in a
# FQDN (for instance, Solaris).
#append_domain:

As you can see, we're telling the minion where to find the master so it can connect; in this case, it's just localhost, but if that's not the case for you, you'll want to change it. We've also given this particular minion an id of saltstack-m01; that's a completely arbitrary name, so you can use whatever you want. Just make sure to substitute it in the examples!
Before you can play around, you'll need to restart the Salt services to pick up the changes:
# service salt-minion restart
# service salt-master restart

Make sure services are also started at boot time:
# systemctl enable salt-master.service
# systemctl enable salt-minion.service

Before the master can do anything on the minion, the master needs to trust it, so you'll need to accept each minion's key. First, list the keys:
# salt-key
Accepted Keys:
Denied Keys:
Unaccepted Keys:
saltstack-m01
Rejected Keys:

Before accepting it, you can validate that it looks good. First, inspect it:
# salt-key -f saltstack-m01
Unaccepted Keys:
saltstack-m01:  98:f2:e1:9f:b2:b6:0e:fe:cb:70:cd:96:b0:37:51:d0

Then compare it with the minion key:
# salt-call --local key.finger
local:
98:f2:e1:9f:b2:b6:0e:fe:cb:70:cd:96:b0:37:51:d0

It looks the same, so go ahead and accept it:
# salt-key -a saltstack-m01

Repeat this process of installing salt-minion and accepting the keys to add new minions to your environment. Consult the minion documentation for more details on configuring minions, or the general Salt configuration documentation for all configuration options.
Remote execution
Now that everything's installed and configured, let's make sure it's actually working. The first, most obvious thing we can do with our master/minion infrastructure is run a command remotely. For example, we can test whether the minion is alive by using the test.ping command:
# salt 'saltstack-m01' test.ping
saltstack-m01:
   True
As you can see here, we're calling salt and feeding it a specific minion, plus a command to run on that minion. We could, if we wanted to, send this command to more than one minion. For example, we could send it to all minions:
# salt '*' test.ping
saltstack-m01:
   True
In this case, we have only one, but if there were more, salt would cycle through all of them giving you the appropriate response.
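The same pattern works for any execution module, and targeting goes beyond globs. The module and grain names below are standard Salt, though the output will of course depend on your own minions:

```shell
# Run an arbitrary shell command on every minion
salt '*' cmd.run 'uptime'

# Target by grain: only Ubuntu minions
salt -G 'os:Ubuntu' pkg.install tmux

# Read a single grain from one specific minion
salt 'saltstack-m01' grains.get os
```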
So that should get you started. Next time, we'll look at some of the more complicated things you can do with Salt.
The post Introduction to Salt and SaltStack appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

The next generation of IT issue resolution

For years, search engines have been the first place to go to look for information, from who batted cleanup for the 1979 Pittsburgh Pirates (Willie Stargell) to how to optimize WebSphere Application Servers.
While search engines have always been a great source of general information, they were not personalized for you and your situation. Until now.
The cognitive era is here, which means you can apply natural language interaction, moving beyond generic trends and standard benchmarks to get real-time, specifically targeted data and insights. It was this concept that drove our third Connect to Cloud Cognitive Build team to create the new Cognibot application.
IT operations managers around the world deal daily with the realities of trying to keep their environment not only up and running, but also optimized to the best of their ability. Industry experts have studied the four phases of IT service management when it came to repairing problems:

Mean time to identify
Mean time to know
Mean time to fix
Mean time to verify

The team noticed that while there was a great deal of information available for identifying a problem, there was significantly less available to recommend what to do to fix it.
That brings us back to search engines. In many cases, IT managers look at system reports and attempt to search online for possible causes and resolutions. Not only is that approach time consuming, but what they find is not tailored for their specific issue.
What if someone applied cognitive services to the problem?
The Cognibot project does just that. The team has created an interactive service that combines the knowledge of what one has already done to fix a problem with the real data from the specific IT environment. It then adds Watson capabilities so users can ask natural-language questions and get responses customized for their specific issue.
Here's an example: suddenly your IT department gets flagged with an issue in your WebSphere Application Server deployment. Normally, an IT subject matter expert gets called in to identify the issue, search for a solution, then execute the recovery plan. What if we could streamline that process? Your IT department still gets the notification, but instead of searching for the answer, your Cognibot interface has already analyzed your real data, researched fixes that have worked in the past, and recommended a solution that will work in this case. All you have to do is click “accept” to enact the fix, helping drastically reduce your mean time to repair. Now that's a cognitive solution.
Interested in learning more? Join us over the next few weeks as we track the team’s progress toward creating the first IT Operations consultant.
See how we are helping clients take advantage of the digital economy.
The post The next generation of IT issue resolution appeared first on news.
Source: Thoughts on Cloud

See the latest in multi-cloud management at InterConnect

I can’t wait for IBM InterConnect 2017, where we will preview the new hybrid cloud management platform we are building for our clients. Why? Read on.
More companies are investing in hybrid cloud strategies. Many businesses have a multi-cloud strategy. And that strategy must include a cloud management platform that agnostically manages any cloud so developers are empowered to innovate. And they need it soon. IDC predicts that by 2018, 65 percent of companies will have a management platform for self-service automation that powers developers.
It’s my belief that today, most cloud services are managed in workload and platform silos. That needs to change. It is imperative that companies embrace a holistic approach to cloud management for service reliability, cost, and accessibility. IT leaders need to treat all cloud services as if they are one unified environment, and eliminate the multiple and redundant tools used to manage the cloud.
That’s why at IBM InterConnect 2017, we will preview the new cloud management platform we are building for our clients. We’re planning to show a comprehensive, multi-cloud management solution for IT operations and developers. And with the cognitive capabilities of IBM Watson, we’ll show how to use operational analytics across multiple cloud providers to optimize and govern public and private clouds.

InterConnect will feature IT leaders who will share their automation and hybrid cloud successes and lessons learned. Here are just a few of the sessions and client stories at InterConnect:
Session: How Royal KPN leveraged IBM cloud technologies for automation and insourcing of operations work

With cloud automation, Royal KPN cut costs significantly and improved service level agreements by delivering a standardized process to IT operations. Learn how they achieved quality and speed of service, with policy-based governance and controls in the process flows such as management approvals.
Session: Hybrid cloud management: Trends, opportunities and IBM’s strategy

In this session, we will have analysts, IBM experts, and clients discuss multicloud management trends and directions along with real use cases and IBM’s role in shaping the future of cloud management.
Session: IT as business in Swiss Re: ITSM processes using BPM in IBM Cloud Orchestrator

Swiss Re automated and standardized internal IT processes to reduce delivery time and increase flexibility. Join Swiss Re to discover the growing importance of BPM as a business driver for IT to better harvest the benefits resulting from digital transformation.
There's much more to explore. Join us at InterConnect and learn how you can manage complex, multicloud environments with ease using cloud-agnostic management tools to reduce costs and improve your time-to-value. Our subject matter experts and executives will meet with business leaders. We’re also running a hands-on demo lab where attendees can see first-hand what the new cloud management platform looks like and how it applies to their job role.
Learn more about InterConnect and register today.
The post See the latest in multi-cloud management at InterConnect appeared first on news.
Source: Thoughts on Cloud

Cloud and cognitive technologies are transforming the future of retail

Though the retail industry is rapidly changing, one fact remains constant: the customer is king.
Some 35,000 attendees made their way to the National Retail Federation’s “Big Show” (NRF) at New York’s Javits Center last month for a first-hand look into the future of retail. Talk of digital transformation created buzz on and off the show floor.
Just south of the show at the IBM Bluemix Garage in Soho, some of the industry’s revolutionary leaders gathered for a roundtable discussion on how cloud and cognitive technologies are becoming an integral part of how retailers reach and meet shoppers’ expectations.
Attendees included Staples CTO Faisal Masud; Shop Direct CIO Andy Wolfe; Retail Systems Research analyst Brian Kilcourse; Forbes retail and consumer trends contributor Barbara Thau; The New Stack journalist Darryl Taft; IBM Bluemix Garages Worldwide Director Shawn Murray; IBM Blockchain program director Eileen Lowry; and Pace University clinical professor of management and Entrepreneurship Lab director Bruce Bachenheimer. The group took a close look at how retailers experiment with new ways to give customers what they want and drive that transformation using cloud and cognitive computing.
Consumers drive tech adoption
Retail is a famously reactive business; it’s slow to adopt new technologies and innovation. However, in today’s consumer-driven age, retailers must quicken their pace, often setting aside internal strategies to tune into consumers’ demands and adopt the technology necessary to keep up.
Yet that’s often not the case. The IBM 2017 Customer Experience Index study found that 84 percent of retail brands offered no in-store mobile services and 79 percent did not give associates the ability to access a customer’s account information via a mobile device. These are key services for a seamless customer experience.
Retailers must capture the attention of consumers armed with smartphones and tablets. They are comparing product prices and reading reviews on social networks all the time. The hyper-connected consumer is the new norm, and understanding and engaging with them in real time is essential.
What customers really want
While retailers are busy selling, customer expectations are changing by the second. Retail is now about providing high-quality, engaging experiences. Forward-thinking retailers use cloud infrastructure and AI-powered innovations such as cognitive chatbots to amplify and augment, not replace, the core human element of retail.
For example, for a retail recommendations strategy, Masud said that Watson Conversation on IBM Cloud helped Staples discover a gap between what the company assumed customers wanted and what they actually wanted. When Staples worked with IBM to develop its “Easy Button” virtual assistant, Masud said, “We thought we would just be making recommendations for more office supplies based on their purchases.”
What Staples found was that customers were also seeking solutions to help track administrative details in their office. “They wanted us to remember things for them like the catering company they used or the name of the person who signs for the delivery,” Masud said.
A cloud-powered, cognitive technology solution provides clear benefits for Staples. As it continues to learn customer orders and preferences, the office-supply-ordering tool continues to improve its predictive and order-recollection capability, making it more valuable to users for everyday tasks. Staples can bring the on-demand world to customers, allowing them to order anytime, anywhere and from any device.
“The one thing customers want is ease,” added Shop Direct CIO Andy Wolfe. He noted people want to easily shop online from whatever device or online channel they prefer. Shop Direct is the UK's second largest pure-play digital retailer.
Retailers must have actionable insights derived from backend systems data such as supply chains, as well as the data that customers produce and share.
Shop Direct had a wealth of data, but needed to identify the most important information, which is why the company adopted IBM Watson and IBM Cloud. Shop Direct wanted to better understand customers and run its business more efficiently to meet shoppers’ needs.
Wolfe and his team were able to use the power of cloud and cognitive to mine and understand data, turning it into a resource to personalize the company’s retail product offerings and make brands even more affordable for customers.
The future of retail and technology
“There will always be retail,” said Brian Kilcourse, analyst at Retail Systems Research. “It will just be different.”
The nature of shopping is evolving from a purposeful trip to a store or a website toward the “ubiquitous shopping era”: shopping everywhere, by any means, all the time. This has created a significant challenge for retailers to create an operationally sustainable and engaging experience that inspires loyalty as customers hop from store to web to mobile to social and back again.
That’s where cognitive and cloud comes into play. Retailers can harness the power of data from their business and their customers to better personalize, contextualize and understand who customers are and offer them the products they want when they want them.
Timing and convenience are key for customers now. Cloud and cognitive technologies enable brands to authentically connect with consumers in an agile and scalable way. Cloud is no longer an IT trend. With apps, chatbots and new ways to reach customers, it is the platform keeping retailers available to consumers and in business.
Learn more about IBM Cloud retail solutions.
The post Cloud and cognitive technologies are transforming the future of retail appeared first on news.
Source: Thoughts on Cloud

Watson makes building management as a service possible

Heating, ventilation, air conditioning and lighting represent the largest energy costs for businesses and are prime targets for suppliers of Smart Building systems. Vendors claim that understanding detailed energy usage patterns while being able to control and manage consumption based on that information will quickly deliver bottom line results.
Building management as a service with IBM Watson
PhotonStar, a leading British designer and manufacturer of intelligent lighting solutions, uses the cloud-based IBM Watson Internet of Things (IoT) Platform to help deliver an affordable, integrated building management system that can be retrofitted to almost any building to reduce operational costs and increase service levels for building owners and tenants.
The company’s new product, halcyon cloudBMS, is based on PhotonStar's next-generation wireless lighting control system, halcyonPRO2. With a halcyonPRO2 platform in each building and configurable cloud-based analytics, cloudBMS delivers an extremely capable, multi-site building-management-as-a-service (BMaaS) solution. The low cost of entry and monthly subscription approach enables owners of small- to medium-sized businesses to reduce energy and operating costs and discover new insights into their operations.
Getting started with building management services
PhotonStar CEO James McKenzie
PhotonStar CEO James McKenzie said that, historically, PhotonStar was in the LED lighting business. Around 2008, the company began adding microprocessors to its products to help with circadian lighting systems that dynamically change spectral content throughout the day to mimic the light of the sun.
The company has a patented color-mix technology called ChromaWhite that allows it to manage spectral content via multiple LED channels efficiently.
The initial push to expand beyond lighting came from customers. “They started saying, ‘It's all very well having smart lights; this is great and saves us energy, but all these other environmental factors need controlling, too,’ ” McKenzie said.
Emergency lighting in the UK, for example, must be tested once each month. PhotonStar’s lighting customers in large installations already had onsite staff, but those with many remote locations had to send out a facilities person to each location on a monthly basis just to turn a key and test the system.
If you’ve got a large building, you can usually afford to have a facilities person on-site all the time, so testing doesn't really cost anything extra. The expensive situation is where you’ve got lots of remote sites: a typical 350-site retail chain would require 4,200 emergency lighting tests per year (350 sites × 12 months). With Halcyon, the test is conducted monthly and reported via cloud and email, ensuring safety compliance at the lowest cost.
Nobody needs to visit, and the cost savings give a payback in less than one year.
Cost savings of intelligent control
Intelligent control has been shown to deliver 50 percent energy savings in wired control buildings. But 80 percent of the building stock in the developed world already exists, and businesses can’t afford to add that wired infrastructure to existing buildings.
PhotonStar started looking at the broader challenge of facilities management in existing buildings. The company has control functions over lighting, ventilation and air-conditioning, as well as emergency lighting, which costs people money.
One good way to do this cost-effectively is to start with the halcyonPRO2. It's based on industry-standard ARM technology and wireless protocols such as WiFi and 6LoWPAN because they're so cheap and flexible. So how is that expanded to help manage energy in buildings?
This all sounds quite ambitious, but IoT technology is very cost-effective. Only one’s imagination limits what can be done.
Intelligent business management with cloud
PhotonStar started down that path in 2014, expanding halcyon into these other areas. By 2015, it was effectively a building management system by itself, but facilities managers with multiple sites have to make all the really important decisions centrally.
For example, in retail outlets or large offices, managers must aggregate globally the control functions and dashboard them, manage them and examine them. And then, of course, ultimately businesses should intelligently manage all their buildings.
PhotonStar’s leaders realized the company needed to connect the system to the cloud if it wanted to be able to deliver an effective service across multiple locations. And that’s when the company’s cloudBMS was born, building on the Halcyon wireless control system.
PhotonStar built its cloudBMS product and service on top of the IBM Watson IoT platform.
A version of this story originally appeared on the Watson Internet of Things blog.
IBM clients are poised for success using the IBM Cloud as their foundation.
The post Watson makes building management as a service possible appeared first on news.
Source: Thoughts on Cloud

Announcing the DockerCon speakers and sessions

Today we’re excited to share the launch of the DockerCon 2017 agenda. With 100+ DockerCon speakers, 60+ breakout sessions, 11 workshops, and hands-on labs, we’re confident that you’ll find the right content for your role (Developer, IT Ops, Enterprise) or your level of Docker expertise (Beginner, Intermediate, Advanced).

View the announced schedule and speaker lineup

Announced sessions include:
Use Case

0 to 60 with Docker in 5 Months: How a Traditional Fortune 40 Company Turns on a Dime by Tim Tyler (MetLife)
Activision's Skypilot: Delivering Amazing Game Experiences through Containerized Pipelines by Tom Shaw (Activision)
Cool Genes: The Search for a Cure Using Genomics, Big Data, and Docker by James Lowey (TGEN)
The Tale of Two Deployments: Greenfield and Monolith Docker at Cornell by Shawn Bower and Brett Haranin (Cornell University)
Taking Docker From Local to Production at Intuit by JanJaap Lahpor (Intuit)

The Use Case track at @dockercon looks great w/ @tomwillfixit @JanJaapLahpor @drizzt51 #dockercon Click To Tweet

Using Docker

Docker for Devs by John Zaccone (Ippon Technologies)
Docker for Ops by Scott Coulton (Puppet)
Docker for Java Developers by Arun Gupta (Couchbase) and Fabiane Nardon (TailTarget)
Docker for .NET Developers by Michele Bustamante (Solliance)
Creating Effective Images by Abby Fuller (AWS)
Troubleshooting Tips from a Docker Support Engineer by Jeff Anderson (Docker)
Journey to Docker Production: Evolving Your Infrastructure and Processes by Bret Fisher (Independent DevOps Consultant)
Escape From Your VMs with Image2Docker by Elton Stoneman (Docker) and Jeff Nickoloff (All in Geek Consulting)


Docker Deep Dive – Presented by Docker Engineering

What's New in Docker by Victor Vieux
Under the Hood with Docker Swarm Mode by Drew Erny and Nishant Totla
Modern Storage Platform for Container Environments by Julien Quintard
Secure Substrate: Least Privilege Container Deployment by Diogo Monica and Riyaz Faizullabhoy
Docker Networking: From Application-Plane to Data-Plane by Madhu Venugopal
Plug-ins: Building, Shipping, Storing, and Running by Anusha Ragunathan and Nandhini Santhanam
Container Publishing through Docker Store by Chinmayee Nirmal and Alfred Landrum
Automation and Collaboration Across Multiple Swarms Using Docker Cloud by Fernando Mayo and Marcus Martins
Making Docker Datacenter (DDC) Work for You by Vivek Saraswat


Black Belt

Monitoring, the Prometheus Way by Julius Volz (Prometheus)
Everything You Thought You Already Knew About Orchestration by Laura Frank (Codeship)
Cilium – Network and Application Security with BPF and XDP (Noiro Networks)
What Have Namespaces Done for You Lately? by Liz Rice (Microscaling Systems)
Securing the Software Supply Chain with TUF and Docker by Justin Cappos (NYU)
Container Performance Analysis by Brendan Gregg (Netflix)
Securing Containers, One Patch at a Time by Michael Crosby (Docker)


Workshops – Presented by Docker Engineering and Docker Captains

Docker Security
Hands-on Docker for Raspberry Pi
Modernizing Monolithic ASP.NET Applications with Docker
Introduction to Enterprise Docker Operations
Docker Store for Publishers

Convince your manager
Do you really want to go to DockerCon, but are having a hard time convincing your manager to send you? Have you already explained that sessions, training and hands-on exercises are definitely worth the financial investment and time away from your desk?
Well, fear not! We’ve put together a few more resources and reasons to help convince your manager that DockerCon 2017, April 17-20, is an invaluable experience you need to attend.


The post Announcing the DockerCon speakers and sessions appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

What’s next for containers and standardization?

Containers are all the rage among developers who use open source software to build, test and run applications.
In the past couple of years, container interest and usage have grown rapidly. Nearly every major cloud provider and vendor has announced container-based solutions. Meanwhile, a proliferation of container-related start-ups has also appeared.
Hybrid solutions are the future of cloud. They allow developers to more quickly and easily package applications to run across multiple environments. The open standardization of container runtimes and image specifications will help enable portability in a multi-cloud ecosystem.
While I welcome the spread of ideas in this space, the promise of containers as a source of application portability requires the establishment of certain standards. A little over a year ago, the Open Container Initiative (OCI) was founded with the mission of promoting a set of common, minimal, open standards and specifications around container technology. Since then, the OCI community has made a lot of progress.
In terms of developer activity, the OCI community has been busy. Last year the project saw 3,000-plus commits from 128 different authors across 36 different organizations. With the addition of the Image Format specification project, OCI expanded its initial scope beyond just the runtime specification. We also added new developer tools projects such as runtime-tools and image-tools.
These serve as repositories for conformance testing tools and have been instrumental in gearing up for the upcoming v1.0 release. We’ve also recently created a new project within OCI called go-digest (which was donated and migrated from docker/go-digest). This provides a strong hash-identity implementation in Go and serves as a common digest package across the container ecosystem.
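To make the idea of a common digest package concrete, OCI content addresses take the form `<algorithm>:<hex-encoded-hash>`, with sha256 as the canonical algorithm. As a rough illustration (using only Go's standard library rather than the go-digest package itself), such a digest can be computed like this:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// digestOf returns a content address in the OCI digest format,
// "<algorithm>:<hex>", using the canonical sha256 algorithm.
func digestOf(content []byte) string {
	sum := sha256.Sum256(content)
	return fmt.Sprintf("sha256:%x", sum)
}

func main() {
	// The same bytes always produce the same digest, which is what
	// lets the ecosystem address blobs by content rather than name.
	fmt.Println(digestOf([]byte("hello")))
}
```

Because the digest is derived purely from the content, any two tools that agree on this format can verify each other's blobs, which is the interoperability goal go-digest serves.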
Regarding early adoption, Docker has supported the OCI technology through containerd. Recently, Docker announced it is spinning out its core container runtime functionality into a standalone component, incorporating it into a separate project called containerd and donating it to a neutral foundation in early 2017. Containerd will feature full OCI support, including the extended OCI image specification.
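For context, the OCI image specification is built around a small JSON manifest that ties an image's config and layers together by content digest. A minimal sketch follows; the digests and sizes are placeholders, not real values, and the field set is abbreviated:

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": {
    "mediaType": "application/vnd.oci.image.config.v1+json",
    "digest": "sha256:0000000000000000000000000000000000000000000000000000000000000000",
    "size": 7023
  },
  "layers": [
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
      "digest": "sha256:1111111111111111111111111111111111111111111111111111111111111111",
      "size": 32654
    }
  ]
}
```

Because every reference is by digest, a runtime that understands this manifest can fetch and verify an image's pieces regardless of which registry or tool produced them.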
And Docker is only one example. The Cloud Foundry community was also an early consumer of OCI. It embedded runc through Garden as the cornerstone of its container runtime technology. The Kubernetes project is incubating a new Container Runtime Interface (CRI) that adopts OCI components through implementations like CRI-O and rktlet. The rkt community is adopting OCI technology already and is planning to leverage the reference OCI container runtime runc in 2017. The Apache Mesos community is currently building out support for the OCI image specification. AWS recently announced its support of draft OCI specifications in its latest ECR release. IBM is also strongly committed to adopting the OCI draft specifications. The adoption is live today as part of the IBM Bluemix Container Service.
We are getting closer to launching the v1.0 release. The milestone release of the OCI Runtime and Image Format Specifications version 1.0 will hopefully be available later in 2017, drawing the industry that much closer to standardization and true portability. To that end, we’ll be launching an official OCI Certification program once the v1.0 release is out. With OCI certification, users can be confident that OCI-certified solutions meet a rigorous set of criteria for delivering agile, interoperable solutions.
There is still a lot of work to do. The OCI community will be onsite at several industry events, including IBM InterConnect. The success of the OCI community depends on a wide array of contributions from across the industry. The door is always open, so please come join us in shaping the future of container technology.
If you’re interested in contributing, I recommend joining the OCI developer community, which is open to everyone. If you’re building products on OCI technology, I recommend joining as a member and participating in the upcoming certification program. Please follow us on Twitter to stay in touch: @OCI_ORG.
The post What's next for containers and standardization? appeared first on Cloud computing news.
Source: Thoughts on Cloud

Watson, what should I watch tonight?

The world is consuming an amazing amount of streaming video content.
From silly internet videos to binge-watching hours of the latest trending show, the average person’s appetite for streaming video content is tremendous. By 2021, the research firm MarketsandMarkets expects the market to reach $70 billion. To compete in this increasingly crowded space, providers must find a way to differentiate themselves and keep subscribers engaged and paying.
There's a simple method for reducing churn while engaging and pleasing viewers: providers must understand, at a deep level, what individual subscribers want to watch and point them directly to that content. This highly personalized approach to serving up streaming video content becomes possible when providers add machine-learning technologies such as IBM Watson to their streaming services.
Stop subscriber attrition with in-platform cognitive capabilities
IBM Watson can find patterns in the way people interact with video content, from the selections they make, to how often they rewind or pause, to which videos they have abandoned midstream or watched repeatedly. By identifying and analyzing commonalities between the types of programming a viewer enjoys, Watson can suggest options the viewer might not have even considered.
Today, a viewer's search for video content typically involves only basic metadata, such as the title and genre. However, Watson can amass advanced metadata about what happens inside streaming videos, indexing and cataloging at a much deeper level: spoken words, visual imagery, tone, and much more.
These capabilities enable subscribers to interact with content in entirely new ways. In the future, a viewer could say, “Show me a movie to help me sleep,” or “Show me a movie where people overcome difficult challenges,” and the library could bring up videos that suit the subscriber’s mood or outlook in a new way.
A smarter way to plan a video content strategy
Watson collects viewer data from video platforms over time and combines it with data from other sources such as social channels, third-party reviews, global trends, and other content, including geospatial and real-time weather information. As that happens, providers can compile data-rich profiles of individual subscribers and proactively deliver targeted, highly relevant content recommendations.
Watson's capabilities can also help providers identify users who are likely to drop off their platforms, so they can take steps to prevent that from happening. For example, if customers who enjoy watching romantic comedies are particularly prone to churn, the provider can examine its current offerings and decide whether it should license more of that genre. Or it might find that it already has a wide range of similar content it could recommend to this segment of subscribers instead.
The opportunities that this technology will uncover in improving customer experience and recommendation success on a video streaming platform are limitless. In a future blog, I will explore how cognitive systems can help with areas such as content acquisition and creation, marketing strategies, advertising intelligence, and general business decision making.
Learn more about IBM Cloud Video.
The post Watson, what should I watch tonight? appeared first on news.
Source: Thoughts on Cloud