5 reasons to attend DockerCon 2017

2017 is for the hackers, the makers and those who want to build tools of mass innovation.
In April, 5,000 of the best and brightest will come together to share and learn from different experiences, diverse backgrounds, and common interests. We know that part of what makes DockerCon so special is what happens in the hallways, not just the main stage. Those spontaneous connections between attendees, and the endless networking and learning opportunities, are where the most meaningful interactions occur.

If you haven’t been to a DockerCon yet, you may not know what you are missing. To explain why DockerCon 2017 is a must-attend conference, we took the liberty of putting together the top 5 reasons to join us April 17-20 in Austin, Texas.

1. The.Best.Content. From beginner to deep dive, DockerCon brings together the brightest minds to talk about their passions. Those passions range from tracing containers, building containers from scratch, and monitoring and storage, to creating effective images. The list goes on.
2. Experts Everywhere. Want to meet the maintainers and tech leads of the Docker project? DockerCon! The community members that put together the coolest IoT hack to make walking in between sessions fun? DockerCon! What about chatting directly with the developers and IT professionals at Fortune 500 enterprises that are transforming their organizations by using Docker? DockerCon!
3. A Hallway Track like you’ve never experienced. DockerCon took conference networking to a new level last year with Bump Up. We can’t wait to share what we have planned this year to make connecting, learning, and sharing with other like-minded attendees one of the most valuable takeaways of the event.
4. DockerCon For All. DockerCon will always be an open and inclusive event for all. We are excited to announce the launch of this year’s DockerCon Diversity Scholarship. Its purpose is to provide members of the Docker community who are traditionally underrepresented with financial support and guidance, through on-site mentorship and a scholarship to attend DockerCon.
5. Community & Docker Swag. As part of Docker’s community, you already know that it rocks, thanks to you! Now just imagine the energy when 5,000 of us are in one room doing what we love! Now imagine we all just got the most amazing Docker swag to top it off! We are talking backpacks, t-shirts, umbrellas, scarves, LEGO whales, and more; this year will be no exception.

We hope you’ve read to this point and are so inspired to be a part of something innovative and unique that you’ll join us in Austin for DockerCon 2017. And in case you need some extra help convincing a manager to let you go, we’ve put together a few more resources and a request letter for you to use.

The post 5 reasons to attend DockerCon 2017 appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Introducing Docker Secrets Management

Containers are changing how we view apps and infrastructure. Whether the code inside containers is big or small, container architecture introduces a change to how that code behaves with hardware: it fundamentally abstracts the code from the infrastructure. Docker believes that there are three key components to container security, and together they result in inherently safer apps.

A critical element of building safer apps is having a secure way of communicating with other apps and systems, something that often requires credentials, tokens, passwords and other types of confidential information—usually referred to as application secrets. We are excited to introduce Docker Secrets, a container native solution that strengthens the Trusted Delivery component of container security by integrating secret distribution directly into the container platform.
With containers, applications are now dynamic and portable across multiple environments. This made existing secrets distribution solutions inadequate, because they were largely designed for static environments. Unfortunately, this led to an increase in mismanagement of application secrets, making it common to find insecure, home-grown solutions, such as embedding secrets into version control systems like GitHub, or equally bad point solutions bolted on as an afterthought.

Introducing Docker Secrets Management
We fundamentally believe that apps are safer if there is a standardized interface for accessing secrets. Any good solution will also have to follow security best practices, such as encrypting secrets while in transit; encrypting secrets at rest; preventing secrets from unintentionally leaking when consumed by the final application; and strictly adhering to the principle of least privilege, where an application only has access to the secrets that it needs: no more, no less.
By integrating secrets into Docker orchestration, we are able to deliver a solution for the secrets management problem that follows these exact principles.
The following diagram provides a high-level view of how the Docker swarm mode architecture is applied to securely deliver a new type of object to our containers: a secret object.

In Docker, a secret is any blob of data, such as a password, SSH private key, TLS Certificate, or any other piece of data that is sensitive in nature. When you add a secret to the swarm (by running docker secret create), Docker sends the secret over to the swarm manager over a mutually authenticated TLS connection, making use of the built-in Certificate Authority that gets automatically created when bootstrapping a new swarm.
 
$ echo "This is a secret" | docker secret create my_secret_data -
 
Once the secret reaches a manager node, it gets saved to the internal Raft store, which uses NaCl’s Salsa20-Poly1305 with a 256-bit key to ensure no data is ever written to disk unencrypted. Writing to the internal store gives secrets the same high availability guarantees that the rest of the swarm management data gets.
When a swarm manager starts up, the encrypted Raft logs containing the secrets are decrypted using a data encryption key that is unique per node. This key, and the node’s TLS credentials used to communicate with the rest of the cluster, can be encrypted with a cluster-wide key encryption key, called the unlock key, which is also propagated using Raft and will be required when a manager starts.
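The unlock key described above is exposed through swarm’s autolock feature. Here is a quick sketch of the relevant commands; they require a running swarm manager, so treat this as illustrative rather than something to paste blindly:

```shell
# Turn on autolock: the data encryption key is no longer stored in the clear
# on disk, and Docker prints an unlock key for you to keep somewhere safe.
docker swarm update --autolock=true

# After a restart, a locked manager will not resume until it is unlocked:
docker swarm unlock            # prompts for the unlock key

# View or rotate the current unlock key from any unlocked manager:
docker swarm unlock-key
docker swarm unlock-key --rotate
```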
When you grant a newly-created or running service access to a secret, one of the manager nodes (only managers have access to all the stored secrets) will send it over the already established TLS connection exclusively to the nodes that will be running that specific service. This means that nodes cannot request the secrets themselves, and will only gain access to the secrets when provided to them by a manager, strictly for the services that require them.
 
$ docker service create --name="redis" --secret="my_secret_data" redis:alpine
 
The unencrypted secret is mounted into the container in an in-memory filesystem at /run/secrets/&lt;secret_name&gt;.
$ docker exec $(docker ps --filter name=redis -q) ls -l /run/secrets
total 4
-r--r--r--    1 root     root            17 Dec 13 22:48 my_secret_data
 
If a service gets deleted, or rescheduled somewhere else, the manager will immediately notify all the nodes that no longer require access to that secret to erase it from memory, and the node will no longer have any access to that application secret.
$ docker service update --secret-rm="my_secret_data" redis

$ docker exec -it $(docker ps --filter name=redis -q) cat /run/secrets/my_secret_data

cat: can't open '/run/secrets/my_secret_data': No such file or directory

 
Check out the Docker secrets docs for more information and examples on how to create and manage your secrets. A special shout-out to Laurens Van Houtven (https://lvh.io), who collaborated with the Docker security and core engineering teams to help make this feature a reality.

Safer Apps with Docker
Docker secrets is designed to be easily usable by developers and IT ops teams to build and run safer apps. It is a container-first architecture designed to keep secrets safe and to expose each secret only to the exact container that needs it to operate. From defining apps and secrets with Docker Compose through an IT admin deploying that Compose file directly in Docker Datacenter, the services, secrets, networks, and volumes travel securely with the application.
Resources to learn more:

Download Docker and get started today
Try secrets in Docker Datacenter
Read the Documentation
Attend an upcoming webinar


Introducing Docker Datacenter on 1.13 with Secrets, Security Scanning, Content Cache and more

It’s another exciting day with a new release of Docker Datacenter (DDC) on 1.13. This release includes loads of new features around app services, security, image distribution and usability.  
Check out the upcoming webinar on Feb 16th for a demo of all the latest features.
Let’s dig into some of the new features:
Integrated Secrets Management
This release of Docker Datacenter includes integrated support for secrets management from development all the way to production.

This feature allows users to store confidential data (e.g. passwords, certificates) securely on the cluster and inject these secrets to a service. Developers can reference the secrets needed by different services in the familiar Compose file format and handoff to IT for deployment in production. Check out the blog post on Docker secrets management for more details on implementation. DDC integrates secrets and adds several enterprise-grade enhancements, including lifecycle management and deployment of secrets in the UI, label-based granular access control for enhanced security, and auditing users’ access to secrets via syslog.
Image Security Scanning and Vulnerability Monitoring
Another element of delivering safer apps is around the ability to ensure trusted delivery of the code that makes up that app. In addition to Docker Content Trust (already available in DDC), we are excited to add Docker Security Scanning to enable binary level scanning of images and their layers. Docker Security Scanning creates a bill of materials (BOM) of your image and checks packages and versions against a number of  CVE databases. The BOM is stored and checked regularly against the CVE databases, so if a new vulnerability is reported against an existing package, any user can be notified of the new vulnerability. Additionally, system admins can integrate their CI and build systems with the scanning service using the new registry webhooks.

 

HTTP Routing Mesh (HRM)
Previously available as an experimental feature, the HTTP (Hostname) based routing mesh is available for production in this release.  HRM extends the existing swarm-mode networking routing mesh by enabling you to route HTTP-based hostnames to your services.

New features in this release include ability to manage HRM for a service via the UI, HTTPS pass-through support via SNI protocol, using multiple HRM networks for application isolation, and sticky sessions integration. See the screenshot below for how HRM can be easily configured within the DDC admin UI.

Compose for Services
This release of DDC has increased support for managing complex distributed applications in the form of stacks: groups of services, networks, and volumes. DDC allows users to create stacks via Compose files (version 3.1 YAML) and deploy them through both the UI and CLI. Developers can specify the stack via the familiar Compose file format; for a seamless handoff, IT can take that Compose file and deploy its services into production.
 
Once deployed, DDC users are able to manage stacks directly through the UI and click into individual services, tasks, networks, and volumes to manage their lifecycle operations.

Content Cache
For companies with app teams that are distributed across a number of locations and want to maintain centralized control of images, developer performance is top of mind. Having developers connect to repositories thousands of miles away may not always make sense when considering latency and bandwidth. New for this release is the ability to set up satellite registry caches for faster pulls of Docker images. Caches can be assigned to individual users or configured by each user based on their current location. The registry caches can be deployed in a variety of scenarios, including high availability and complex cache-chaining scenarios for the most stringent datacenter environments.

Registry Webhooks
To better integrate with external systems, DDC now includes webhooks to notify external systems of registry events. These events range from push or pull events in individual repositories and security scanning events, to creation or deletion of repositories, and system events like garbage collection. With this full set of integration points, you can fully automate your continuous integration environment and Docker image build process.
Usability Improvements
As always, we have added a number of features to refine and continuously improve the system usability for both developers and IT admins.

Cluster and node level metrics on CPU, memory, and disk usage. Sort nodes by usage in order to quickly troubleshoot issues, and the metrics are also rolled up into the dashboard for a bird’s eye view of resource usage in the cluster.
Smoother application update process with support for rollback during rolling updates, and status notifications for service updates.
Easier installation and configuration with the ability to copy a Docker Trusted Registry install command directly from the Universal Control Plane UI
Additional LDAP/AD configuration options in the Universal Control Plane UI
Cloud templates on AWS and Azure to deploy DDC in a few clicks

These new features and more are featured in a Docker Datacenter demo video series

Get started with Docker Datacenter
These are just the latest features to join the Docker Datacenter platform.

Learn More about Docker Secrets Management
Get the FREE 30 day trial 
Register for an upcoming webinar


Announcing the DockerCon speakers and sessions

Today we’re excited to share the launch of the 2017 agenda. With 100+ DockerCon speakers, 60+ breakout sessions, 11 workshops, and hands-on labs, we’re confident that you’ll find the right content for your role (Developer, IT Ops, Enterprise) or your level of Docker expertise (Beginner, Intermediate, Advanced).

View the announced schedule and speakers lineup  

Announced sessions include:
Use Case

0 to 60 with Docker in 5 Months: How a Traditional Fortune 40 Company Turns on a Dime by Tim Tyler (MetLife)
Activision’s Skypilot: Delivering Amazing Game Experiences through Containerized Pipelines by Tom Shaw (Activision)
Cool Genes: The Search for a Cure Using Genomics, Big Data, and Docker by James Lowey (TGEN)
The Tale of Two Deployments: Greenfield and Monolith Docker at Cornell by Shawn Bower and Brett Haranin (Cornell University)
Taking Docker From Local to Production at Intuit by JanJaap Lahpor (Intuit)

Using Docker

Docker for Devs by John Zaccone (Ippon Technologies)
Docker for Ops by Scott Coulton (Puppet)
Docker for Java Developers by Arun Gupta (Couchbase) and Fabiane Nardon (TailTarget)
Docker for .NET Developers by Michele Bustamante (Solliance)
Creating Effective Images by Abby Fuller (AWS)
Troubleshooting Tips from a Docker Support Engineer by Jeff Anderson (Docker)
Journey to Docker Production: Evolving Your Infrastructure and Processes by Bret Fisher (Independent DevOps Consultant)
Escape From Your VMs with Image2Docker by Elton Stoneman (Docker) and Jeff Nickoloff (All in Geek Consulting)

Docker Deep Dive – Presented by Docker Engineering

What’s New in Docker by Victor Vieux
Under the Hood with Docker Swarm Mode by Drew Erny and Nishant Totla
Modern Storage Platform for Container Environments by Julien Quintard
Secure Substrate: Least Privilege Container Deployment by Diogo Monica and Riyaz Faizullabhoy
Docker Networking: From Application-Plane to Data-Plane by Madhu Venugopal
Plug-ins: Building, Shipping, Storing, and Running by Anusha Ragunathan and Nandhini Santhanam
Container Publishing through Docker Store by Chinmayee Nirmal and Alfred Landrum
Automation and Collaboration Across Multiple Swarms Using Docker Cloud by Fernando Mayo and Marcus Martins
Making Docker Datacenter (DDC) Work for You by Vivek Saraswat

Black Belt

Monitoring, the Prometheus Way by Julius Volz (Prometheus)
Everything You Thought You Already Knew About Orchestration by Laura Frank (Codeship)
Cilium – Network and Application Security with BPF and XDP (Noiro Networks)
What Have Namespaces Done For You Lately? By Liz Rice (Microscaling Systems)
Securing the Software Supply Chain with TUF and Docker by Justin Cappos (NYU)
Container Performance Analysis by Brendan Gregg (Netflix)
Securing Containers, One Patch at a Time by Michael Crosby (Docker)

Workshops – Presented by Docker Engineering and Docker Captains

Docker Security
Hands-on Docker for Raspberry Pi
Modernizing Monolithic ASP.NET Applications with Docker
Introduction to Enterprise Docker Operations
Docker Store for Publishers

Convince your manager
Do you really want to go to DockerCon, but are having a hard time convincing your manager to send you? Have you already explained that sessions, training and hands-on exercises are definitely worth the financial investment and time away from your desk?
Well, fear not! We’ve put together a few more resources and reasons to help convince your manager that DockerCon 2017, on April 17-20, is an invaluable experience you need to attend.


Adventures in GELF

If you are running apps in containers and are using Docker’s GELF logging driver (or are considering using it), the following musings might be relevant to your interests.
Some context
When you run applications in containers, the easiest logging method is to write on standard output. You can’t get simpler than that: just echo, print, write (or the equivalent in your programming language!) and the container engine will capture your application’s output.
Other approaches are still possible, of course; for instance:

you can use syslog, by running a syslog daemon in your container or exposing a /dev/log socket;
you can write to regular files and share these log files with your host, or with other containers, by placing them on a volume;
your code can directly talk to the API of a logging service.

In the last scenario, this service can be:

a proprietary logging mechanism operated by your cloud provider, e.g. AWS CloudWatch or Google Stackdriver;
provided by a third-party specialized in managing logs or events, e.g. Honeycomb, Loggly, Splunk, etc.;
something running in-house, that you deploy and maintain yourself.

If your application is very terse, or if it serves very little traffic (because it has three users, including you and your dog), you can certainly run your logging service in-house. My orchestration workshop even has a chapter on logging, which might give you the false idea that running your own ELK cluster is all unicorns and rainbows. The truth is very different: running reliable logging systems at scale is hard.
Therefore, you certainly want the possibility to send your logs to somebody else who will deal with the complexity (and pain) that comes with real-time storing, indexing, and querying of semi-structured data. It’s worth mentioning that these people can do more than just managing your logs. Some systems like Sentry are particularly suited to extract insights from errors (think traceback dissection); and many modern tools like Honeycomb will deal not only with logs but also any kind of event, letting you crossmatch everything together to find out the actual cause of that nasty 3am outage.
But before getting there, you want to start with something easy to implement, and free (as much as possible).
That’s where container logging comes handy. Just write your logs on stdout, and let your container engine do all the work. At first, it will write plain boring files; but later, you can reconfigure it to do something smarter with your logs — without changing your application code.
Note that the ideas and tools that I discuss here are orthogonal to the orchestration platform that you might or might not be using: Kubernetes, Mesos, Rancher, Swarm … They can all leverage the logging drivers of the Docker Engine, so I’ve got you covered!
The default logging driver: json-file
By default, the Docker Engine will capture the standard output (and standard error) of all your containers, and write them in files using the JSON format (hence the name json-file for this default logging driver). The JSON format annotates each line with its origin (stdout or stderr) and its timestamp, and keeps each container log in a separate file.
When you use the docker logs command (or the equivalent API endpoint), the Docker Engine reads from these files and shows you whatever was printed by your container. So far, so good.
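You can see this for yourself by asking the engine where a container’s log file lives (the container name web is just an example here):

```shell
# Run a throwaway container, then locate its JSON log file on the host:
docker run -d --name web nginx
docker inspect --format '{{.LogPath}}' web
# Prints something like /var/lib/docker/containers/<id>/<id>-json.log;
# each line of that file is a JSON object along the lines of:
# {"log":"hello\n","stream":"stdout","time":"2017-02-01T12:00:00.000000000Z"}
```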
The json-file driver, however, has (at least) two pain points:

by default, the log files will grow without bounds, until you run out of disk space;
you cannot make complex queries such as “show me all the HTTP requests for virtual host api.container.church between 2am and 7am having a response time of more than 250ms but only if the HTTP status code was 200/OK.”

The first issue can easily be fixed by giving some extra parameters to the json-file driver in Docker to enable log rotation. The second one, however, requires one of these fancy log services that I was alluding to.
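The extra parameters in question are the json-file driver’s max-size and max-file options; for instance (the sizes here are arbitrary, pick your own):

```shell
# Keep at most 3 log files of 10 MB each for this container:
docker run -d \
  --log-driver json-file \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  nginx
```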
Even if your queries are not as complex, you will want to centralize your logs somehow, so that:

logs are not lost forever when the cloud instance running your container disappears;
you can at least grep the logs of multiple containers without dumping them entirely through the Docker API or having to SSH around.
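Without centralization, “grepping the logs of multiple containers” boils down to something like this single-host sketch:

```shell
# Search the recent output of every running container on this host:
for id in $(docker ps -q); do
  echo "== $(docker inspect --format '{{.Name}}' "$id")"
  docker logs --tail 100 "$id" 2>&1 | grep -i error
done
```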

Aparté: when I was still carrying a pager and taking care of the dotCloud platform, our preferred log analysis technique was called “Ops Map/Reduce” and involved fabric, parallel SSH connections, grep, and a few other knick-knacks. Before you laugh at our antiquated techniques, let me ask you how your team of 6 engineers dealt with the log files of 100000 containers 5 years ago and let’s compare our battle scars and PTSD-related therapy bills around a mug of tea, beer, or other suitable beverage. ♥
Beyond json-file
Alright, you can start developing (and even deploying) with the default json-file driver, but at some point, you will need something else to cope with the amount of logs generated by your containers.
That’s where the logging drivers come handy: without changing a single line of code in your application, you can ask your faithful container engine to send the logs somewhere else. Neat.
Docker supports many other logging drivers, including but not limited to:

awslogs, if you’re running on Amazon’s cloud and don’t plan to migrate to anything else, ever;
gcplogs, if you’re more a Google person;
syslog, if you already have a centralized syslog server and want to leverage it for your containers;
gelf

I’m going to stop the list here because GELF has a few features that make it particularly interesting and versatile.
GELF
GELF stands for Graylog Extended Log Format. It was initially designed for the Graylog logging system. If you haven’t heard about Graylog before, it’s an open source project that pioneered “modern” logging systems like ELK. In fact, if you want to send Docker logs to your ELK cluster, you will probably use the GELF protocol! It is an open standard implemented by many logging systems (open or proprietary).
What’s so nice about the GELF protocol? It addresses some (if not most) of the shortcomings of the syslog protocol.
With the syslog protocol, a log message is mostly a raw string, with very little metadata. There is some kind of agreement between syslog emitters and receivers; a valid syslog message should be formatted in a specific way, allowing receivers to extract the following information:

a priority: is this a debug message, a warning, something purely informational, a critical error, etc.;
a timestamp indicating when the thing happened;
a hostname indicating where the thing happened (i.e. on which machine);
a facility indicating if the message comes from the mail system, the kernel, and such and such;
a process name and number;
etc.

That protocol was great in the 80s (and even the 90s), but it has some shortcomings:

as it evolved over time, there are almost 10 different RFCs to specify, extend, and retrofit it to various use-cases;
the message size is limited, meaning that very long messages (e.g.: tracebacks) have to be truncated or split across messages;
at the end of the day, even if some metadata can be extracted, the payload is a plain, unadorned text string.

GELF made a very risqué move and decided that every log message would be a dict (or a map or a hash or whatever you want to call them). This “dict” would have the following fields:

version;
host (who sent the message in the first place);
timestamp;
short and long version of the message;
any extra field you would like!

At first you might think, “OK, what’s the deal?” but this means that when a web server logs a request, instead of having a raw string like this:
127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326
 
You get a dict like that:
{
  "client": "127.0.0.1",
  "user": "frank",
  "timestamp": "2000-10-10 13:55:36 -0700",
  "method": "GET",
  "uri": "/apache_pb.gif",
  "protocol": "HTTP/1.0",
  "status": 200,
  "size": 2326
}
 
This also means that the logs get stored as structured objects, instead of raw strings. As a result, you can make elaborate queries (something close to SQL) instead of carving regexes with grep like a caveperson.
OK, so GELF is a convenient format that Docker can emit, and that is understood by a number of tools like Graylog, Logstash, Fluentd, and many more.
Moreover, you can switch from the default json-file to GELF very easily; which means that you can start with json-file (i.e. not set up anything in your Docker cluster), and later, when you decide that these log entries could be useful after all, switch to GELF without changing anything in your application, and automatically have your logs centralized and indexed somewhere.
Using a logging driver
How do we switch to GELF (or any other format)?
Docker provides two command-line flags for that:

--log-driver to indicate which driver to use;
--log-opt to pass arbitrary options to the driver.

These options can be passed to docker run, indicating that you want this one specific container to use a different logging mechanism; or to the Docker Engine itself (when starting it) so that it becomes the default option for all containers.
(If you are using the Docker API to start your containers, these options are passed to the create call, within the HostConfig.LogConfig structure.)
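To make a driver the engine-wide default, the usual route is the daemon configuration file rather than per-container flags. A sketch (the GELF address is a placeholder; the file normally lives at /etc/docker/daemon.json, but it is written locally here so the sketch is harmless to run as-is):

```shell
# Write the engine-wide logging defaults; after copying this to
# /etc/docker/daemon.json you would restart the Docker daemon.
cat > ./daemon.json <<'EOF'
{
  "log-driver": "gelf",
  "log-opts": {
    "gelf-address": "udp://1.2.3.4:12201"
  }
}
EOF
```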
The “arbitrary options” vary for each driver. In the case of the GELF driver, you can specify a bunch of options but there is one that is mandatory: the address of the GELF receiver.
If we have a GELF receiver on the machine 1.2.3.4 on the default UDP port 12201, you can start your container as follows:

docker run \
  --log-driver gelf --log-opt gelf-address=udp://1.2.3.4:12201 \
  alpine echo hello world
The following things will happen:

the Docker Engine will pull the alpine image (if necessary)
the Docker Engine will create and start our container
the container will execute the command echo with arguments hello world
the process in the container will write hello world to the standard output
the hello world message will be passed to whoever is watching (i.e. you, since you started the container in the foreground)
the hello world message will also be caught by Docker and sent to the logging driver
the gelf logging driver will prepare a full GELF message, including the host name, the timestamp, the string hello world, but also a bunch of information about the container, including its full ID, name, image name and ID, environment variables, and much more;
this GELF message will be sent through UDP to 1.2.3.4 on port 12201.

Then, hopefully, 1.2.3.4 receives the UDP packet, processes it, writes the message to some persistent indexed store, and allows you to retrieve or query it.
Hopefully.
I would tell you a UDP joke, but
If you have ever been on-call or responsible for other people’s code, you are probably cringing by now. Our precious logging message is within a UDP packet that might or might not arrive at our logging server (UDP has no transmission guarantees). If our logging server goes away (a nice wording for “crashes horribly”), our packet might arrive, but our message will be obliviously ignored, and we won’t know anything about it. (Technically, we might get an ICMP message telling us that the host or port is unreachable, but at that point, it will be too late, because we won’t even know which message this is about!)
Perhaps we can live with a few dropped messages (or a bunch, if the logging server is being rebooted, for instance). But what if we live in the Cloud, and our server evaporates? Seriously, though: what if I’m sending my log messages to an EC2 instance, and for some reason that instance has to be replaced with another one? The new instance will have a different IP address, but my log messages will continue to stubbornly go to the old address.
DNS to the rescue
An easy technique to work around volatile IP addresses is to use DNS. Instead of specifying 1.2.3.4 as our GELF target, we will use gelf.container.church, and make sure that this points to 1.2.3.4. That way, whenever we need to send messages to a different machine, we just update the DNS record, and our Docker Engine happily sends the messages to the new machine.
Or does it?
If you have to write some code sending data to a remote machine (say, gelf.container.church on port 12345), the simplest version will look like this:

Resolve gelf.container.church to an IP address (A.B.C.D).
Create a socket.
Connect this socket to A.B.C.D, on port 12345.
Send data on the socket.

If you must send data multiple times, you will keep the socket open, both for convenience and efficiency purposes. This is particularly important with TCP sockets, because before sending your data, you have to go through the “3-way handshake” to establish the TCP connection; in other words, the 3rd step in our list above is very expensive (compared to the cost of sending a small packet of data).
In the case of UDP sockets, you might be tempted to think: “Ah, since I don’t need to do the 3-way handshake before sending data (the 3rd step in our list above is essentially free), I can go through all 4 steps each time I need to send a message!” But in fact, if you do that, you will quickly realize that you are now stumped by the first step, the DNS resolution. DNS resolution is less expensive than a TCP 3-way handshake, but barely: it still requires a round-trip to your DNS resolver.
An aside: yes, it is possible to have very efficient local DNS resolvers. Something like pdns-recursor or dnsmasq running on localhost will get you some craaazy fast DNS response times for cached queries. However, if you need to make a DNS request each time you send a log message, it adds an indirect but significant cost to your application, since every log line will generate not one syscall, but three. Damned! And some people (like almost everyone running on EC2) are using their cloud provider’s DNS service. These people will incur two extra network packets for each log line. And when the cloud provider’s DNS is down, logging will be broken. Not cool.
Conclusion: if you log over UDP, you don’t want to resolve the logging server address each time you send a message.
Hmmm … TCP to the rescue, then?
It would make sense to use a TCP connection, and keep it up as long as we need it. If anything horrible happens to the logging server, we can trust the TCP state machine to detect it eventually (because timeouts and whatnot) and notify us. When that happens, we can re-resolve the server name and re-connect. We just need a little bit of extra logic in the container engine, to deal with the unfortunate scenario where the write on the socket gives us an EPIPE error, also known as “Broken pipe,” or in plain English, “the other end is not paying attention to us anymore.”
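That extra logic could be sketched like this, in Python for illustration (the real fix would live in the Go code discussed below; error handling is kept minimal, and the single-retry policy is an arbitrary choice):

```python
import socket
import threading

class ReconnectingTCPWriter:
    """Keep a TCP connection open; on a send error (e.g. EPIPE),
    re-resolve the server name and reconnect once before giving up."""

    def __init__(self, host, port):
        self.host, self.port = host, port
        self.sock = None

    def _connect(self):
        # create_connection() re-resolves the name, so a DNS update
        # is picked up on every reconnect.
        self.sock = socket.create_connection((self.host, self.port), timeout=5)

    def send(self, payload: bytes):
        if self.sock is None:
            self._connect()
        try:
            self.sock.sendall(payload)
        except OSError:  # "Broken pipe" and friends
            self.sock.close()
            self._connect()
            self.sock.sendall(payload)

# Demo: a local TCP listener stands in for the GELF server.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
received = []

def accept_once():
    conn, _ = listener.accept()
    received.append(conn.recv(4096))
    conn.close()

t = threading.Thread(target=accept_once)
t.start()
writer = ReconnectingTCPWriter("127.0.0.1", listener.getsockname()[1])
writer.send(b"a log line\n")
t.join()
print(received[0].decode())
```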
Let’s talk to our GELF server using TCP, and the problem will be solved, right?
Right?
Unfortunately, the GELF logging driver in Docker only supports UDP.
(╯°□°)╯︵ ┻━┻
At this point, if you’re still with us, you might have concluded that computing is just a specialized kind of hell, that containers are the antichrist, and Docker is the harbinger of doom in disguise.
Before drawing hasty conclusions, let’s have a look at the code.
When you create a container using the GELF driver, this function is invoked, and it creates a new gelfWriter object by calling gelf.NewWriter.
Then, when the container prints something out, eventually, the Log function of the GELF driver is invoked. It essentially writes the message to the gelfWriter.
This GELF writer object is implemented by an external dependency, github.com/Graylog2/go-gelf.
Look, I see it coming, he’s going to do some nasty fingerpointing and put the blame on someone else’s code. Despicable!
Hot potato
Let’s investigate this package, in particular the NewWriter function, the Write method, and the other methods called by the latter, WriteMessage and writeChunked. Even if you aren’t very familiar with Go, you will see that these functions do not implement any kind of reconnection logic. If anything bad happens, the error bubbles up to the caller, and that’s it.
If we conduct the same investigation with the code on the Docker side (with the links in the previous section), we reach the same conclusions. If an error occurs while sending a log message, the error is passed to the layer above. There is no reconnection attempt, neither in Docker’s code, nor in go-gelf’s.
This, by the way, explains why Docker only supports the UDP transport. If you want to support TCP, you have to handle more error conditions than with UDP. To phrase things differently: TCP support would be more complicated and require more lines of code.
Haters gonna hate
One possible reaction is to get angry at the brave soul who implemented go-gelf, or the one who implemented the GELF driver in Docker. A better reaction is to be thankful that they wrote that code, rather than no code at all!
Workarounds
Let’s see how to solve our logging problem.
The easiest solution is to restart our containers whenever we need to “reconnect” (technically, resolve and reconnect). It works, but it is very annoying.
A slightly better solution is to send logs to 127.0.0.1:12201, and then run a packet redirector to “bounce” or “mirror” these packets to the actual logger; e.g.:
socat UDP-LISTEN:12201 UDP:gelf.container.church:12201

This needs to run on each container host. It is very lightweight, and whenever gelf.container.church is updated, instead of restarting your containers, you merely restart socat.
(You could also send your log packets to a virtual IP, and then use some fancy iptables -t nat … -j DNAT rules to rewrite the destination address of the packets going to this virtual IP.)
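For illustration, the socat relay above is essentially a tiny UDP forwarder; a sketch of the same idea in Python (illustrative only — socat or the iptables rule are the practical choices):

```python
import socket

def make_relay(listen_host="127.0.0.1", listen_port=12201):
    """Bind the local socket that containers send their GELF datagrams to."""
    inbound = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    inbound.bind((listen_host, listen_port))
    return inbound

def relay_one(inbound, target):
    """Forward a single datagram to the real logging server. Resolving
    `target` here (per datagram, or with a cache) is what lets a DNS
    change take effect without restarting the containers."""
    data, _ = inbound.recvfrom(65535)
    out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    out.sendto(data, target)
    out.close()
    return data

# Demo: another local socket pretends to be the remote GELF server.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
inbound = make_relay(listen_port=0)          # port 0 = any free port
relay_port = inbound.getsockname()[1]

# A "container" sends its log datagram to the local relay...
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"gelf payload", ("127.0.0.1", relay_port))
relay_one(inbound, server.getsockname())
forwarded, _ = server.recvfrom(65535)
print(forwarded.decode())
```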
Another option is to run Logstash on each node (instead of just socat). It might seem overkill at first, but it will give you a lot of extra flexibility with your logs: you can do some local parsing, filtering, and even “forking,” i.e. deciding to send your logs to multiple places at the same time. This is particularly convenient if you are switching from one logging system to another, because it will let you feed both systems in parallel for a while (during a transition period).
Running Logstash (or another logging tool) on each node is also very useful if you want to be sure that you don’t lose any log message, because it would be the perfect place to insert a queue (using Redis for simple scenarios, or Kafka if you have stricter requirements).
Even if you end up sending your logs to a service using a different protocol, the GELF driver is probably the easiest one to set up to connect Docker to e.g. Logstash or Fluentd, and then have Logstash or Fluentd speak the other protocol to the logging service.
UDP packets sent to localhost can’t be lost, except if the UDP socket runs out of buffer space. This could happen if your sender (Docker) is faster than your receiver (Logstash/Fluentd), which is why we mentioned a queue earlier: the queue will allow the receiver to drain the UDP buffer as fast as possible to avoid overflows. Combine that with a large enough UDP buffer, and you’ll be safe.
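For reference, the kernel's UDP receive buffer limits can be raised with sysctl settings; the values below are arbitrary examples to tune for your log volume, not recommendations:

```
# Example sysctl settings (illustrative values):
net.core.rmem_max = 8388608       # max receive buffer an app may request
net.core.rmem_default = 8388608   # default receive buffer size
```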
Future directions
Even if running a cluster-wide socat is relatively easy (especially with Swarm mode and docker service create --mode global), we would rather have good behavior out of the box.
There are already some GitHub issues related to this. One of the maintainers has joined the conversation, and there are some people at Docker Inc. who would love to see this improved.
One possible fix is to re-resolve the GELF server name once in a while, and when a change is detected, update the socket destination address. Since DNS provides TTL information, it could even be used to know how long the IP address can be cached.
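A sketch of that fix, in Python for illustration (the driver itself is Go; the `ReResolvingUDPSender` name is made up here, and since the stdlib resolver doesn't expose TTLs, a fixed refresh interval stands in for the TTL):

```python
import socket
import time

class ReResolvingUDPSender:
    """Cache the resolved server address, but refresh it once in a while.
    Ideally the DNS TTL would drive the interval; a fixed interval is
    used here as a stand-in."""

    def __init__(self, host, port, refresh_seconds=30.0):
        self.host, self.port = host, port
        self.refresh = refresh_seconds
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.addr = None
        self.expires = 0.0

    def _target(self):
        now = time.monotonic()
        if self.addr is None or now >= self.expires:
            # Re-resolve; if the record changed, we start sending to
            # the new address without reconnecting or restarting.
            self.addr = (socket.gethostbyname(self.host), self.port)
            self.expires = now + self.refresh
        return self.addr

    def send(self, payload: bytes):
        # No per-message DNS round-trip: at most one lookup per interval.
        self.sock.sendto(payload, self._target())

# Demo against a local listener.
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", 0))
sender = ReResolvingUDPSender("localhost", listener.getsockname()[1])
sender.send(b"hello")
msg, _ = listener.recvfrom(4096)
print(msg.decode())
```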
If you need better GELF support, I have good news: you can help! I’m not going to tell you “just send us a pull request, ha ha ha!” because I know that only a very small number of people have both the time and expertise to do that — but if you are one of them, then by all means, do it! There are other ways to help, though.
First, you can monitor the GitHub issues mentioned above (23679 and 17904). If the contributors and maintainers ask for feedback, indicate what would (or wouldn’t) work for you. If you see a proposition that makes sense and you just want to say “+1,” you can do it with GitHub reactions (the “thumbs up” emoji works perfectly for that). And if somebody proposes a pull request, testing it will be extremely helpful and instrumental in getting it accepted.
If you look at one of these GitHub issues, you will see that there was already a patch proposed a long time ago; but the person who asked for the feature in the first place never tested it, and as a result, it was never merged. Don’t get me wrong, I’m not putting the blame on that person! It’s a good start to have a GitHub issue as a kind of “meeting point” for people needing a feature, and people who can implement it.
It’s quite likely that in a few months, half of this post will be obsolete, because the GELF driver will support TCP connections and/or correctly re-resolve server names for the UDP transport!

The post Adventures in GELF appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Q&A: 15 Questions AWS Users Ask About DDC For AWS

Docker is deployed across all major cloud service providers, including AWS. So when we announced Docker Datacenter for AWS (which makes it even easier to deploy DDC on AWS) and showed live demos of the solution at AWS re:Invent 2016, it was no surprise that we received a ton of interest. Docker Datacenter for AWS, as you can guess from its name, is now the easiest way to install and stand up the Docker Datacenter (DDC) stack on an AWS EC2 cluster. If you are an AWS user looking for an enterprise container management platform, then this blog will help answer questions you have about using DDC on AWS.
In last week’s webinar,  Harish Jayakumar,  Solutions Engineer at Docker, provided a solution overview and demo to showcase how the tool works, and some of the cool features within it. You can watch the recording of the webinar below:

We also hosted a live Q&A session at the end where we opened up the floor to the audience and did our best to get through as many questions as we could. Below are fifteen of the questions we received from the audience. We selected these because we believe they do a great job of representing the overall set of inquiries we received during the presentation. Big shout out to Harish for tag-teaming the answers with me.
Q 1: How many VPCs are required to create a full cluster of UCP, DTR and the workers?
A: The DDC template creates one new VPC, along with its subnets and security groups. More details here: https://ucp-2-1-dtr-2-2.netlify.com/datacenter/install/aws/
However, if you do want to use DDC with your existing VPC you can always deploy DDC directly without using the Cloud Formation template if you would like.
Q 2: Is the $150/month cost per instance? Is this for an EC2 instance?
A: Yes, the $150/month cost is per EC2 instance. This is our monthly subscription model and is purchasable directly on Docker Store. We also offer annual subscriptions that are currently priced at $1,500 per node/per year or $3,000 per node/per year. You can view all pricing here.
Q 3: Would you be able to go over how to view logs for each container? And what's the type of log output that UCP shows in the UI?
A: Within the UCP UI, you can click on the “Resources” tab and then go to “Containers.” Once you have selected “Containers,” you can click on each individual container and see its logs within the UI.

Q 4: How does the resource allocation work? Can we over allocate CPU or RAM?
A: Yes. By default, each container’s access to the host machine’s CPU cycles is unlimited, but you can set various constraints to limit a given container’s access to the host machine’s CPU cycles. For RAM, Docker can enforce hard memory limits, which allow the container to use no more than a given amount of user or system memory. Or Docker can provide soft limits, which allow the container to use as much memory as it needs unless certain conditions are met, such as when the kernel detects low memory or contention on the host machine. You can find more details here: https://docs.docker.com/engine/admin/resource_constraints/
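As a concrete illustration (the image name is a placeholder and the values are arbitrary), these constraints map to flags on docker run:

```shell
# Hard memory cap of 512 MB, soft reservation of 256 MB,
# and at most 1.5 CPUs' worth of cycles:
docker run -d --memory=512m --memory-reservation=256m --cpus=1.5 my-image
```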
Q 5: Can access to the console via UCP be restricted via RBAC constraints?
A: Yes. Here is a blog explaining access controls in detail:

https://blog.docker.com/2016/03/role-based-access-control-docker-ucp-tutorial/

Q 6: Can we configure alerting from Docker Datacenter based on user-definable criteria (e.g. resource utilization of services)?
A: Yes, but with a little tweaking. Everything with Docker is event-driven, so you can configure alerts to trigger for each event and take the necessary action. Within the UI, you can see the usage of all resources listed. You have the ability to set how you want to see the notifications associated with it.
Q 7: Is there a single endpoint in front of the three managers?
A: Within UCP, we suggest teams deploy three managers to ensure high availability of the cluster. As for the single endpoint, you can configure one if you would like. For example, you can configure an ELB in AWS to sit in front of those three (3) managers, and clients can then reach that one load balancer instead of accessing an individual manager by its IP.
Q 8: Do you have to use DTR or can you use alternative registries such as AWS ECR, Artifactory, etc.?
A: With the CloudFormation template, it is only DTR. Docker Datacenter is the end-to-end enterprise container management solution, and DTR/UCP are integrated. This means they share several components between them. They also have SSO enabled between the components, so the same LDAP/AD group can be used. Also, the solution ensures a secure software supply chain, including signing and scanning. The chain is only made possible when using the full solution. The images are signed and scanned by DTR, and because of the integration you can simply configure UCP to not run containers based on images that haven’t been signed. We call this policy enforcement.
Q 9: So there is a single endpoint in front of the managers (like a load balancer) that I can point my Docker CLI to?
A: Yes, that is correct.
Q 10: How many resources on the VMs or physical machines are needed to run Docker Datacenter on-prem? Let's say for three UCP manager nodes and three worker nodes.
A: The CloudFormation template does it all for you. However, if you plan to install DDC outside of the CloudFormation template, here are the infrastructure requirements you should consider:

https://docs.docker.com/ucp/installation/system-requirements/

(installed on Commercially Supported Engine: https://docs.docker.com/cs-engine/install/)
Q 11: How does this demo of DDC for AWS compare to https://aws.amazon.com/quickstart/architecture/docker-ddc/ ?
A: It is the same. But stay tuned, as we will be providing an updated version in the coming weeks.
Q 12: If you don't use a routing mesh, would you need to route to each specific container? How do you know their individual IPs? Is it possible to have a single-tenant type of architecture where each user has his own container running?
A: The routing mesh is available as part of the engine. It’s turned on by default, and it routes to containers cluster-wide. Before the routing mesh (prior to Docker 1.12), you had to route to a specific container and its port. It does not have to be the IP specifically: you can route host names to specific services from within the UCP UI. We also introduced the concept of an alias, where you can address a container by its name, and the engine has a built-in DNS to handle the routing for you. However, I would encourage looking at the routing mesh, which is available in Docker 1.12 and above.
Q 13: Are you using Consul as a K/V store for the overlay network ?
A: No, we are not using Consul as the K/V store, nor does Docker require an external K/V store. The state is stored using a distributed database on the manager nodes called the Raft store. Manager nodes are part of a Raft consensus group. This enables them to share information and elect a leader. The leader is the central authority maintaining the state, which includes the lists of nodes, services and tasks across the swarm, in addition to making scheduling decisions.
Q 14: How do you work with node draining in the context of Auto Scaling Groups (ASG)?
A: A node drain removes all workloads from a node. It prevents the node from receiving new tasks from the manager. It also means the manager stops tasks running on the node and launches replica tasks on nodes with ACTIVE availability. The node does remain in the ASG group.
Q 15: Is DDC for AWS dependent on AWS EBS?
A: We use EBS volumes for the instances, but we aren't using them for persistent storage, more as a local disk cache. Data there will go away if the instance goes away.
To get started with Docker Datacenter for AWS, sign up for a free 30-day trial at www.docker.com/trial.
Enjoy!
 


The post Q&A: 15 Questions AWS Users Ask About DDC For AWS appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Convince your manager to send you to DockerCon

Has it sunk in yet that DockerCon is in roughly 2 months? That’s right, this year we gather in April as a community and ecosystem in Austin, Texas for 3 days of deep learning and networking (with a side serving of Docker fun). DockerCon is the annual community and industry event for makers and operators of next generation distributed apps built with containers. If Docker is important to your daily workflow or your business, you and your team (reach out for group discounts) should attend this conference to stay up to date on the latest progress with the Docker platform and ecosystem.
Do you really want to go to DockerCon, but are having a hard time convincing your manager to pull the trigger and send you? Have you already explained that the sessions, training and hands-on exercises are definitely worth the financial investment and time away from your desk?
Well, fear not! We’ve put together a few more resources and reasons to help convince your manager that DockerCon 2017 on April 17-20, is an invaluable experience you need to attend.
Something for everyone
DockerCon is the best place to learn from and share your experiences with the industry’s greatest minds, and you’re guaranteed to bring learnings back to your team. We will have plenty of learning materials and networking opportunities specific to our 3 main audiences:
1. Developers
From programming language specific workshops such as Docker for Java developers or modernizing monolithic ASP.NET applications with Docker, to sessions on building effective Docker images or using Docker for Mac, Docker for Windows and Docker Cloud, the DockerCon agenda will showcase plenty of hands-on content for developers.

2. IT Pros
The review committee is also making sure to include lots of learning materials for IT pros. Register now if you want to attend the orchestration and networking workshops, as they will sell out soon. Here is the list of Ops-centric topics that will be covered in the breakout sessions: tracing, container and cluster monitoring, container orchestration, securing the software supply chain, and the Docker for AWS and Docker for Azure editions.

3. Enterprise
Every DockerCon attendee will also have the opportunity to learn how Docker offers an integrated Container-as-a-Service platform for developers and IT operations to collaborate in the enterprise software supply chain. Companies with a lot of experience running Docker in production will go over their reference architecture and explain how they brought security, policy and controls to their application lifecycle without sacrificing any agility or application portability. Use case sessions will be heavy on technical detail and implementation advice.
Proof is in the numbers

According to surveyed DockerCon attendees, 91% would recommend investing in DockerCon again; DockerCon 2016 also scored an improved NPS of 61.
DockerCon continues to grow due to high demand. DockerCon attendance has increased 900% since 2014, and 25% in just the last year. We hope to continue to welcome more people to DockerCon and the community each year while preserving the intimacy of the conference.
87% of surveyed attendees said the content and subject matter was relevant to their professional endeavours.  

Take part in priceless learning opportunities
At the heart of DockerCon are amazing learning opportunities from not just the Docker team but the entire community. This year we will provide even more tools and resources to facilitate professional networking, learning and mentorship opportunities based on areas of expertise and interest. No matter your expertise level, DockerCon is the one place where you can not only learn and ask, but also teach and help. To us, this is what makes DockerCon unlike any other conference.
Leave motivated to create something amazing
The core theme of every DockerCon is to celebrate the makers within us all. Through the robust content and pure energy of the community, we are confident that you will leave DockerCon inspired to return to work to apply all of your new knowledge and best practices to your line of work. Don’t believe us? Just check out our closing session of 2016 that featured cool hacks created by community and Docker team members.
DockerCon Schedule
We have extended the conference this year to 3 days, with instructor-led workshops beginning on Monday afternoon. General sessions, breakout sessions and the ecosystem expo will take place Tuesday and Wednesday. We’ve added the extra day to help your overcrammed agendas, with repeats of top sessions, hands-on labs and mini summits taking place on Thursday, April 20.

Plan Your Trip

Sending an employee to a conference is an investment and can be a big expense. Below you will find a budget template to help you plan for your trip. Ready to send an email to your boss about DockerCon? Here is a sample letter you can use as a starting point.
We invite you to join the community and us at DockerCon 2017, and we hope you find this packet of event information useful, including a helpful letter you can use to send to your manager to justify your trip and build a budget estimate. We are confident there’s something at DockerCon for everyone, so feel free to share within your company and networks.


The post Convince your manager to send you to DockerCon appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Remove the obstacles delaying your large-scale cloud adoption

Many CIOs recognize the value of cloud, but there can be significant barriers to large-scale cloud adoption.
Company leaders find their organizations unprepared to migrate the optimal workloads to cloud for early success, facing security challenges they didn’t anticipate, making IT decisions that don’t make financial sense for the business, or taking a minimal or ad hoc approach to governance and organizational issues.
All these scenarios can unintentionally create sub-optimal cloud environments. Organizations looking to avoid or remove such obstacles, and free up IT resources to reinvest in more strategic projects, can join more than 20,000 thought leaders and industry experts at IBM InterConnect 2017, 19-23 March, in Las Vegas.
At InterConnect, speakers will discuss how organizations can prepare for cloud's impact and ensure successful implementations. Here’s a sample of some of the featured topics in the cloud managed services area:
Learn how IBM Cloud Migration Services brings end-to-end cloud services to businesses of all sizes
If resource and skill limitations make moving to a hybrid cloud infrastructure a continuing challenge for your organization, learn how specific pre-migration preparation and migration services can ensure a successful move. Get proof from our experts in the field.
Do you know the six questions every CIO should ask about cloud managed services security?
When evaluating a cloud service provider, asking the right security questions is critical in determining if the solution is a good fit. During this breakout session, meet with our cloud security experts to understand why not all cloud providers can deliver adequate security, the six security questions you should ask and what to listen for to assure you have the right coverage.
Calculate your infrastructure savings from implementing IBM Cloud Managed Services in 20 minutes
In 20 minutes, you can get a personalized report of your potential annual savings from implementing IBM Cloud Managed Services based on your unique information. You can even select a unique estimator tool to use for SAP and Oracle landscapes.
How to stop governance and organizational issues from delaying your large-scale cloud adoption plans
Industry research and IBM experience indicate that governance and organizational issues continue to be a significant barrier to large-scale cloud adoption. This session addresses how organizations can prepare for cloud's impact with IBM cloud governance, organization health check and framework services to shore up their capability to enable and manage cloud services.
Learn more about InterConnect and register. You can use the preview tool to review all sessions, based on your area of interest.
The post Remove the obstacles delaying your large-scale cloud adoption appeared first on news.
Quelle: Thoughts on Cloud

Remove the obstacles delaying your large-scale cloud adoption

Many CIOs recognize the value of cloud, but there can be significant barriers to large-scale cloud adoption.
Company leaders find their organizations unprepared to migrate the right workloads to cloud first, facing security challenges they didn't anticipate, making IT decisions that don't make financial sense for the business, or taking only a minimal, ad hoc approach to governance and organizational issues.
All these scenarios can unintentionally create sub-optimal cloud environments. Organizations looking to avoid or remove such obstacles, and free up IT resources to reinvest in more strategic projects, can join more than 20,000 thought leaders and industry experts at IBM InterConnect 2017, 19-23 March, in Las Vegas.
At InterConnect, speakers will discuss how organizations can prepare for cloud's impact and ensure successful implementations. Here's a sample of some of the featured topics in the cloud managed services area:
Learn how IBM Cloud Migration Services brings end-to-end cloud services to businesses of all sizes
If resource and skill limitations make moving to a hybrid cloud infrastructure a continuing challenge for your organization, learn how specific pre-migration preparation and migration services can ensure a successful move. Get proof from our experts in the field.
Do you know the six questions every CIO should ask about cloud managed services security?
When evaluating a cloud service provider, asking the right security questions is critical in determining if the solution is a good fit. During this breakout session, meet with our cloud security experts to understand why not all cloud providers can deliver adequate security, the six security questions you should ask and what to listen for to assure you have the right coverage.
Calculate your infrastructure savings from implementing IBM Cloud Managed Services in 20 minutes
In 20 minutes, you can get a personalized report of your potential annual savings from implementing IBM Cloud Managed Services based on your unique information. You can even select a unique estimator tool to use for SAP and Oracle landscapes.
How to stop governance and organizational issues from delaying your large-scale cloud adoption plans
Industry research and IBM experience indicate that governance and organizational issues continue to be a significant barrier to large-scale cloud adoption. This session addresses how organizations can prepare for cloud's impact with IBM cloud governance, organization health check and framework services to shore up their capability to enable and manage cloud services.
Learn more about InterConnect and register. You can use the preview tool to review all sessions based on your area of interest.
Source: Thoughts on Cloud