Ensuring Container Image Security on OpenShift with Red Hat CloudForms

In December 2016, a major vulnerability, CVE-2016-9962 (the "on-entry" vulnerability), was found in the Docker engine. It allowed local root users in a container to gain access to file descriptors of a process launched in, or moved into, the container from another namespace. A Banyan security report found that over 30% of official images in Docker Hub contain high-priority security vulnerabilities. And when FlawCheck surveyed enterprises about their top security concern regarding containers in production environments, "vulnerabilities and malware," at 42%, topped the list. Clearly, security is a leading concern for organizations looking to run containers in production.
At Red Hat, we are continuously improving our security capabilities, and we introduced a new container scanning feature with CloudForms 4.2 and OpenShift 3.4. This feature allows CloudForms to flag images in the container registry in which it has found vulnerabilities, and OpenShift to deny execution of those images the next time someone tries to run them.

CloudForms offers multiple ways to initiate a container scan:

A scheduled scan of the registry
An automatic scan based on newly discovered images in the registry
A manual execution of the scan via SmartState Analysis

Having this unique scanning feature natively integrated with OpenShift is a milestone in container security, as it provides near-real-time monitoring of your images within the OpenShift environment.
The following diagram illustrates the flow when an automatic scan is performed.

CloudForms monitors the OpenShift Provider and checks for new images in the registry. If it finds a new image, CloudForms triggers a scan.
CloudForms makes a secure call to OpenShift and requests a scanning container to be scheduled.
OpenShift schedules a new pod on an available node.
The scanning container is started.
The scanning container pulls down a copy of the image to scan.
The image to scan is unpacked and its software contents (RPMs) are sent to CloudForms.
CloudForms may also initiate an OpenSCAP scan of the container.
Once the OpenSCAP scan finishes, the results are uploaded and a report is generated from the CloudForms UI.
If the scan finds any vulnerabilities, CloudForms calls OpenShift to flag the image and prevent it from running.

The next time someone tries to start the vulnerable image, OpenShift alerts the user that the image execution was blocked based on the policy set by CloudForms.
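
Under the hood, this blocking relies on image annotations together with OpenShift's ImagePolicy admission plugin: CloudForms annotates a vulnerable image, and the plugin rejects pods that reference annotated images. Here is a minimal sketch of checking for the annotation from the command line (the annotation name follows the OpenShift 3.4 image policy documentation; the image ID is a placeholder):

# Inspect an image for the deny-execution annotation that CloudForms
# sets when its scan finds vulnerabilities
oc describe image <image-id> | grep images.openshift.io/deny-execution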

As you can see, Red Hat CloudForms can be used as part of your IT security and compliance management to help identify and validate that workloads are secure across your infrastructure stack, from hosts and virtual machines to cloud instances and containers.
Source: CloudForms

Why 80 percent of companies are increasing use of cloud managed services

This is the first in a two-part interview series with Lynda Stadtmueller, vice president of cloud services for the analyst firm Frost & Sullivan.
Thoughts on Cloud (ToC): A recent survey by Frost & Sullivan reported that 80 percent of US companies are planning to increase their use of cloud managed services. What factors are driving this increase?
Lynda Stadtmueller, vice president of cloud services, Frost & Sullivan: There are two main factors driving this increase: cloud is more complex than expected, and the stakes are now higher than ever.
With cloud, businesses know they have a tremendous technology delivery model at their fingertips, but they don’t always know how to harness it. They might not have the expertise on staff. The self-service cloud might be more complex than they expected.
Additionally, the stakes for getting it right are high. As a result, they’re turning to specialists who can provide the management overlay to make sure that workloads are secure, efficient and cost controlled.
ToC: Does that 80 percent include companies that already use a managed cloud hosting solution and plan to increase those services?
(Source: 2015 Frost & Sullivan cloud survey of US-based IT decision makers)
LS: Yes. There are more types of cloud managed services available now than in the past. For example, a company using some sort of cloud infrastructure management may realize that it has non-cloud legacy applications that aren't running as efficiently as it would prefer. The right provider can bring the benefits of cloud to legacy applications. In these cases, companies are adding that to their managed services agreements. They're adding more workloads, more infrastructure and more applications to the cloud.
ToC: Is driving cloud value in legacy applications the single biggest reason for that type of increase?
LS: It's a big one. Interestingly, in many companies these decisions are made separately. The person who manages the SAP workload may not be the same person who makes decisions about cloud infrastructure services.
And yet, as the company moves from point solutions to a holistic hybrid cloud strategy, that's when those collaborative conversations are happening. At a higher level, the organization may decide it can move its most challenging workloads into a cloud managed service model and recognize those benefits across multiple lines of business.
Come back soon for part two of our interview with Lynda Stadtmueller. To learn more about the value of cloud managed services, watch a short webcast featuring insights from Frost & Sullivan, “How Managed Cloud Services Can Help You Achieve Your Business Goals.”
The post Why 80 percent of companies are increasing use of cloud managed services appeared first on Cloud computing news.
Source: Thoughts on Cloud

3 Things you’ll learn about private cloud at InterConnect

Many companies are embracing a private cloud strategy to run their business. They want to reduce cost and effort while improving agility, IT processes and resource scalability. Private cloud addresses these needs, offering a dedicated, single-tenant cloud environment either on-site or off-premises.
InterConnect 2017, the industry’s premier cloud conference, is the perfect place to learn about private cloud implementations and best practices from experts and peers. Three key things we’re showcasing through sessions, panels, labs and hands-on demos at InterConnect:

It’s easy to adopt private cloud
Private cloud can transform your business
IBM can help get you there quickly

Here are just a few highlights to work into your schedule.
Session: Five years of business value with PureApplication at DTCO, from bleeding edge to proven technology
The Dutch Tax and Customs Office needed to keep pace with new releases of all the different components of their technology stack. They aimed to simplify their software and application environment to accelerate application delivery. Adopting DevOps, IBM PureApplication, and patterns for WebSphere and Master Data Management (MDM), DTCO not only gained faster time to market but also realized the full potential of private cloud.
Session: How DevOps enhanced quality and speed of delivery for the Israeli Government's Welfare Department
The Israeli Government's Welfare Department wanted to deliver applications to market faster and with higher quality. To get there, they used an agile, DevOps approach. They brought in IBM UrbanCode Deploy to automate deployments, and they complemented that with IBM Bluemix Local System to streamline app environment provisioning. As a result, the organization improved app delivery time from three months to three weeks and reduced provisioning times from two weeks to 50 minutes.
Session: IBM Bluemix Private Cloud for cloud service providers: Materna's experiences and technical insight
IT consulting company Materna succeeded in capturing new customers with a cloud-based delivery of its solution on IBM Bluemix Private Cloud. In this session, their executives will walk through their process of adopting cloud technologies. They will discuss how they decided which workloads to run on the cloud and how they addressed multi-tenancy, audit and compliance, networking and more.
Session: IBM PureApplication and Bluemix Local System Patterns: Roadmap and directions
Pre-built, customizable application patterns help you deploy application environments faster and more reliably across your private cloud. In this session, experts will discuss how patterns can help you improve application time-to-market so you can focus more on innovating and serving your clients.
Now that you know about a few of the private cloud sessions at InterConnect, it’s time for you to act. Register now and get ready for an incredible experience of learning and networking with your peers and some of the top experts in private cloud. See you there.
The post 3 Things you'll learn about private cloud at InterConnect appeared first on Cloud computing news.
Source: Thoughts on Cloud

Azure Command Line 2.0 now generally available

Back in September, we announced Azure CLI 2.0 Preview. Today, we're announcing the general availability of the vm, acs, storage and network commands in Azure CLI 2.0. These commands provide a rich interface for a large array of use cases, from disk and extension management to container cluster creation.

Today's announcement means that customers can now use these commands in production, with full support from Microsoft, both through our Azure support channels and on GitHub. We don't expect breaking changes for these commands in new releases of Azure CLI 2.0.

This new version of Azure CLI should feel much more native to developers familiar with command-line experiences in the bash environment on Linux and macOS. It offers simple commands with smart defaults for the most common operations, supports tab completion, and produces pipeable output for use with other text-parsing tools like grep, cut, jq and the popular JMESPath query syntax. It's easy to install on the platform of your choice and easy to learn.

During the preview period, we've received valuable feedback from early adopters and have added new features based on that input. The number of Azure services supported in Azure CLI 2.0 has grown, and we now have command modules for sql, documentdb, redis, and many other services on Azure. We also have new features to make working with Azure CLI 2.0 more productive. For example, we've added the "--wait" and "--no-wait" capabilities that enable users to respond to external conditions or continue the script without waiting for a response.

We’re also very excited about some new features in Azure CLI 2.0, particularly the combination of Bash and CLI commands, and support for new platform features like Azure Managed Disks.

Here’s how to get started using Azure CLI 2.0.

Installing the Azure CLI

The CLI runs on Mac, Linux, and of course, Windows. Get started now by installing the CLI on whatever platform you use. Also, review our documentation and samples for full details on getting started with the CLI, and on how to access Azure services using the CLI in scripts.
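
For example, on Linux or macOS you can install the CLI with the official install script, or from PyPI with pip (a sketch of two common approaches; see the install docs for your platform):

# Install with the interactive install script
curl -L https://aka.ms/InstallAzureCli | bash

# Or install the azure-cli package from PyPI
pip install --user azure-cli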

As an example of the breadth of the "vm" command group, you can explore its subcommands and their options through the built-in help:
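
# List the subcommands available under the vm command group
az vm --help

# Get detailed help, with examples, for a specific subcommand
az vm create --help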

Working with the Azure CLI

Accessing Azure and starting one or more VMs is easy. Here are two lines of code that will create a resource group (a way to group and manage Azure resources) and a Linux VM using Azure's latest Ubuntu VM image in the westus2 region of Azure.

az group create -n MyResourceGroup -l westus2
az vm create -g MyResourceGroup -n MyLinuxVM --image UbuntuLTS

Using the public IP address for the VM (which you get in the output of the "vm create" command, or can look up separately using the "az vm list-ip-addresses" command), connect directly to your VM from the command line:

ssh <public ip address>
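
If you'd rather not copy the address by hand, you can capture it inline with a JMESPath query (a sketch; the query path assumes the default output shape of "az vm list-ip-addresses"):

ssh $(az vm list-ip-addresses -g MyResourceGroup -n MyLinuxVM --query "[0].virtualMachine.network.publicIpAddresses[0].ipAddress" --output tsv)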

For Windows VMs on Azure, you can connect using remote desktop ("mstsc <public ip address>" from Windows desktops).

The "vm create" command is a long-running operation, and it may take some time for the VM to be created, deployed, and available for use on Azure. In most automation scenarios, waiting for this command to complete before running the next one is fine, since the result of this command may be used by the next command. In other cases, though, you may want to continue using other commands while a previous one is still running and waiting for results from the server. Azure CLI 2.0 now supports a new "--no-wait" option for such scenarios.

az vm create -n MyLinuxVM2 -g MyResourceGroup --image UbuntuLTS --no-wait

As with resource groups and virtual machines, you can use the Azure CLI 2.0 to create other resource types in Azure using the "az <resource type name> create" naming pattern.

For example, you can create managed resources on Azure, like web apps within Azure App Service:

# Create an Azure App Service plan that we can use to host multiple web apps
az appservice plan create -n MyAppServicePlan -g MyResourceGroup

# Create two web apps within the plan (note: the name param must be a unique DNS entry)
az appservice web create -n MyWebApp43432 -g MyResourceGroup --plan MyAppServicePlan
az appservice web create -n MyWebApp43433 -g MyResourceGroup --plan MyAppServicePlan

Read the CLI 2.0 reference docs to learn more about the create command options for various Azure resource types. Azure CLI 2.0 also lets you list your Azure resources, with a choice of several output formats:

--output   Description
json       JSON string. The default. Best for integrating with query tools, etc.
jsonc      Colorized JSON string.
table      Table with column headings. Shows only a curated list of common properties for the selected resource type, in human-readable form.
tsv        Tab-separated values with no headers. Optimized for piping to other text-processing commands and tools like grep, awk, etc.

You can use the "--query" option with the list command to find specific resources, and to customize the properties that you want to see in the output. Here are a few examples:

# List all VMs in a given resource group
az vm list -g MyResourceGroup --output table

# List all VMs in a resource group whose name contains the string 'My'
az vm list --query "[?contains(resourceGroup,'My')]" --output tsv

# Same as above, but show only the 'name' and 'osType' properties instead of all default properties for the selected VMs
az vm list --query "[?contains(resourceGroup,'My')].{name:name, osType:storageProfile.osDisk.osType}" --output table

Azure CLI 2.0 supports management operations against SQL Server on Azure. You can use it to create servers, databases, data warehouses, and other data sources; and to show usage, manage administrative logins, and run other management operations.

# Create a new SQL Server on Azure
az sql server create -n MySqlServer -g MyResourceGroup --administrator-login <admin login> --administrator-login-password <admin password> -l westus2

# Create a new SQL Server database
az sql db create -n MySqlDB -g MyResourceGroup --server-name MySqlServer -l westus2

# List available SQL databases on the server within a resource group
az sql db list -g MyResourceGroup --server-name MySqlServer

Scripting with the new Azure CLI 2.0 features

The new ability to combine Bash and Azure CLI 2.0 commands in the same script can be a big time saver, especially if you're already familiar with Linux command-line tools like grep, cut, jq and JMESPath queries.

Let's start with a simple example that stops a VM in a resource group using the VM's resource ID (or multiple IDs separated by spaces):

az vm stop --ids '<one or more ids>'

You can also stop a VM in a resource group using the VM's name. Here's how to stop the VM we created above:

az vm stop -g MyResourceGroup -n MyLinuxVM

For a more complicated use case, let's imagine we have a large number of VMs in a resource group, running both Windows and Linux. To stop all running Linux VMs in that resource group, we can use a JMESPath query, like this:

os="Linux"
rg="resourceGroup"
ps="VM running"
rvq="[].{resourceGroup: resourceGroup, osType: storageProfile.osDisk.osType, powerState: powerState, id:id} | [?osType=='$os'] | [?resourceGroup=='$rg'] | [?powerState=='$ps'] | [].id"
az vm stop --ids $(az vm list --show-details --query "$rvq" --output tsv)

This script issues an "az vm stop" command, but only for the VMs returned by the JMESPath query (as defined in the rvq variable). The query keeps only VMs whose storageProfile.osDisk.osType, resourceGroup and powerState properties match the values of the os, rg and ps variables, and returns the matching VM IDs (in tsv format) for use by the "az vm stop" command.

Azure Container Services in the CLI

Azure Container Service (ACS) simplifies the creation, configuration, and management of a cluster of virtual machines that are preconfigured to run container applications. You can use Docker images with DC/OS (powered by Apache Mesos), Docker Swarm or Kubernetes for orchestration.

The Azure CLI supports the creation and scaling of ACS clusters via the az acs command. You can discover full documentation for Azure Container Services, as well as a tutorial for deploying an ACS DC/OS cluster with Azure CLI commands.
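
For instance, creating a Kubernetes-backed cluster and then scaling its agent pool might look like this (a sketch; the cluster name is illustrative):

# Create an ACS cluster using Kubernetes as the orchestrator
az acs create -n MyCluster -g MyResourceGroup --orchestrator-type kubernetes --generate-ssh-keys

# Scale the cluster's agent pool to five nodes
az acs scale -n MyCluster -g MyResourceGroup --new-agent-count 5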

Scale with Azure Managed Disks using the CLI

Microsoft recently announced the general availability of Azure Managed Disks to simplify the management and scaling of Virtual Machines. You can create a Virtual Machine with an implicit Managed Disk for a specific disk image, and also create managed disks from blob storage or standalone with the "az vm disk" command. Updates and snapshots are easy as well; check out what you can do with Managed Disks from the CLI.
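
As a quick sketch (the resource names here are illustrative), creating a standalone managed disk and then snapshotting it looks like this:

# Create a standalone managed data disk
az disk create -g MyResourceGroup -n MyDataDisk --size-gb 128

# Create a snapshot of the managed disk
az snapshot create -g MyResourceGroup -n MyDataDiskSnap --source MyDataDisk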

Start using Azure CLI 2.0 today!

Whether you are an existing CLI user or starting a new Azure project, it’s easy to get started with the CLI at http://aka.ms/CLI and master the command line with our updated docs and samples. Check out topics like installing and updating the CLI, working with Virtual Machines, creating a complete Linux environment including VMs, Scale Sets, Storage, and network, and deploying Azure Web Apps – and let us know what you think!

Azure CLI 2.0 is open source and on GitHub.

In the next few months, we’ll provide more updates. As ever, we want your ongoing feedback! Customers using the vm, storage and network commands in production can contact Azure Support for any issues, reach out via StackOverflow using the azure-cli tag, or email us directly at azfeedback@microsoft.com.
Source: Azure

Amazon DynamoDB now supports automatic item expiration with Time-to-Live (TTL)

Amazon DynamoDB Time-to-Live (TTL) enables you to automatically delete expired items from your tables, at no additional cost. Now, you no longer need to deal with the complexity and cost of manually scanning your tables and deleting the items that you don’t want to retain. Instead, you can simply specify an attribute containing the timestamp when each item in your table should expire and DynamoDB will delete the items for you automatically. You can also view and archive items deleted via TTL using DynamoDB Streams.
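
As a sketch of how this works with the AWS CLI (the table and attribute names here are illustrative), you enable TTL on a table by naming the attribute that holds the expiry timestamp, then write items with an epoch-seconds value in that attribute:

# Enable TTL on a table, using the 'expiresAt' attribute as the expiry timestamp
aws dynamodb update-time-to-live --table-name Sessions \
    --time-to-live-specification "Enabled=true, AttributeName=expiresAt"

# Write an item that DynamoDB will delete automatically after the given epoch time
aws dynamodb put-item --table-name Sessions \
    --item '{"sessionId": {"S": "abc123"}, "expiresAt": {"N": "1489363200"}}'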
Source: aws.amazon.com

Incident management at Google — adventures in SRE-land

By Paul Newson, Incident Commander

Have you ever wondered what happens at Google when something goes wrong? Our industry is fond of using colorful metaphors such as “putting out fires” to describe what we do.
Of course, unlike actual firefighters, our incidents don't normally involve risk to life and limb. Despite the imperfect metaphor, Google Site Reliability Engineers (SREs) have a lot in common with first responders in other fields.

Like these other first responders, SREs at Google regularly practice emergency response, honing the skills, tools, techniques and attitude required to quickly and effectively deal with the problem at hand.

In emergency services, and at Google, when something goes wrong, it’s called an “incident.”

This is the story of my first “incident” as a Google SRE.

Prologue: preparation
For the past several months, I’ve been on a Mission Control rotation with the Google Compute Engine SRE team. I did one week of general SRE training. I learned about Compute Engine through weekly peer training sessions, and by taking on project work. I participated in weekly “Wheel of Misfortune” sessions, where we’re given a typical on-call problem and try to solve it. I shadowed actual on-callers, helping them respond to problems. I was secondary on-call, assisting the primary with urgent issues, and handling less urgent issues independently.

Sooner or later, after all the preparation, it’s time to be at the sharp end. Primary on-call. The first responder.

Editor’s Note: Chapter 28 “Accelerating SREs to On-Call and Beyond” in Site Reliability Engineering goes into detail about how we prepare new SREs to be ready to be first responders.

Going on-call
There's a lot more to being an SRE than being on-call. On-call is, by design, a minority of what Site Reliability Engineers (SREs) do, but it's also critical. Not only because someone needs to respond when things go wrong, but because the experience of being on-call informs many other things we do as SREs.

During my first on-call shifts, our alerting system saw fit to page[1] me twice, and two other problems were escalated to me by other people. With each page, I felt a hit of adrenaline. I wondered "Can I handle this? What if I can't?" But then I started to work the problem in front of me, like I was trained to, and I remembered that I don't need to know everything; there are other people I can call on, and they will answer. I may be on point, but I'm not alone.

Editor’s Note: Chapter 11 “Being On-Call” in Site Reliability Engineering has lots of advice on how to organize on-call duties in a way that allows people to be effective over the long term.

It's an incident!
Three of the pages I received were minor. The fourth was more, shall we say... interesting?

Another Google engineer using Compute Engine for their service had a test automation failure, and upon investigation noticed something unusual with a few of their instances. They notified the development team’s primary on-call, Parya, and she brought me into the loop. I reached out to my more experienced secondary, Benson, and the three of us started to investigate, along with others from the development team who were looped in. Relatively quickly we determined it was a genuine problem. Having no reason to believe that the impact was limited to the single internal customer who reported the issue, we declared an incident.

What does declaring an incident mean? In principle it means that an issue is of sufficient potential impact, scope and complexity that it will require a coordinated effort with well defined roles to manage it effectively. At some point, everything you see on the summary page of the Google Cloud Status Dashboard was declared an incident by someone at Google. In practice, declaring an incident at Google means creating a new incident in our internal incident management tool.

As part of my on-call training, I was trained on the principles behind Google’s incident management protocol, and the internal tool that we use to facilitate incident response. The incident management protocol defines roles and responsibilities for the individuals involved. Earlier I asserted that Google SREs have a lot in common with other first responders. Not surprisingly, our incident management process was inspired by, and is similar to, well established incident command protocols used in other forms of emergency response.

My role was Incident Commander. Less than seven minutes after I declared the incident, a member of our support team took on the External Communications role. In this particular incident, we did not declare any other formal roles, but in retrospect, Parya was the Operations Lead; she led the efforts to root-cause the issue, pulling in others as needed. Benson was the Assistant Incident Commander, as I asked him a series of questions of the form “I think we should do X, Y and Z. Does that sound reasonable to you?”

One of the keys to effective incident response is clear communication between incident responders, and others who may be affected by the incident. Part of that equation is the incident management tool itself, which is a central place that Googlers can go to know about any ongoing incidents with Google services. The tool then directs Googlers to additional relevant resources, such as an issue in our issue-tracking database that contains more details, or the communications channels being used to coordinate the incident response.

Editor's Note: Chapters 12, 13 and 14 of Site Reliability Engineering discuss effective troubleshooting, emergency response and managing incidents, respectively.

The rollback — an SRE’s fire extinguisher
While some of us worked to understand the scope of the issue, others looked for the proximate and root causes so we could take action to mitigate the incident. The scope was determined to be relatively limited, and the cause was tracked down to a particular change included in a release that was currently being rolled out.

This is quite typical. The majority of problems in production systems are caused by changing something — a new configuration, a new binary, or a service you depend on doing one of those things. There are two best practices that help in this very common situation.

First, all non-emergency changes should use a progressive rollout, which simply means don’t change everything at once. This gives you the time to notice problems, such as the one described here, before they become big problems affecting large numbers of customers.

Second, all rollouts should have a well understood and well tested rollback mechanism. This means that once you understand which change is responsible for the problem, you have an “undo” button you can press to restore service.

Keeping your problems small using a progressive rollout, and then mitigating them quickly via a trusted rollback mechanism are two powerful tools in the quest to meet your Service Level Objectives (SLOs).

This particular incident followed this pattern. We caught the problem while it was small, and then were able to mitigate it quickly via a rollback.

Editor’s Note: Chapter 36 “A Collection of Best Practices for Production Services” in Site Reliability Engineering talks more about these, and other, best practices.

Epilogue: the postmortem
With the rollback complete, and the problem mitigated, I declared the incident “closed.” At this point, the incident management tool helpfully created a postmortem document for the incident responders to collaborate on. Taking our firefighting analogy to its logical conclusion, this is analogous to the part where the fire marshal analyzes the fire, and the response to the fire, to see how similar fires could be prevented in the future, or handled more effectively.

Google has a blameless postmortem culture. We believe that when something goes wrong, you should not look for someone to blame and punish. Chances are the people in the story were well intentioned, competent and doing the best they could with the information they had at the time. If you want to make lasting change, and avoid having similar problems in the future, you need to look to how you can improve the systems, tools and processes around the people, such that a similar problem simply can’t happen again.

Despite the relatively limited impact of the incident, and the relatively subtle nature of the bug, the postmortem identified nine specific follow-up actions that could potentially avoid the problem in the future, or allow us to detect and mitigate it faster if a similar problem occurs. These nine issues were all filed in our bug tracking database, with owners assigned, so they’ll be considered, researched and followed up on in the future.

The follow-up actions are not the only outcome of the postmortem. Since every incident at Google has a postmortem, and since we use a common template for our postmortem documents, we can perform analysis of overall trends. For example, this is how we know that a significant fraction of incidents at Google come from configuration changes. (Remember this the next time someone says “but it’s just a config change” when trying to convince you that it’s a good idea to push it out late on the Friday before a long weekend . . .)

Postmortems are also shared within the teams involved. On the Compute Engine team, for example, we have a weekly incident review meeting, where incident responders present their postmortem to a broader group of SREs and developers who work on Compute Engine. This helps identify additional follow-up items that may have been overlooked, and shares the lessons learned with the broader team, making everyone better at thinking about reliability from these case studies. It's also a very strong way to reinforce Google's blameless postmortem culture. I recall one of these meetings where the person presenting the postmortem attempted to take blame for the problem. The person running the meeting said "While I appreciate your willingness to fall on your sword, we don't do that here."

The next time you read the phrase “We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence” on our status page, I hope you’ll remember this story. Having experienced firsthand the way we follow up on incidents at Google, I can assure you that it’s not an empty promise.

Editor’s Note: Chapter 15, “Postmortem Culture: Learning from Failure” in Site Reliability Engineering discusses postmortem culture in depth.

[1] We don't actually use pagers anymore, of course, but we still call it "getting paged" no matter what device or communications channel is used.

Source: Google Cloud Platform

Azure brings 5 new services to Canada

Since the beginning of the year, we’ve deployed multiple new services in Canada. Please find below a brief summary of recently deployed services.

Available now

HDInsight is the only fully managed cloud Hadoop offering that provides optimized open source analytic clusters for Spark, Hive, MapReduce, HBase, Storm, Kafka, and R Server, backed by a 99.9% SLA. Each of these big data technologies and ISV applications is easily deployable as a managed cluster with enterprise-level security and monitoring.

Learn more about HDInsight.

Azure Functions is an event-based serverless compute experience that accelerates your development. It scales based on demand, and you pay only for the resources you consume. Azure Functions' numerous triggers and bindings, such as HTTP, storage, queues, and event streams, allow you to quickly build solutions with less code.

Learn more about Azure Functions.

Managed Disks makes managing your VM disks much simpler. With Managed Disks, customers only need to specify the desired disk type (Standard or Premium) and the disk size, and Azure will create and manage the disk for them. In addition, Managed Disks comes with enhanced VM scale set (VMSS) capabilities, such as being able to define scale sets with attached data drives and create a scale set with up to 1,000 VMs from Azure platform/marketplace images.

Learn more about Managed Disks in our General Availability Announcement.

Azure Site Recovery contributes to your BCDR strategy by orchestrating replication of on-premises virtual machines and physical servers. You replicate servers and VMs from your primary on-premises datacenter to Azure or to a secondary datacenter.

Learn more about Azure Site Recovery.

For the past few months, Azure Backup required service registration via PowerShell. This is no longer required, and you can use Backup directly in the Azure Portal. All subscriptions that were registered previously will continue to work without any intervention. In addition, Hybrid Backup (on-premises to Azure backup) is now deployed and also available in the Azure Portal. Azure Backup can be used to back up, protect, and restore your data in the Microsoft cloud. It replaces your existing on-premises or off-site backup solution with a cloud-based solution that is reliable, secure, and cost-competitive.

Learn more about Azure Backup.
Source: Azure

Build Your DockerCon Agenda!

It’s that time of the year again…the DockerCon Agenda Builder is live!
Whether you are a Docker beginner or have been dabbling in containers for a while now, we’re confident that DockerCon 2017 will have the right content for you. With 7 tracks and more than 60 sessions presented by Docker Engineering, Docker Captains, community members and corporate heavyweights such as Intuit, MetLife, PayPal, Activision and Netflix, DockerCon 2017 will cover a wide range of container tech use cases and topics.
Build your agenda
We encourage you to review the catalogue of DockerCon sessions and build your agenda for the week. You’ll find a new agenda builder that allows you to apply filters based on your areas of interest, experience, job role and more!
Check Out All The Sessions

One of our favorite features of the Agenda Builder is the recommendations it generates based on your profile and the sessions you've marked as interests. To unlock the recommendations feature, you'll need to sign up for a DockerCon account.

Within this tool you'll be able to adjust your agenda, rate sessions and add notes to reference after the conference. All of your selections will be available in the DockerCon mobile app once it launches.
So without further ado, happy DockerCon agenda building!
DockerCon All the Things
More info about DockerCon:

What’s new at DockerCon?
5 reasons to attend DockerCon
Convince your manager to send you to DockerCon

 


The post Build Your DockerCon Agenda! appeared first on Docker Blog.
Source: https://blog.docker.com/feed/