Docker Tools for Modernizing Traditional Applications

Over the past two years Docker has worked closely with customers to modernize portfolios of traditional applications with Docker container technology and Docker Enterprise, the industry-leading container platform. Such applications are typically monolithic in nature, run atop older operating systems such as Windows Server 2008 or Windows Server 2003, and are difficult to transition from on-premises data centers to the public cloud.

The Docker platform alleviates each of these pain points by decoupling an application from a particular operating system, enabling microservice architecture patterns, and fostering portability across on-premises, cloud, and hybrid environments.
As the Modernizing Traditional Applications (MTA) program has matured, Docker has invested in tooling and methodologies that accelerate the transition to containers and decrease the time necessary to experience value from the Docker Enterprise platform. From the initial application assessment process to running containerized applications on a cluster, Docker is committed to improving the experience for customers on the MTA journey.
Application Discovery & Assessment
Enterprises develop and maintain extensive portfolios of applications. Such apps span a myriad of languages, frameworks, and architectures, developed by both first- and third-party development teams. The first step in the containerization journey is to determine which applications are strong initial candidates and where to begin the process.
A natural instinct is to choose the most complex, sophisticated application in a portfolio to begin containerization, the rationale being that if it works for the toughest app, it will work for less complex ones. For an organization new to the Docker ecosystem, this approach can be fraught with challenges. Beginning with an application that is less complex, yet still representative of the overall portfolio and aligned with organizational goals, builds container experience and skill before the tougher applications are encountered.
Docker has developed a series of archetypes that help "bucket" similar applications together based on architectural characteristics and estimated level of effort for containerization.

Placing each application within an archetype helps estimate the level of effort for a given portfolio and aids in determining good initial candidates for a containerization project. There are a variety of methods for executing such an evaluation, including the following (a toy scoring sketch follows the list):

Manual discovery and assessment involves humans examining each application within a portfolio. For smaller numbers of apps this approach is often manageable; however, it is difficult to scale to hundreds or thousands of applications.
Configuration Management Databases (CMDBs), when used within an organization, provide existing and detailed information about a given environment. Introspecting such data can aid in establishing application characteristics and related archetypes.
Automated tooling from vendors such as RISC Networks, Movere, BMC Helix Discovery, and others provides detailed assessments of data center environments by monitoring servers for a period of time and then generating reports. Such reports may be used in containerization initiatives and are helpful in understanding interdependencies between workloads.
Systems Integrators may be engaged to conduct a formal portfolio evaluation. Such integrators often have mature methodologies and proprietary tooling to aid in the assessment of applications.
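As a purely illustrative sketch (not Docker's actual archetype methodology), a simple scoring heuristic for bucketing applications might look like the following; the attributes, weights, and thresholds are all hypothetical:

```python
# Toy heuristic for bucketing apps into containerization archetypes.
# All attribute names, weights, and thresholds are hypothetical.

def effort_score(app):
    """Estimate relative containerization effort from app characteristics."""
    score = 0
    score += {"2003": 3, "2008": 2, "2012+": 1}.get(app["os_version"], 2)
    score += 3 if not app["source_available"] else 0
    score += app["external_dependencies"]  # one point per dependency
    score += 2 if app["stateful"] else 0
    return score

def archetype(app):
    """Bucket an app by estimated effort."""
    s = effort_score(app)
    if s <= 3:
        return "good initial candidate"
    elif s <= 6:
        return "moderate effort"
    return "defer until the team has container experience"

apps = [
    {"name": "intranet-portal", "os_version": "2008",
     "source_available": True, "external_dependencies": 1, "stateful": False},
    {"name": "billing-engine", "os_version": "2003",
     "source_available": False, "external_dependencies": 4, "stateful": True},
]
for app in apps:
    print(app["name"], "->", archetype(app))
```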

Automated Containerization
Building a container for a traditional application can present several challenges. The original developers of an application are often long gone, making it difficult to understand how the application logic was constructed. Formal source code is often unavailable; applications instead run on virtual machines with no assets tracked in a source control system. Scaling containerization efforts across dozens or hundreds of applications is time intensive and complicated.
These pain points are alleviated with the use of a conversion tool developed by Docker. Part of the Docker Enterprise platform, this tool automates the generation of Dockerfiles for applications running on virtual machines or bare metal servers. A server is scanned to determine how the operating system is configured, how web servers are set up, and how application code is running. The data is then assembled into a Dockerfile, and the application code is pulled into a directory, ready for a docker build on a modern operating system. For example, a Windows Server 2003 environment can be scanned to generate Dockerfiles for IIS-based .NET applications running in disparate IIS Application Pools. This automation shifts the user from author to editor of a Dockerfile, significantly decreasing the time and effort involved in containerizing traditional applications.
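As a hypothetical sketch of the kind of Dockerfile such a tool might emit for an IIS-hosted .NET application (the site name, paths, port, and base image are assumptions, not the tool's literal output):

```dockerfile
# Illustrative only: roughly what a generated Dockerfile for a classic
# ASP.NET app moving off Windows Server 2003/2008 might look like.
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8

# Copy the application code extracted from the source server.
COPY app/ /inetpub/wwwroot/legacy-app/

# Recreate the discovered IIS configuration: a dedicated application
# pool and a site pointing at the copied content (names hypothetical).
RUN powershell -Command "Import-Module WebAdministration; New-WebAppPool -Name 'LegacyAppPool'; New-Website -Name 'LegacyApp' -Port 8080 -PhysicalPath 'C:/inetpub/wwwroot/legacy-app' -ApplicationPool 'LegacyAppPool'"

EXPOSE 8080
```

In this sketch the user's remaining job is editing: adjusting the base image tag, port, and pool settings, rather than authoring the file from scratch.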

Cluster Management
Running containers on a single server may be sufficient for a single developer, but operationalizing container-based workloads requires a cluster of servers working together. Historically, the creation and management of such clusters was often controlled by a particular public cloud provider, tying the user to that provider's infrastructure.
A new Docker CLI plugin, called Docker Cluster, is included in the Docker Enterprise 3.0 platform. Docker Cluster streamlines the initial creation of a Docker Enterprise cluster by consuming a declarative YAML file to automatically provision and configure infrastructure resources. Cluster may be used across a variety of infrastructure vendors, including Azure, AWS, and VMware, to stand up identical container platforms on each of the major infrastructure targets; a hypothetical sketch of such a file follows. This flexibility reduces lock-in to a single provider, enables consistency across multi-cloud and hybrid environments, and provides the option of deploying containers via either the Kubernetes or Swarm orchestrators.
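As a rough, hypothetical sketch of what such a declarative cluster file might contain (the key names, versions, and values here are illustrative assumptions; consult the Docker Cluster documentation for the actual schema):

```yaml
# Illustrative only: a declarative cluster definition of the kind
# Docker Cluster consumes. Keys, versions, and sizes are assumptions.
provider:
  aws:
    region: us-east-1
cluster:
  engine:
    version: "ee-stable-19.03"
resource:
  aws_instance:
    managers:
      quantity: 3
      instance_type: m5.large
    workers:
      quantity: 5
      instance_type: m5.xlarge
```

The value of the declarative approach is that the same file shape, with a different provider block, can stand up an equivalent cluster on Azure or VMware.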

Beyond the automation tooling, Docker also offers detailed, infrastructure-specific Reference Architectures for Certified Infrastructure partners that catalogue best practices for various providers. These documents offer exhaustive guidance on implementing Docker Enterprise, complementing the automated CLI tooling. Additional guidance on integrating Docker Enterprise with common container ecosystem solutions can be found in Docker's library of Solution Briefs.
Provisioning and managing a Docker Enterprise cluster has been significantly simplified with the introduction of Docker Cluster, Solution Briefs, and Reference Architectures. These tools allow you to focus on containerizing legacy applications rather than investing additional time into the setup of a container cluster.


Call to Action

Watch the video 
Learn more about Docker Enterprise
Find out more about Docker containers

Source: https://blog.docker.com/feed/

3 steps to gain business value from AI

Many customers have asked us this profound question: how do we realize business value from artificial intelligence (AI) initiatives after a proof of concept (POC)? Enterprises are excited about the potential of AI, and some even create a POC as a first step. However, many are then stymied by a lack of clarity on the business value or return on investment, and we have heard the same question from data science teams whose machine learning (ML) models are under-utilized by their organizations. At Google Cloud, we're committed to helping organizations of all sizes transform themselves with AI, and we have worked with many of our customers to help them derive value from their AI investments. AI is a team sport that requires strong collaboration between business analysts, data engineers, data scientists, and machine learning engineers. We therefore recommend discussing the following three steps with your team to realize the most business value from your AI projects:

Step 1: Align AI projects with business priorities and find a good sponsor.
Step 2: Plan for explainable ML in models, dashboards, and displays.
Step 3: Broaden expertise within the organization on data analytics and data engineering.

Step 1: Align AI projects with business priorities and find a good sponsor

The first step to realizing value from AI is to identify the right business problem and a sponsor committed to using AI to solve it. Teams often get excited by the prospect of applying AI to a problem without thinking deeply about how that problem contributes to overall business value. For example, using AI to better classify objects might be less valuable to the bottom line than, say, a great chatbot. Yet many businesses skip the critical step of aligning the AI project with the business challenges that matter most.

Identify the right business problem. To ensure alignment, start with your organization's business strategy and key priorities, and identify the priorities that can gain the most from AI. The person doing this assessment needs a good understanding of the most common use cases for AI and ML; it could be a data science director or a team of business analysts and data scientists. Keep a shortlist of the business priorities that can truly benefit from AI or ML, and during implementation work through this list starting with the most feasible. By taking this approach, you're more likely to generate significant business value as you build a set of ML models that address specific business priorities. Conversely, if a data science or machine learning team builds great solutions for problems that are not aligned with business priorities, the models it builds are unlikely to be used at scale.

Find a business sponsor. We've also found that AI projects are more likely to succeed when they have a senior executive sponsor who will champion them with other leaders in your organization. Don't start an AI project without completing this critical step. Once you identify the right business priority, find the senior executive who owns it, and work with their team to get their buy-in and sponsorship. The more senior and committed, the better: if your CEO cares about AI, you can bet most of your employees will.

Step 2: Plan for explainable ML in models, dashboards, and displays

An important requirement from many business users is to have explanations from ML models. In many cases, it is not enough for an ML model to provide an outcome; it's also important to understand why.
Explanations help build trust in the model's predictions and offer useful factors on which business users can act. In regulated industries such as financial services and healthcare, regulations may require explanations of decisions. For example, in the United States the Equal Credit Opportunity Act (ECOA), enforced by the Federal Trade Commission (FTC), gives consumers the right to know why their loan applications were rejected; lenders have to tell the consumer the specific reasons. Regulators have been seeking more transparency around how ML predictions are made.

Choose new techniques for building explainable ML models. Until recently, most leading ML models offered little or no explanation for their predictions. However, recent advances provide explanations even for the most complex ML algorithms, such as deep learning. These include Local Interpretable Model-Agnostic Explanations (LIME), Anchors, Integrated Gradients, and Shapley-value methods such as SHAP. These techniques offer a unique opportunity to meet the needs of business users, even in regulated industries, with powerful ML models.

Use the right technique to meet your users' needs for model explanation. When you build ML models, be prepared to provide explanations both globally and locally. Global explanations identify the model's key drivers: the strongest predictors in the overall model. For example, the global explanation from a credit default prediction model will likely show that the top predictors of default include the number of previous defaults, number of missed payments, employment status, length of time with the bank, and length of time at the current address. In contrast, local explanations give the reasons why a specific customer is predicted to default, and those reasons vary from one customer to another. As you develop your ML models, build time into your plan to provide both global and local explanations, and gather user needs to help you choose the right technique. For example, many financial regulators do not allow the use of surrogate models for explanations, which rules out techniques like LIME; in that case, the Integrated Gradients technique would be better suited.

Also, be prepared to share the model's explanations wherever you show the model's results, whether on analytics dashboards, in embedded apps, or on other displays. This builds confidence in your ML models: business users are more likely to trust a model that provides intuitive explanations for its predictions, and they are more likely to act on predictions they trust. Similarly, with these explanations, your models are more likely to be accepted by regulators. A minimal sketch of computing global and local explanations follows.
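As a minimal sketch of producing global and local explanations with Shapley values, assuming a scikit-learn tree model and the open-source shap library (the dataset, feature names, and labeling rule are hypothetical):

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

# Hypothetical credit-default data; feature names are assumptions.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "previous_defaults": rng.poisson(0.3, 1000),
    "missed_payments": rng.poisson(1.0, 1000),
    "years_with_bank": rng.uniform(0, 20, 1000),
})
# Target: 1 if the customer defaulted, else 0 (synthetic rule for the sketch).
y = ((X["previous_defaults"] + X["missed_payments"]) > 2).astype(int)

# A regressor on the 0/1 target keeps the attribution arrays simple;
# classifiers return per-class attributions in some shap versions.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Shapley values: one attribution per feature per prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global explanation: mean absolute attribution ranks the model's key drivers.
global_importance = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
print(global_importance.sort_values(ascending=False))

# Local explanation: why is this specific customer scored as likely to default?
print(dict(zip(X.columns, shap_values[0])))
```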
Step 3: Broaden expertise in data analytics and data engineering within your organization

To realize the full potential of AI, you need good people with the right skills. This is a big challenge for many organizations given the acute shortage of ML engineers; many organizations struggle to hire them. You can address this skills shortage by upskilling your existing employees and by taking advantage of a new generation of products that simplify AI model development.

Upskill your existing employees. You don't always need PhD-level ML engineers to be successful with ML. They are great if your applications require research and development, for example if you are building driverless cars. But most typical applications of AI or ML do not require PhD experts. What you need instead are people who can apply existing algorithms, or even pre-trained ML models, to solve real-world problems. For example, there are powerful ML models for image recognition, such as ResNet50 or Inception V3, that are freely available in the open source community; you don't need an expert in computer vision to use them (see the sketch at the end of this step). Instead of searching for unicorns, start by upskilling your existing data engineers and business analysts, making sure they understand the basics of data science and statistics so they can use powerful ML algorithms correctly.

At Google we provide a wealth of ML training, from Qwiklabs to Coursera courses (e.g., Machine Learning with TensorFlow on Google Cloud Platform Specialization, or Machine Learning for Business Professionals). We also offer immersive training such as instructor-led courses and a four-week intensive machine learning program at the Advanced Solutions Lab. These courses are great avenues for training your business analysts, data engineers, and developers on machine learning.

Take advantage of products that simplify AI model development. Until recently, you needed sophisticated data scientists and machine learning engineers to build even the simplest of ML models, with deep knowledge of core ML algorithms in order to choose the right one for each problem. That is quickly changing. Powerful but simple products such as Cloud AutoML from Google Cloud make it possible for developers with limited machine learning knowledge to train high-quality models specific to their business needs. Similarly, BigQuery ML enables data analysts to build and operationalize machine learning models in minutes, directly in BigQuery, using simple SQL queries. With these two products, business analysts, data analysts, and data engineers can be trained to build powerful machine learning models with very little ML expertise.

Make AI a team sport. Machine learning teams should not exist in silos; they must be connected to analytics and data engineering teams. This facilitates the operationalization of models. Close collaboration between ML engineers and business analysts helps the ML team tie their models to important business priorities through the right KPIs, and allows business analysts to run experiments that demonstrate the business value of each ML model. Close collaboration between ML and data engineering teams also speeds up data preparation and model deployment in production. The results of ML models need to be displayed in applications or in analytics and operational dashboards, and data engineers are critical in developing the data pipelines needed to operationalize models and integrate them into business workflows for the right end users. It is tempting to think you have to hire a large team of ML engineers to be successful; in our experience, this is not always necessary or scalable. A more pragmatic approach to scale is the right combination of business analysts working closely with ML engineers and data engineers. A good recommendation is six business analysts and three data engineers for each ML engineer. More details on the recommended team structure are available in our Coursera course, Machine Learning for Business Professionals.
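To make concrete the claim above that pre-trained models are usable without deep computer-vision expertise, here is a minimal sketch of classifying an image with a freely available ResNet50 model in Keras (the image path is a placeholder):

```python
import numpy as np
from tensorflow.keras.applications.resnet50 import (
    ResNet50, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

# Load a ResNet50 model pre-trained on ImageNet; no training required.
model = ResNet50(weights="imagenet")

# Load and preprocess a local image (placeholder path).
img = image.load_img("example.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# Predict and print the top three ImageNet classes with confidences.
for _, label, prob in decode_predictions(model.predict(x), top=3)[0]:
    print(f"{label}: {prob:.2%}")
```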
Conclusion

As many organizations start to explore AI and machine learning, they are confronted with the question of how to realize the business potential of these powerful technologies. Based on our experience working with customers across industries, we recommend the three steps in this blog post to realize business value from AI.

To learn more about AI and machine learning on Google Cloud, visit our Cloud AI page.
Source: Google Cloud Platform