How AutoML Vision is helping companies create visual inspection solutions for manufacturing

We consistently hear from our customers that they need new ways to apply the latest technologies, such as AI, to improve efficiency. One area that AI has proven to be particularly beneficial is in helping to automate the visual quality control process for manufacturing customers. These customers tell us they want AI solutions that help them make quality control and inspections more efficient, to improve overall quality. But, there are many factors that make it difficult to prevent the distribution of damaged products. And the later a defect is caught in the manufacturing process, the more costly it is to fix or replace. Visual inspection helps manufacturing customers identify defects early and at a lower cost, and we’re seeing many innovative ways it’s helping our customers revolutionize their processes. Chip making made more efficientOne example of a customer using AI to transform their manufacturing process is GlobalFoundries, a leader in the semiconductor manufacturing industry. The company used AutoML Vision to build a visual inspection solution that can detect random defects in wafer map and scanning electron microscope (SEM) images, which are essential pieces for semiconductor manufacturing. A wafer map shows the performance of a semiconductor device, while an SEM’s images, which are created with a focused beam of electrons, can be used to closely examine a wafer.“Google Cloud AutoML Vision made it easy for our subject matter experts to quickly learn how to navigate and then train the AI,” Dr. DP Prakash, Global Head of AI XR Innovation at GlobalFoundries explained. “In our factory leading the initiative, 40% of the manual inspection workload has already been successfully shifted to the visual inspection solution we built based on AutoML.” GlobalFoundries’ visual inspection solution integrates AutoML Vision into their in-house content management system, and includes SEM image acquisition, image and sample defect management, defect prediction visualization, and product quality report generation among its features. AutoML Vision reads in the images of wafers and sample defects, and trains customized models to detect these defects. The trained model will be used to detect defects in new incoming product images. When evaluating technologies, GlobalFoundries was impressed that AutoML Vision could successfully classify 80% of the images based on a limited amount of training data in the initial pass itself. This fast path to high accuracy let GlobalFoundries quickly move to production, start realizing benefits, and scale up. To capture and control process defects in semiconductor factories, GlobalFoundries deployed hundreds of models in its factories. AutoML Vision’s data and model management features help refresh the data continuously and efficiently, giving the company visibility into all those models. GlobalFoundries also achieved similar success in their lithography process—where a pattern is transferred onto a chip. In the conventional method, due to the practical constraints of time and cost in high volume manufacturing environments, only a sample of the wafers produced are typically inspected for systematic defect patterns. The new visual inspection solution developed with AutoML, however, increases the validation rate to 95% of wafers, reducing waste, and improving quality and customer satisfaction.Revolutionizing manufacturing processesSiemens is another company using AutoML Vision to change the way they manage the inspection process. 
“Siemens leveraged Google’s domain expertise in AI technology to create Factory AI service, which revolutionized our manufacturing with automated visual inspections,” said Tigran Bagramyan, Intrapreneur and Data Scientist, Siemens. “We use AutoML Vision to quickly build prototypes and push them to production on the factory floor. AutoML Vision helps us concentrate on use cases and customer value rather than complexity of AI development.” Meanwhile, LG CNS leverages AutoML Vision Edge to create manufacturing intelligence solutions that detect defects in everything from LCD screens and optical films, to automotive fabrics on the assembly line. AutoML Vision Edge improved defect detection accuracy by 6% and reduced the time to design and train their ML models from seven days to just a few hours. AutoML Vision lets customers train high-quality defect detection models, deploy models, and run inference on production lines. We look forward to supporting customers as they continue to find innovative new ways to deploy AI.To learn more about how you can use our vision products for visual inspection and other use cases, check out Google Cloud Vision AI.
Quelle: Google Cloud Platform

A year of bringing AI to the edge

This post is co-authored by Anny Dow, Product Marketing Manager, Azure Cognitive Services.

In an age where low-latency and data security can be the lifeblood of an organization, containers make it possible for enterprises to meet these needs when harnessing artificial intelligence (AI).

Since introducing Azure Cognitive Services in containers this time last year, businesses across industries have unlocked new productivity gains and insights. The combination of both the most comprehensive set of domain-specific AI services in the market and containers enables enterprises to apply AI to more scenarios with Azure than with any other major cloud provider. Organizations ranging from healthcare to financial services have transformed their processes and customer experiences as a result.

 

These are some of the highlights from the past year:

Employing anomaly detection for predictive maintenance

Airbus Defense and Space, one of the world’s largest aerospace and defense companies, has tested Azure Cognitive Services in containers for developing a proof of concept in predictive maintenance. The company runs Anomaly Detector for immediately spotting unusual behavior in voltage levels to mitigate unexpected downtime. By employing advanced anomaly detection in containers without further burdening the data scientist team, Airbus can scale this critical capability across the business globally.

“Innovation has always been a driving force at Airbus. Using Anomaly Detector, an Azure Cognitive Service, we can solve some aircraft predictive maintenance use cases more easily.”  —Peter Weckesser, Digital Transformation Officer, Airbus

Automating data extraction for highly-regulated businesses

As enterprises grow, they begin to acquire thousands of hours of repetitive but critically important work every week. High-value domain specialists spend too much of their time on this. Today, innovative organizations use robotic process automation (RPA) to help manage, scale, and accelerate processes, and in doing so free people to create more value.

Automation Anywhere, a leader in robotic process automation, partners with these companies eager to streamline operations by applying AI. IQ Bot, their unique RPA software, automates data extraction from documents of various types. By deploying Cognitive Services in containers, Automation Anywhere can now handle documents on-premises and at the edge for highly regulated industries:

“Azure Cognitive Services in containers gives us the headroom to scale, both on-premises and in the cloud, especially for verticals such as insurance, finance, and health care where there are millions of documents to process.” —Prince Kohli, Chief Technology Officer for Products and Engineering, Automation Anywhere

For more about Automation Anywhere's partnership with Microsoft to democratize AI for organizations, check out this blog post.

Delighting customers and employees with an intelligent virtual agent

Lowell, one of the largest credit management services in Europe, wants credit to work better for everybody. So, it works hard to make every consumer interaction as painless as possible with the AI. Partnering with Crayon, a global leader in cloud services and solutions, Lowell set out to solve the outdated processes that kept the company’s highly trained credit counselors too busy with routine inquiries and created friction in the customer experience. Lowell turned to Cognitive Services to create an AI-enabled virtual agent that now handles 40 percent of all inquiries—making it easier for service agents to deliver greater value to consumers and better outcomes for Lowell clients.

With GDPR requirements, chatbots weren’t an option for many businesses before containers became available. Now companies like Lowell can ensure the data handling meets stringent compliance standards while running Cognitive Services in containers. As Carl Udvang, Product Manager at Lowell explains:

"By taking advantage of container support in Cognitive Services, we built a bot that safeguards consumer information, analyzes it, and compares it to case studies about defaulted payments to find the solutions that work for each individual."

One-to-one customer care at scale in data-sensitive environments has become easier to achieve.

Empowering disaster relief organizations on the ground

A few years ago, there was a major Ebola outbreak in Liberia. A team from USAID was sent to help mitigate the crisis. Their first task on the ground was to find and categorize the information such as the state of healthcare facilities, wifi networks, and population density centers.  They tracked this information manually and had to extract insights based on a complex corpus of data to determine the best course of action.

With the rugged versions of Azure Stack Edge, teams responding to such crises can carry a device running Cognitive Services in their backpack. They can upload unstructured data like maps, images, pictures of documents and then extract content, translate, draw relationships among entities, and apply a search layer. With these cloud AI capabilities available offline, at their fingertips, response teams can find the information they need in a matter of moments. In Satya’s Ignite 2019 keynote, Dean Paron, Partner Director of Azure Storage and Edge, walks us through how Cognitive Services in Azure Stack Edge can be applied in such disaster relief scenarios (starting at 27:07): 

Transforming customer support with call center analytics

Call centers are a critical customer touchpoint for many businesses, and being able to derive insights from customer calls is key to improving customer support. With Cognitive Services, businesses can transcribe calls with Speech to Text, analyze sentiment in real-time with Text Analytics, and develop a virtual agent to respond to questions with Text to Speech. However, in highly regulated industries, businesses are typically prohibited from running AI services in the cloud due to policies against uploading, processing, and storing any data in public cloud environments. This is especially true for financial institutions.

A leading bank in Europe addressed regulatory requirements and brought the latest transcription technology to their own on-premises environment by deploying Cognitive Services in containers. Through transcribing calls, customer service agents could not only get real-time feedback on customer sentiment and call effectiveness, but also batch process data to identify broad themes and unlock deeper insights on millions of hours of audio. Using containers also gave them flexibility to integrate with their own custom workflows and scale throughput at low latency.

What's next?

These stories touch on just a handful of the organizations leading innovation by bringing AI to where data lives. As running AI anywhere becomes more mainstream, the opportunities for empowering people and organizations will only be limited by the imagination.

Visit the container support page to get started with containers today.

For a deeper dive into these stories, visit the following

Automation Anywhere case study
Automation Anywhere’s partnership with Microsoft
Lowell case study
Azure Stack Edge update from Microsoft Ignite 2019
Cognitive Services in Azure Stack Edge demo (at 27:07)

Quelle: Azure

You can cook turkey in a toaster oven, but you don't have to

When I was in college and couldn’t make it home for the Thanksgiving holiday, I would get together with other students in the same situation and do the next best thing: cook a traditional Thanksgiving feast of roast turkey, mashed potatoes and gravy, stuffing, and green beans by ourselves. In a dorm room. Using the kitchen equipment we had available: a toaster oven and a popcorn popper. The resulting dinner wasn’t terrible, but it didn’t hold a candle to the meal my family was enjoying back home, made with the benefit of an oven, high-BTU range, food processor, standing mixer—you get the idea.Software development teams are sometimes in a similar situation. They need to build something new and have a few tools, so they build their application using what they have. Like our dorm-room Thanksgiving dinner, this can work, but it is probably not a good experience and may not get the best result.Today, with cloud computing, software development teams have a lot more resources available to them. But sometimes teams move to the cloud but keep using the same old tools, just on a larger scale. That’s like moving from a toaster oven to a wall of large ovens, but not looking into how things like convection or microwave ovens, broilers, sous-vide cooking, instant pots, griddles, breadmakers, or woks can help you make a meal.In short, if you’re an application developer and you’ve moved to the cloud, you should really explore all the new kinds of tools you can use to run your code, beyond configuring and managing virtual machines.Like the number of side dishes on my parents’ holiday table, the number of Google Cloud Platform products you might use can be overwhelming. Here are a few you might want to look at first:App Engine Standard Environment is a serverless platform for web applications. You bring your own application code and let the platform handle the web server itself, along with scaling and monitoring. It can even scale to zero, so if there are idle periods without traffic, you won’t be paying for computer time you aren’t using.Some of the code you need might not be an application, but just a handler to deal with events as they happen, such as new data arriving or some operation being ready to start. Cloud Functions is another serverless platform that runs code written in supported languages in response to many kinds of events. Cloud Run can do similar tasks for you, with fewer restrictions on what languages and binaries you can run, but requiring a bit more management on your part.Do you need regular housekeeping tasks performed, such as generating daily reports or deleting stale data? Instead of running a virtual machine just so you can trigger a cron job, you can have Cloud Scheduler do the triggering for you. If you want to get really fancy (like your aunt’s bourbon pecan pie), you can implement it with another serverless offering such as Cloud Functions, at specified intervals.Instead of installing and managing a relational database server, use Cloud SQL instead. It’s reliable and secure, and handles backups and replication for you.Maybe you don’t need (or just don’t want to use) a relational database. Cloud Firestore is a serverless NoSQL database that’s easy to use and that will scale up or down as needed. It also replicates your data across multiple regions for extremely high availability.After Thanksgiving dinner, you may feel like a blob. Or you may just need to store blobs of data, such as files. But you don’t want to use a local filesystem, you want replicated and backed up storage. 
Some teams put these blobs into general purpose databases, but that’s not a good fit and can be expensive. Cloud Storage is designed to store and retrieve blob-format data on demand, affordably and reliably.These products are great starting points in rethinking what kind of infrastructure your application could be built on, once you have adopted cloud computing. You might find they give you a better development experience and great outcomes relative to launching and managing more virtual machines. Now if you’ll excuse me, dinner’s ready!
Quelle: Google Cloud Platform

Stackdriver Logging comes to Cloud Code in Visual Studio Code

A big part of troubleshooting your code is inspecting the logs. At Google Cloud, we offer Cloud Code, a plugin to popular integrated development environments (IDEs) to help you write, deploy, and debug cloud-native applications quickly and easily. Stackdriver Logging, meanwhile, is the go-to tool for all Google Cloud Platform (GCP) logs, providing advanced searching and filtering as well as detailed information about them. But deciphering logs can be tedious. Even worse, you need to leave your IDE to access Stackdriver Logging. Now, with the Cloud Code plugin, you can access your Stackdriver logs in the Visual Studio Code IDE directly! The new Cloud Code logs viewer helps you simplify and streamline the diagnostics process with three new features:Integration with Stackdriver Logging A customizable logs viewerKubernetes-specific filtering  View Stackdriver logs in VS CodeWith the new Cloud Code logs viewer you can access your Stackdriver logs in VS Code directly. Simply open the logs viewer and Cloud Code displays all your Stackdriver logs. You can edit the filters just like you do in Stackdriver, and if you would like to see more detailed information you can easily return to Stackdriver Logging from the IDE with your filters in place.In contrast to kubectl logs, Stackdriver logs are natively integrated with Google Cloud. Learn more about Stackdriver Logging here. Improved log exploration The new logs viewer provides a structured logs viewing experience that has several new features including: severity filters, colorized output, streaming capabilities, and timezone conversions. The new logs viewer presents an organized view of logs and lets you filter and search your logs from within VS Code. Think of the logs viewer as your first stop for all of your logs without having to leave your IDE.  The logs viewer will supports kubectl logs.Kubernetes-specific filtering Kubernetes logs are complex. The new logs viewer lets you filter on Kubernetes-specific elements including: namespace, deployment, pod, container, and keyword. This allows you to easily see logs for specific pod or all the logs from a given deployment, helping you so you can navigate complex logs more effectively.In addition to manual filtering, you can access the logs viewer from the Cloud Code resource browser and use the tree view to filter your logs. This way, you can locate a resource with the context around it. The tree view shows status and context information that can help you find important logs such as unhealthy or orphaned pods.Get started Accessing Stackdriver Logs in VS Code with Cloud Code brings your logs closer to your code, with advanced filtering options that help you stay focused and in your IDE. To learn more, check out this guide to getting started with the Log Viewer. If you are new to Cloud Code or Stackdriver Logging, start by learning how to install Cloud Code and set up Stackdriver. If you are already using Cloud Code and Stackdriver Logging, there are no prerequisites to get started—just open the new logs viewer with Cloud Code and you’re ready to go!
Quelle: Google Cloud Platform

Multi-cluster Management with GitOps

In this blog post we are going to introduce Multi-cluster Management patterns with GitOps and how you can implement these patterns on OpenShift.
If you’re interested in diving into an interactive tutorial, try this link.
In the introductory blog post to GitOps we described some of the use cases that we can solve with GitOps on OpenShift. In
today’s blog post we are going to describe how we can leverage GitOps patterns to perform tasks on multiple clusters.
We are going to explore the following use cases:

Deploy an application to multiple clusters
Customize the application by cluster
Perform a canary deployment

During this blog post we are not going to cover advanced GitOps workflows, instead we are going to show you basic capabilities around
the topic. More advanced posts around GitOps workflows will follow.
Environment

Two OpenShift 4.1 clusters, one for preproduction (context: pre) environment and one for production (context: pro) environment.
ArgoCD used as the GitOps tool
Demo files here

Deploy an Application to Multiple Clusters
In this first example, we are going to deploy our base application to both clusters.
As we are using ArgoCD as our GitOps tool an ArgoCD Server is already deployed in our environment as well as the argocd cli tool.
Our application definition can be found here

Ensure we have access to both clusters
$ oc –context pre get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-128-17.ap-southeast-1.compute.internal Ready master 19h v1.13.4+ab8449285
ip-10-0-136-41.ap-southeast-1.compute.internal Ready worker 19h v1.13.4+ab8449285
ip-10-0-151-90.ap-southeast-1.compute.internal Ready worker 19h v1.13.4+ab8449285

$ oc –context pro get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-140-239.ap-southeast-1.compute.internal Ready master 19h v1.13.4+ab8449285
ip-10-0-142-57.ap-southeast-1.compute.internal Ready worker 19h v1.13.4+ab8449285
ip-10-0-170-168.ap-southeast-1.compute.internal Ready worker 19h v1.13.4+ab8449285

Ensure we have our clusters registered in ArgoCD
$ argocd cluster list
SERVER NAME STATUS MESSAGE
https://api.openshift.pre.example.com:6443 pre Successful
https://api.openshift.pro.example.com:6443 pro Successful
https://kubernetes.default.svc Successful

Add our GitOps repository to ArgoCD
$ argocd repo add https://github.com/mvazquezc/gitops-demo.git

repository ‘https://github.com/mvazquezc/gitops-demo.git’ added

Deploy our application to preproduction and production clusters
# Create the application on Preproduction cluster
$ argocd app create –project default –name pre-reversewords –repo https://github.com/mvazquezc/gitops-demo.git –path reversewords_app/base –dest-server https://api.openshift.pre.example.com:6443 –dest-namespace reverse-words –revision pre

application ‘pre-reversewords’ created

# Create the application on Production cluster
$ argocd app create –project default –name pro-reversewords –repo https://github.com/mvazquezc/gitops-demo.git –path reversewords_app/base –dest-server https://api.openshift.pro.example.com:6443 –dest-namespace reverse-words –revision pro

application ‘pro-reversewords’ created

4.1 Above commands create a new ArgoCD Application named pre-reversewords and pro-reversewords that will be deployed on preproduction and production clusters in reverse-words namespace using the code from pre/pro branch located under path reversewords_app/base

As we haven’t defined a sync policy, we need to force ArgoCD to sync the Git repo content on our pre and pro clusters
$ argocd app sync pre-reversewords
$ argocd app sync pro-reversewords

After a few seconds we will see our application deployed on pre and pro clusters
# Get application status on preproduction cluster
$ argocd app get pre-reversewords

Name: pre-reversewords
Project: default
Server: https://api.openshift.pre.example.com:6443
Namespace: reverse-words
URL: https://argocd.apps.example.com/applications/pre-reversewords
Repo: https://github.com/mvazquezc/gitops-demo.git
Target: pre
Path: reversewords_app/base
Sync Policy: <none>
Sync Status: Synced to pre (306ce10)
Health Status: Healthy

GROUP KIND NAMESPACE NAME STATUS HEALTH
Namespace reverse-words Synced
Service reverse-words reverse-words Synced Healthy
apps Deployment reverse-words reverse-words Synced Healthy

# Get application status on production cluster
$ argocd app get pro-reversewords

Name: pro-reversewords
Project: default
Server: https://api.openshift.pro.example.com:6443
Namespace: reverse-words
URL: https://argocd.apps.example.com/applications/pro-reversewords
Repo: https://github.com/mvazquezc/gitops-demo.git
Target: pro
Path: reversewords_app/base
Sync Policy: <none>
Sync Status: Synced to pro (98bbfb1)
Health Status: Healthy

GROUP KIND NAMESPACE NAME STATUS HEALTH
Namespace reverse-words Synced
Service reverse-words reverse-words Synced Healthy
apps Deployment reverse-words reverse-words Synced Healthy

Our application defines a service for accessing its API, let’s try to access and get the release name for both clusters
# Get the preproduction cluster LB hostname
$ PRE_LB_HOSTNAME=$(oc –context pre -n reverse-words get svc reverse-words -o jsonpath='{.status.loadBalancer.ingress[*].hostname}’)
# Get the production cluster LB hostname
$ PRO_LB_HOSTNAME=$(oc –context pro -n reverse-words get svc reverse-words -o jsonpath='{.status.loadBalancer.ingress[*].hostname}’)
# Access the preproduccion LB and get the release name
$ curl http://${PRE_LB_HOSTNAME}:8080

Reverse Words Release: Base release. App version: v0.0.2
# Access the production LB and get the release name
$ curl http://${PRO_LB_HOSTNAME}:8080

Reverse Words Release: Base release. App version: v0.0.2

As you have seen, we have been able to deploy to multiple clusters from a single tool (ArgoCD). In the next section we are going to explore how we can override some configurations depending on the destination cluster by using embedded Kustomize on ArgoCD.
Customize the Application by Cluster
In this second example, we are going to modify the application behavior depending on which cluster is deployed.
We want the application to have a release name preproduction or production depending on which environment the application gets deployed on.
ArgoCD leverages Kustomize under the hood to deal with configuration overrides across environments.
The way we organize our application in Git is as follows:

The Git Repository has two branches, pre which has manifests for preproduction env, and pro for production env.

Application overrides can be found in their respective folders and branch:

Preproduction cluster overrides
Production cluster overrides

We placed the application overrides in the Git repository, there is only one override that configures a release name different than the default based on the cluster the application gets deployed
Deploy our Kustomized application to preproduction and production clusters
# Create the application on Preproduction cluster
argocd app create –project default –name pre-kustomize-reversewords –repo https://github.com/mvazquezc/gitops-demo.git –path reversewords_app/overlays/pre –dest-server https://api.openshift.pre.example.com:6443 –dest-namespace reverse-words –revision pre –sync-policy automated

application ‘pre-kustomize-reversewords’ created

# Create the application on Production cluster
argocd app create –project default –name pro-kustomize-reversewords –repo https://github.com/mvazquezc/gitops-demo.git –path reversewords_app/overlays/pro –dest-server https://api.openshift.pro.example.com:6443 –dest-namespace reverse-words –revision pro –sync-policy automated

application ‘pro-kustomize-reversewords’ created

2.1 Above commands create a new ArgoCD Application named pre-kustomize-reversewords and pro-kustomize-reversewords that will be deployed on preproduction and production clusters in reverse-words namespace using the code from pre and pro branch respectively. Each application will get the code from a different folder in our overlays folder, that way the application will be customized depending on which environment it gets deployed on. Note that only the modified values are stored in the overlay folder, the base application is still deployed from the base folder, so we don’t end up having duplicate application files.

As we have defined an automated sync policy we don’t need to force the sync, ArgoCD will start synching our application once it gets created. On top of that, if changes were made to the application repository, ArgoCD would re-deploy the changes for us.

After a few seconds we will see our application deployed on pre cluster
# Get application status on preproduction cluster
$ argocd app get pre-kustomize-reversewords

Name: pre-kustomize-reversewords
Project: default
Server: https://api.openshift.pre.example.com:6443
Namespace: reverse-words
URL: https://argocd.apps.example.com/applications/pre-kustomize-reversewords
Repo: https://github.com/mvazquezc/gitops-demo.git
Target: pre
Path: reversewords_app/overlays/pre
Sync Policy: Automated
Sync Status: Synced to pre (306ce10)
Health Status: Healthy

GROUP KIND NAMESPACE NAME STATUS HEALTH
Namespace reverse-words Synced
Service reverse-words reverse-words Synced Healthy
apps Deployment reverse-words reverse-words Synced Healthy

# Get application status on production cluster
$ argocd app get pro-kustomize-reversewords

Name: pro-kustomize-reversewords
Project: default
Server: https://api.openshift.pro.example.com:6443
Namespace: reverse-words
URL: https://argocd.apps.example.com/applications/pro-kustomize-reversewords
Repo: https://github.com/mvazquezc/gitops-demo.git
Target: pro
Path: reversewords_app/overlays/pro
Sync Policy: Automated
Sync Status: Synced to pro (98bbfb1)
Health Status: Healthy

GROUP KIND NAMESPACE NAME STATUS HEALTH
Namespace reverse-words Synced
Service reverse-words reverse-words Synced Healthy
apps Deployment reverse-words reverse-words Synced Healthy

Our application defines a service for accessing its API, let’s try to access and get the release name for both clusters
# Get the preproduction cluster LB hostname
$ PRE_LB_HOSTNAME=$(oc –context pre -n reverse-words get svc reverse-words -o jsonpath='{.status.loadBalancer.ingress[*].hostname}’)
# Get the production cluster LB hostname
$ PRO_LB_HOSTNAME=$(oc –context pro -n reverse-words get svc reverse-words -o jsonpath='{.status.loadBalancer.ingress[*].hostname}’)
# Access the preproduccion LB and get the release name
$ curl http://${PRE_LB_HOSTNAME}:8080

Reverse Words Release: Preproduction release. App version: v0.0.2
# Access the production LB and get the release name
$ curl http://${PRO_LB_HOSTNAME}:8080

Reverse Words Release: Production release. App version: v0.0.2

As you have seen, we have been able to deploy to multiple clusters and use custom configurations depending on which cluster we are using to deploy the application. In the next section we are going to explore how we can use GitOps to perform a basic canary deployment.
Perform a Canary Deployment
A common practice is to deploy a new version of an application to a small subset of the available clusters, and once the application has been proven to work as expected, then it gets promoted to the rest of the clusters.
We are going to use the Kustomized apps that we created before, let’s verify which versions are we running:
# Get the preproduction cluster LB hostname
$ PRE_LB_HOSTNAME=$(oc –context pre -n reverse-words get svc reverse-words -o jsonpath='{.status.loadBalancer.ingress[*].hostname}’)
# Get the production cluster LB hostname
$ PRO_LB_HOSTNAME=$(oc –context pro -n reverse-words get svc reverse-words -o jsonpath='{.status.loadBalancer.ingress[*].hostname}’)
# Access the preproduccion LB and get the release name
$ curl http://${PRE_LB_HOSTNAME}:8080

Reverse Words Release: Preproduction release. App version: v0.0.2
# Access the production LB and get the release name
$ curl http://${PRO_LB_HOSTNAME}:8080

Reverse Words Release: Production release. App version: v0.0.2

As you can see the current deployed version is v0.0.2, let’s perform a canary deployment to v0.0.3.

We need to update the container image that will be used on preproduction cluster, we are going to modify the Deployment overlay as follows:
# reversewords_app/overlays/pre/deployment.yaml in git branch pre
apiVersion: apps/v1
kind: Deployment
metadata:
name: reverse-words
labels:
app: reverse-words
spec:
template:
spec:
containers:
– name: reverse-words
image: quay.io/mavazque/reversewords:v0.0.3
env:
– name: RELEASE
value: “Preproduction release”
– $patch: replace

We send our changes to the git repository
git add reversewords_app/overlays/pre/deployment.yaml
git commit -m “Updated preproduction image version from v0.0.2 to v0.0.3″
git push origin pre

ArgoCD will detect the update in our code and will deploy the new changes, now we should see the version v0.0.3 deployed on pre and the version v0.0.2 deployed on pro.
# Access the preproduccion LB and get the release name
$ curl http://${PRE_LB_HOSTNAME}:8080

Reverse Words Release: Preproduction release. App version: v0.0.3
# Access the production LB and get the release name
$ curl http://${PRO_LB_HOSTNAME}:8080

Reverse Words Release: Production release. App version: v0.0.2

Let’s verify that our application is working as expected
$ curl http://${PRE_LB_HOSTNAME}:8080 -X POST -d ‘{“word”:”PALC”}’

{“reverse_word”:”CLAP”}

The application is working fine, now it’s time to update production to v0.0.3 as well. Let’s update the overlay:
# reversewords_app/overlays/pro/deployment.yaml in git branch pro
apiVersion: apps/v1
kind: Deployment
metadata:
name: reverse-words
labels:
app: reverse-words
spec:
template:
spec:
containers:
– name: reverse-words
image: quay.io/mavazque/reversewords:v0.0.3
env:
– name: RELEASE
value: “Production release”
– $patch: replace

Send the changes to Git
git add reversewords_app/overlays/pro/deployment.yaml
git commit -m “Updated production image version from v0.0.2 to v0.0.3″
git push origin pro

Get versions in use
# Access the preproduccion LB and get the release name
$ curl http://${PRE_LB_HOSTNAME}:8080

Reverse Words Release: Preproduction release. App version: v0.0.3
# Access the production LB and get the release name
$ curl http://${PRO_LB_HOSTNAME}:8080

Reverse Words Release: Production release. App version: v0.0.3

We should now update the base deployment so newer deployments use v0.0.3 version

Final Thoughts

We have updated our application by modifying the application overlays in Git, this is a very basic scenario, advanced scenarios may include CI tests, multiple approvals, etc.

We have pushed our code to the pre/pro branches directly, that is not a good practice, in a real life scenario a more advanced workflow should be used. We will discuss GitOps workflows in future blog posts.

We have ArgoCD Cli, ArgoCD has a WebUI where you can do almost the same operations as with the cli, on top of that you can visualize your applications and its components.

Next Steps
In future blog posts we will talk about multiple topics related to GitOps such as:

GitOps Workflows in Production
Disaster Recovery with GitOps
Moving to GitOps

The post Multi-cluster Management with GitOps appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift