OpenShift Partner Reference Architectures

Red Hat’s Partners play a key role in developing customer relationships, understanding customer needs, and providing comprehensive joint solutions. As customers use Red Hat technologies to help solve increasingly complex business issues, partners provide reliable guidance, technical information, and even engineered integrations to assist customers in making sound technology decisions. For this post, the focus […]
The post OpenShift Partner Reference Architectures appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Running Redis on GCP: four deployment scenarios

Redis is one of the most popular open source in-memory data stores, used as a database, cache, and message broker. This post covers the major deployment scenarios for Redis on Google Cloud Platform (GCP), walking through the pros and cons of each scenario along with the step-by-step approach, limitations, and caveats for each.

Deployment options for running Redis on GCP

There are four typical deployment scenarios we see for running Redis on GCP: Cloud Memorystore for Redis, Redis Labs Cloud and VPC, Redis on Google Kubernetes Engine (GKE), and Redis on Google Compute Engine. We’ll go through the considerations for each of them. It’s also important to have backups for production databases, so we’ll discuss backup and restore considerations for each deployment type.

Cloud Memorystore for Redis

Cloud Memorystore for Redis, part of GCP, is a way to use Redis and get all its benefits without the cost of managing it yourself. If you need data sharding, you can deploy open source Redis proxies such as Twemproxy and Codis with multiple Cloud Memorystore for Redis instances for scale until Redis Cluster becomes available in GCP.

Twemproxy

Twemproxy, also known as nutcracker, is a fast and lightweight open source Redis proxy (Apache License) developed by Twitter. Its purpose is to provide proxying and data sharding for Redis and to reduce the number of client connections to the back-end Redis instances. You can set up multiple Redis instances behind Twemproxy; clients talk only to the proxy and don’t need to know the details of the back-end Redis instances, which simplifies management. You can also run multiple Twemproxy instances in front of the same group of back-end Redis servers to avoid a single point of failure.

Note that Twemproxy does not support all Redis commands, such as pub/sub and transaction commands. In addition, it’s not convenient to add or remove back-end Redis nodes with Twemproxy.
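A Twemproxy deployment of this kind is driven by a small YAML config. Here is a minimal sketch; the pool name, listen port, and backend addresses are illustrative assumptions:

```shell
# Write a minimal twemproxy (nutcracker) config that shards keys across
# three Redis backends. IP addresses are placeholders.
cat > nutcracker.yml <<'EOF'
redis_pool:
  listen: 0.0.0.0:26379        # clients connect here instead of to Redis directly
  redis: true                  # speak the Redis protocol to the backends
  hash: fnv1a_64               # key hashing function
  distribution: ketama         # consistent hashing across shards
  auto_eject_hosts: false
  timeout: 400
  servers:                     # ip:port:weight name
   - 10.0.0.11:6379:1 shard1
   - 10.0.0.12:6379:1 shard2
   - 10.0.0.13:6379:1 shard3
EOF
# nutcracker -c nutcracker.yml   # start the proxy (requires twemproxy installed)
```

Clients then connect to port 26379 on the proxy host and see a single logical Redis endpoint.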
Twemproxy must be restarted for configuration changes to take effect, and data isn’t rebalanced automatically after adding or removing Redis nodes.

Codis

Codis is an open source (MIT License) proxy-based, high-performance Redis cluster tool developed by CodisLabs. Codis offers another Redis data sharding proxy option, addressing Twemproxy’s horizontal scalability limitation and lack of an administration dashboard. It’s fully compatible with Twemproxy and includes a handy tool called redis-port that handles migration from Twemproxy to Codis.

Pros of Cloud Memorystore for Redis

It’s fully managed. Google fully manages administrative tasks for Redis instances such as hardware provisioning, setup and configuration management, software patching, failover, and monitoring, which otherwise require considerable effort from service owners who just want to use Redis as a memory store or a cache.
It’s highly available. We provide a standard Cloud Memorystore tier, in which we fully manage replication and failover to provide high availability. In addition, you can keep the replica in a different zone.
It’s scalable and performs well. You can easily scale the memory provisioned for Redis instances. We also provide high network throughput per instance, which can be scaled on demand.
It’s updated and secure. We provide network isolation so that access to Redis is restricted to within a network via a private IP. OSS compatibility is Redis 3.2.11, as of late 2018.

Cons of Cloud Memorystore for Redis

Some features are not yet available: Redis Cluster, backup and restore.
It lacks replica options. Cloud Memorystore for Redis provides a master/replica configuration in the standard tier, with master and replica spread across zones, but there is only one replica per instance.
There are some product constraints you should note.

You can deploy OSS proxies such as Twemproxy and Codis with multiple Cloud Memorystore for Redis instances for scalability until Redis Cluster is ready in GCP.
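The how-to steps that follow rely on a handful of gcloud commands. As a condensed sketch (instance names, region, group size, and the container image path are assumptions, and older SDKs may require `gcloud beta redis`):

```shell
# 1. Nine Memorystore for Redis shards in asia-northeast1
for i in $(seq 1 9); do
  gcloud redis instances create "redis-shard-${i}" \
    --size=1 --region=asia-northeast1
done

# 2. Managed instance group running a Twemproxy container image
gcloud compute instance-templates create-with-container twemproxy-tmpl \
  --container-image=gcr.io/your-project/twemproxy:latest
gcloud compute instance-groups managed create twemproxy-mig \
  --template=twemproxy-tmpl --size=2 --zone=asia-northeast1-a

# 3. Internal TCP load balancer in front of the proxies
gcloud compute health-checks create tcp twemproxy-hc --port=26379
gcloud compute backend-services create twemproxy-be \
  --load-balancing-scheme=internal --protocol=tcp \
  --health-checks=twemproxy-hc --region=asia-northeast1
gcloud compute backend-services add-backend twemproxy-be \
  --instance-group=twemproxy-mig --instance-group-zone=asia-northeast1-a \
  --region=asia-northeast1
gcloud compute forwarding-rules create twemproxy-ilb \
  --load-balancing-scheme=internal --ports=26379 \
  --backend-service=twemproxy-be --region=asia-northeast1

# 4. Allow load-balanced traffic to reach the proxy instances
gcloud compute firewall-rules create allow-ilb-to-twemproxy \
  --allow=tcp:26379 --source-ranges=10.0.0.0/8
```

These commands require an authenticated gcloud environment with the relevant APIs enabled, so treat them as a starting point rather than a copy-paste recipe.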
Note the caveat that basic-tier Cloud Memorystore for Redis instances are subject to a cold restart and full data flush during routine maintenance, scaling, or an instance failure. Choose the standard tier to prevent data loss during those events.

How to get started

Check out our Cloud Memorystore for Redis guide for the basics. Here’s how to configure multiple Cloud Memorystore for Redis instances using Twemproxy with an internal load balancer in front of them:

1. Create nine new Cloud Memorystore for Redis instances in the asia-northeast1 region.
2. Prepare a Twemproxy container for deployment.
3. Build a Twemproxy Docker image (replace <your-project> with your GCP project ID). Note that a VM instance starts a container with the --network="host" flag of the docker run command by default.
4. Create an instance template based on the Docker image (replace <your-project> with your GCP project ID).
5. Create a managed instance group using the template.
6. Create a health check for the internal load balancer.
7. Create a back-end service for the internal load balancer.
8. Add instance groups to the back-end service.
9. Create a forwarding rule for the internal load balancer.
10. Configure firewall rules to allow the internal load balancer access to the Twemproxy instances.

Redis Labs Cloud and VPC

To get managed Redis Clusters, you can use a partner solution from Redis Labs, which has two managed-service options: Redis Enterprise Cloud (hosted) and Redis Enterprise VPC (managed). Redis Enterprise Cloud is a fully managed, hosted Redis Cluster on GCP. Redis Enterprise VPC is a fully managed Redis Cluster in your virtual private cloud (VPC) on GCP.

Redis Labs Cloud and VPC protect your database by maintaining automated daily and on-demand backups to remote storage. You can back up your Redis Enterprise Cloud/VPC databases to Cloud Storage; find instructions here. You can also import a data set from an RDB file using Redis Labs Cloud with VPC.
Check out the official documentation on the Redis Labs site for instructions.

Pros of Redis Labs Cloud and VPC

It’s fully managed. Redis Labs manages all administrative tasks.
It’s highly available. These Redis Labs products include an SLA with 99.99% availability.
It scales and performs well. New instances are added to your cluster automatically according to your actual data set size, without any interruption to your applications.
It’s fully supported. Redis Labs supports Redis itself.

Cons of Redis Labs Cloud and VPC

There’s a cost consideration. You’ll have to pay separately for Redis Labs’ service.

How to get started

Contact Redis Labs to discuss further steps.

Redis on GKE

If you want to use Redis Cluster, or want to read from replicas, Redis on GKE is an option. Here’s what you should know.

Pros of Redis on GKE

You have full control of the Redis instances. You can configure, manage, and operate them as you like.
You can use Redis Cluster.
You can read from replicas.

Cons of Redis on GKE

It’s not managed. You’ll need to handle administrative tasks such as hardware provisioning, setup, configuration management, software patching, failover, backup and restore, and monitoring.
Availability, scalability, and performance vary depending on how you architect it. Running a standalone Redis instance on GKE is not ideal for production because it would be a single point of failure, so consider configuring master/slave replication with Sentinel for redundant nodes, or set up a cluster.
There’s a steeper learning curve. This option requires you to learn Redis itself in more detail. Kubernetes also takes time to learn, and may introduce additional complexity to your design and operations.

When using Redis on GKE, be aware of GKE cluster node maintenance; cluster nodes need to be upgraded once every three months or so.
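One way to soften such maintenance events is a PodDisruptionBudget, paired with podAntiAffinity so only one Redis pod lands on each node. A sketch, assuming your Redis pods carry an `app: redis` label and a 2018-era `policy/v1beta1` API:

```shell
# Limit voluntary disruptions: drain at most one Redis pod at a time.
cat > redis-pdb.yaml <<'EOF'
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: redis-pdb
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: redis
EOF

# Pod template fragment: schedule at most one Redis pod per node,
# avoiding port conflicts when running with host networking.
cat > redis-antiaffinity.yaml <<'EOF'
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: redis
      topologyKey: kubernetes.io/hostname
EOF
# kubectl apply -f redis-pdb.yaml   # requires an active GKE cluster
```

The antiaffinity fragment belongs inside your Redis pod spec; the label selector must match whatever labels your own manifests use.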
To avoid unexpected disruption during the upgrade process, consider using PodDisruptionBudgets with appropriately configured parameters. You’ll also want to run containers in host networking mode to eliminate the additional network overhead of Docker networking. Make sure you run only one Redis instance per VM, otherwise you may hit port conflicts; this can be achieved with podAntiAffinity.

How to get started

Use Kubernetes to deploy a container to run Redis on GKE. The example below shows the steps to deploy Redis Cluster on GKE:

1. Provision a GKE cluster (if prompted, specify your preferred GCP project ID or zone).
2. Clone an example git repository.
3. Create config maps.
4. Deploy Redis pods, and wait until the deployment completes.
5. Prepare a list of Redis cache nodes.
6. Submit a job to configure Redis Cluster.
7. Confirm that the job “redis-create-cluster-xxxxx” shows completed status.

Limitations depend highly on how you design the cluster.

Backing up and restoring manually built Redis

Both GKE and Compute Engine follow the same method to back up and restore your databases. Copying the RDB file is completely safe while the server is running, because the RDB file is never modified once produced. To back up your data, copy the RDB file somewhere safe, such as Cloud Storage:

Create a cron job on your server that takes hourly snapshots of the RDB files into one directory, and daily snapshots into a different directory.
Every time the cron script runs, call the find command to delete old snapshots: for instance, keep hourly snapshots for the latest 48 hours, and daily snapshots for one or two months. Name the snapshots with date and time information.
At least once a day, transfer an RDB snapshot outside your production environment. Cloud Storage is a good place for this.

To restore a data set from an RDB file, disable AOF and remove the AOF and RDB files before restoring data to Redis.
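The snapshot-rotation cron job described above can be sketched as a small shell function; the directory layout and retention windows are assumptions, and the demo at the end stands in for a real /var/lib/redis/dump.rdb:

```shell
# Rotate Redis RDB snapshots: hourly and daily copies, old ones expired.
rotate_redis_snapshots() {
  local rdb_src=$1 hourly_dir=$2 daily_dir=$3
  mkdir -p "$hourly_dir" "$daily_dir"
  # Name each copy with date and time so snapshots never collide
  cp "$rdb_src" "$hourly_dir/dump-$(date +%Y%m%d-%H%M%S).rdb"
  cp "$rdb_src" "$daily_dir/dump-$(date +%Y%m%d).rdb"
  # Keep 48 hours of hourly snapshots and roughly two months of dailies
  find "$hourly_dir" -name 'dump-*.rdb' -mmin +2880 -delete
  find "$daily_dir"  -name 'dump-*.rdb' -mtime +60 -delete
}

# Demo against a throwaway directory with a fake RDB payload
demo=$(mktemp -d)
echo 'REDIS0008' > "$demo/dump.rdb"
rotate_redis_snapshots "$demo/dump.rdb" "$demo/hourly" "$demo/daily"
ls "$demo/hourly" "$demo/daily"

# In the real cron job, also ship the newest daily offsite, e.g.:
# gsutil cp "$daily_dir/dump-$(date +%Y%m%d).rdb" gs://your-bucket/redis/
```

Run from cron hourly; the gsutil line (commented out, bucket name assumed) covers the daily offsite copy to Cloud Storage.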
To complete the restore, copy the RDB file back from remote storage and simply restart redis-server. Redis tries to restore data from the AOF file if AOF is enabled, and starts with an empty data set if the AOF file cannot be found. Note that once an RDB snapshot is triggered by key changes, the original RDB file is rewritten.

Redis on Compute Engine

You can also deploy your own open source Redis Cluster on Google Compute Engine if you want to use Redis Cluster, or want to read from replicas. The possible deployment options are:

Run Redis directly on a Compute Engine instance; this is the simplest way to run the Redis service processes.
Run Redis containers on Docker on a Compute Engine instance.

Pros of Redis on Compute Engine

You’ll have full control of Redis. You can configure, manage, and operate it as you like.

Cons of Redis on Compute Engine

It’s not managed. You have to handle administrative tasks such as hardware provisioning, setup, configuration management, software patching, failover, backup and restore, and monitoring.
Availability, scalability, and performance depend on how you architect it. For example, a standalone setup is not ideal for production because it would be a single point of failure, so consider configuring master/slave replication with Sentinel for redundant nodes, or set up a cluster.
There’s a steeper learning curve. This option requires you to learn Redis itself in more detail.

For best results, run containers in host networking mode to eliminate the additional network overhead of Docker networking. Make sure you run only one Redis container per VM, otherwise you’ll hit port conflicts. Limitations depend highly on how you design the cluster.

How to get started

Provision Compute Engine instances by deploying containers on VMs and managed instance groups. Alternatively, you can run your containers on Compute Engine instances using whatever container technologies and orchestration tools you need.
You can create an instance from a public VM image and then install the container technologies that you want, such as Docker. Package service-specific components into separate containers and upload them to a registry such as Google Container Registry.

The steps to configure Redis on Compute Engine instances are pretty basic if you’re already using Compute Engine, so we don’t describe them here. Check out the Compute Engine docs and the open source Redis docs for more details.

Redis performance testing

It’s always necessary to measure the performance of your system to identify bottlenecks before you expose it in production. The key factors affecting Redis performance are CPU, network bandwidth and latency, the size of the data set, and the operations you perform. If benchmark results don’t meet your requirements, consider scaling your infrastructure up or out, or adjust the way you use Redis. There are a few ways to do benchmark testing against multiple Cloud Memorystore for Redis instances deployed using Twemproxy with an internal load balancer in front.

redis-benchmark

redis-benchmark is an open source command-line benchmark tool for Redis, included with the open source Redis package.

memtier_benchmark

memtier_benchmark is an open source command-line benchmark tool for NoSQL key-value stores, developed by Redis Labs. It supports both the Redis and Memcache protocols, and can generate various traffic patterns against instances.

Migrating Redis to GCP

The most typical Redis customer journey to GCP we see is migration from other cloud providers. Here are a few options for performing a Redis data migration:

Set up a master/slave relationship to replicate the data.
Load persistence files: use an append-only file (AOF) or Redis database (RDB) file to restore the data.
Use the MIGRATE command.
Use the redis-port tool developed by CodisLabs.

If you would like to work with Google experts to migrate your Redis deployment onto GCP, get in touch and learn more here.
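Two of the migration options above can be sketched from the command line; the hostnames and key name are illustrative assumptions, and the commands must run from a host that can reach both environments:

```shell
# Option: MIGRATE moves a key from the old instance to the new one.
# Syntax: MIGRATE host port key destination-db timeout [COPY] [REPLACE]
redis-cli -h old-redis.example.com \
  MIGRATE new-redis.example.com 6379 user:1001 0 5000 COPY REPLACE

# Option: restore from an RDB file instead. Copy dump.rdb into the new
# instance's data directory (with AOF disabled), then restart redis-server:
# scp old-host:/var/lib/redis/dump.rdb /var/lib/redis/dump.rdb
# sudo systemctl restart redis-server
```

MIGRATE works key by key, so for a full migration you would loop over SCAN output or rely on replication or redis-port for bulk data movement.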
Source: Google Cloud Platform

OpenVPN: Enabling access to the corporate network with Cloud Identity credentials

Editor’s note: Cloud Identity, Google Cloud’s identity as a service (IDaaS) platform, now offers secure LDAP functionality that enables authentication, authorization, and user/group lookups for LDAP-based apps and IT infrastructure. Today, we hear from OpenVPN, which has tested and integrated its OpenVPN Access Server with secure LDAP, enabling your employees and partners to use their Cloud Identity credentials to access applications through VPN. Read on to learn more.

As IT organizations adopt more cloud-based IaaS and SaaS apps, they need a way to let users access them securely while still being able to use legacy LDAP-based apps and infrastructure. The new secure LDAP capabilities in Cloud Identity provide both legacy LDAP platforms and cloud-native applications with a single authentication source: a simple, effective solution to this problem.

In fact, we here at OpenVPN have integrated our OpenVPN Access Server with Cloud Identity, allowing your remote users to connect to your corporate network and apps over VPN with their Cloud Identity (or G Suite) credentials. This helps keep your company secure and ensures your entire team follows the same access protocol.

This illustration demonstrates how Cloud Identity makes security accessible and efficient for any level of enterprise. The top half of the illustration shows the deployment of OpenVPN Access Server in various cloud IaaS providers. As you can see, all instances of Access Server use Cloud Identity for authentication and authorization. The Access Servers are configured with a group called ‘IT Admin,’ which allows SSH access to all application servers on all the private networks.
This allows any employee identity present in Cloud Identity that is a member of the ‘IT Admin’ group to access any of the private networks via VPN and use SSH. Then, as you can see in the lower half of the illustration, remote employees use VPN to connect to your corporate network and apps with their Cloud Identity credentials.

Using Cloud Identity for authentication

OpenVPN Access Server v2.6.1 and later supports secure LDAP and has been tested to work with Cloud Identity. You can find specific configuration instructions on our website.

Using Cloud Identity groups for network access control

As shown in the illustration below, Access Server’s administrative controls make it easy to configure groups. Administrators can configure access controls for these groups with fine granularity, down to an individual IP address and port number. You can configure groups in Access Server that correspond to those stored in Cloud Identity and enforce access controls based on each user’s group membership. You can do this kind of mapping with a script on Access Server; instructions to set up the script are available on our website, and our support staff is ready to help you.

With OpenVPN Access Server, you can protect your cloud applications, connect your premises to the cloud, and provide simple and secure access for your remote employees in a way that scales with the tools you’re already using. Best of all, OpenVPN Access Server is available on GCP Marketplace. Try it out today!
Source: Google Cloud Platform

Latest enhancements now available for Cognitive Services' Computer Vision

This blog was co-authored by Lei Zhang, Principal Research Manager, Computer Vision

You can now extract more insights and unlock new workflows from your images with the latest enhancements to Cognitive Services’ Computer Vision service.

1. Enrich insights with expanded tagging vocabulary

Computer Vision has more than doubled the types of objects, situations, and actions it can recognize per image.

(The original post shows before-and-now example images contrasting the tagging results.)

2. Automate cropping with new object detection feature

Easily automate cropping and do basic counting of the objects you need from an image with the new object detection feature, which can detect thousands of real-life or man-made objects. Each detected object is highlighted by a bounding box denoting its location in the image.

3. Monitor brand presence with new brand detection feature

You can now track logo placement of thousands of global brands from the consumer electronics, retail, manufacturing, and entertainment industries.
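Both features are exposed through the Computer Vision REST API. A hedged sketch with curl; the region in the endpoint, the v2.0 API version, the availability of the Brands visual feature in your region, and the key and image URL are all assumptions to adapt:

```shell
KEY=your-subscription-key
EP=https://westus.api.cognitive.microsoft.com/vision/v2.0

# Object detection: returns a bounding box for each detected object
curl -s -X POST "$EP/detect" \
  -H "Ocp-Apim-Subscription-Key: $KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/storefront.jpg"}'

# Brand detection via the analyze endpoint's Brands visual feature
curl -s -X POST "$EP/analyze?visualFeatures=Brands" \
  -H "Ocp-Apim-Subscription-Key: $KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/storefront.jpg"}'
```

Both calls return JSON with detected items and pixel-coordinate rectangles, which is what makes automated cropping and logo tracking scriptable.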

With these enhancements, you can:

Do at-scale image and video-frame indexing, making your media content searchable. If you’re in media, entertainment, advertising, or stock photography, rich image and video metadata can unlock productivity for your business.
Derive insights from social media and advertising campaigns by understanding the content of images and videos and detecting logos of interest at scale. Businesses like digital agencies have found this capability useful for tracking the effectiveness of advertising campaigns. For example, if your business launches an influencer campaign, you can apply Custom Vision to automatically generate brand inclusion metrics pulling from influencer-generated images and videos.

In some cases, you may need to further customize the image recognition capabilities beyond what the enhanced Computer Vision service now provides by adding specific tagging vocabulary or object types that are relevant to your use case. Custom Vision service allows you to easily customize and deploy your model without requiring machine-learning expertise.

See it in action through the Computer Vision demo. If you’re ready to start building to unlock these insights, visit our documentation pages for image tagging, object detection, and brand detection.
Source: Azure

How to remove the top 3 barriers to AI adoption in business automation

Many businesses are looking to automation to increase productivity, save costs and improve customer and employee experiences.
For example, imagine a large insurance company that processes millions of claims a year. Around 60 percent of its claims are automatically processed, but it would like to automate 85 percent of claims while reducing error rates and costs and increasing top-line revenue. To make these improvements, the company is considering artificial intelligence (AI) to extend its automation capabilities.
While AI technology has the potential to make automation truly intelligent, there are barriers to adopting AI across operations that could limit early success. Three barriers we see most often are:

Business people don’t know how and where AI can be best applied to their problems.
AI algorithms are often disconnected from daily business operations.
AI is difficult for business people to trust, control and monitor.

Introducing IBM Business Automation Intelligence with Watson
To help eliminate these barriers, IBM is designing a learning system to help business managers improve productivity and customer experiences using AI in their daily business operations. IBM Business Automation Intelligence with Watson is an automation capability for creating, managing and governing AI across the enterprise and applying it to operations using Watson. It will be able to access and act on the operational data generated by the IBM Automation Platform for Digital Business.
With Business Automation Intelligence, business leaders will be able to automate work from the mundane to the complex while measuring the impact of AI on business outcomes. Users will be able to apply AI to existing apps to capture the necessary data; run analytics at scale; and deliver continuous, AI-enabled operational improvements.
Overcoming AI barriers
Here’s how Business Automation Intelligence could address each barrier to AI adoption using the hypothetical insurance company example above:
1. Business people don’t know how and where AI can be best applied to their problems.
Business Automation Intelligence will enable business users to identify opportunities for automation by seeing where automation agents (bots that handle specific tasks or functions, with or without intelligence) could potentially have the most impact. Business Automation Intelligence will provide built-in analysis using process mining to find hotspots for automation.
For example, imagine Lisa, an employee at the insurance company. As the business owner of the claims processing system, she uses Business Automation Intelligence to analyze the claims processing operational data and finds her employees are spending a lot of time extracting information from claims documents and entering it into their claims processing system. Based on this data, she could prioritize automating this part of the workflow.
2. AI algorithms are often disconnected from daily business operations.
Business Automation Intelligence is designed to enable you to apply AI at scale to a wide range of styles of work, from the mundane clerical to complex knowledge work. Our goal is to help clients move past one-off AI experiments and use Business Automation Intelligence to methodically discover, create, manage, govern and apply AI to automated business operations across the enterprise, delivering continuous, AI-enabled operational improvements. It will do this with built-in connectivity to the IBM Automation Platform for Digital Business, as well as with several Watson capabilities.
For example, Lisa’s knowledge workers must analyze every claim that isn’t automatically processed and then manually route it to the right claims processor based on complexity. With Business Automation Intelligence, built-in machine learning evaluates the complexity of the claim and automatically routes it to the claims specialist with the appropriate level of experience and expertise.
3. AI is too hard for business people to trust, control and monitor.
Business Automation Intelligence will include work guardrails and performance monitoring so business leaders can control and manage the digital workforce initiatives based on business outcomes. Guardrails will use natural-language rules to define and control the conditions under which the automation operates. To monitor the performance of automation agents, prebuilt dashboards will be included with KPIs that the user defines.
For example, some insurance claims require specialized handling. Lisa sets up guardrails in Business Automation Intelligence that define the types of claims that get immediately routed to a specialist instead of being processed automatically. This helps the company handle specific compliance situations. When these guardrails are embedded alongside the AI algorithm, claims can be managed more comprehensively, ensuring the AI technology is applied consistently.
Learn more about what Business Automation Intelligence can do, or request an invitation to the early access program.
The post How to remove the top 3 barriers to AI adoption in business automation appeared first on Cloud computing news.
Source: Thoughts on Cloud