In part one of the series, we saw how Red Hat OpenShift Scale-CI evolved. In this post, we will look at the various components of Scale-CI. OpenShift Scale-CI is not a single tool; it is a framework that orchestrates a collection of tools to help analyze and improve the scalability and performance of OpenShift. It does this by:
Loading thousands of objects onto a large-scale production cluster to stress the control plane (API server, controllers, etcd), the kubelet, and other system components.
Running various benchmarks, gathering performance data during the run, and visualizing the data to identify bottlenecks and tuning opportunities.
Repeating the scale tests on OpenShift deployed on various clouds, including AWS, Azure, OpenStack, and GCP, to catch performance and scalability regressions.
Another motivation behind building Scale-CI is to onboard and enable other teams to take advantage of the automation, tooling, and hardware to see how well their application or component performs at scale, instead of each team building and maintaining its own clusters, infrastructure, and tools.
Architecture
Scale-CI comprises the following components:
Scale-CI pipeline: Acts as the orchestrator for all of the tools that deploy, configure, monitor, and diagnose OpenShift. This is the entry point for onboarding workloads, which are then run automatically at scale.
Workloads: Sets up tooling on an OpenShift cluster and runs the OpenShift performance and scale workloads against it.
Scale-CI deploy: A collection of playbooks and scripts to provision and install OpenShift on various cloud platforms, including AWS, Azure, GCP, and OpenStack. It also supports scaling the cluster and upgrading it to a desired release payload.
Images: Hosts the source files for the Scale-CI container images. Builds are triggered by commits to this repository; we also periodically trigger rebuilds when the tools in dependent containers are built and published.
Scale-CI graphshift: Deploys a mutable Grafana instance with performance analysis dashboards for OpenShift.
Scale-CI diagnosis: Running OpenShift at high scale is expensive, and a particular configuration, component log, or set of metrics from a scale test often needs to be examined after the cluster has been terminated. This tool helps debug such issues by capturing the Prometheus database from the running Prometheus pods to the local file system, so the metrics can be examined later by running Prometheus locally against the backed-up database. It also captures OpenShift cluster information, including all of the operator-managed components, using must-gather. A minimal sketch of this kind of capture follows this list.
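As a rough illustration of the kind of capture scale-ci diagnosis automates, the sketch below pulls the Prometheus data directory out of the cluster with oc cp and collects cluster state with oc adm must-gather. The pod name, namespace, and paths are the defaults of the cluster monitoring stack and are assumptions for illustration, not the tool's actual implementation.

```python
#!/usr/bin/env python3
"""Minimal sketch of the kind of capture scale-ci diagnosis automates.

Assumes an authenticated `oc` client on PATH and the default
prometheus-k8s-0 pod in the openshift-monitoring namespace; pod names
and paths are illustrative, not taken from the tool itself.
"""
import subprocess
import sys


def run(cmd):
    print("+ " + " ".join(cmd))
    subprocess.run(cmd, check=True)


def capture_prometheus_db(dest="prometheus-backup"):
    # Copy the TSDB directory out of the running Prometheus pod so it can
    # be replayed later with a local Prometheus pointed at this directory.
    run(["oc", "cp",
         "openshift-monitoring/prometheus-k8s-0:/prometheus", dest,
         "-c", "prometheus"])


def capture_must_gather(dest="must-gather"):
    # Collect cluster state for all operator-managed components.
    run(["oc", "adm", "must-gather", f"--dest-dir={dest}"])


if __name__ == "__main__":
    try:
        capture_prometheus_db()
        capture_must_gather()
    except subprocess.CalledProcessError as err:
        sys.exit(f"capture failed: {err}")
```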
The Performance and Scalability team at Red Hat has built a number of other tools to help with our work:
Cluster Loader: Deploys large numbers of user-defined objects to a cluster. Build, configure, and run Cluster Loader to measure the performance metrics of your OpenShift Container Platform deployment in various cluster states. It is part of both OKD and upstream Kubernetes; the sketch after this list illustrates the kind of bulk object creation it automates.
Pbench: A benchmarking and performance analysis framework that runs benchmarks across one or more systems while collecting the configuration of those systems, their logs, and specified telemetry from various tools (sar, vmstat, perf, etc.). The collected data is shipped to the Pbench server, which archives the resulting tarballs, indexes them, and unpacks them for display.
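Cluster Loader itself is driven by a configuration file describing the projects, templates, and quotas to create, but the basic pattern it automates (creating many namespaces and filling each with objects to load the control plane) can be sketched with the Kubernetes Python client. The counts and object types below are arbitrary illustrations, not a Cluster Loader configuration.

```python
"""Illustration of the bulk object creation that Cluster Loader automates,
using the Kubernetes Python client. The namespace count, object count, and
object type are arbitrary; Cluster Loader itself is configured via a file
describing projects, templates, and quotas."""
from kubernetes import client, config

config.load_kube_config()  # reuse the current kubeconfig context
core = client.CoreV1Api()

NAMESPACES = 10          # arbitrary illustrative scale
CONFIGMAPS_PER_NS = 100  # arbitrary illustrative scale

for i in range(NAMESPACES):
    ns_name = f"loadtest-{i}"
    core.create_namespace(
        client.V1Namespace(metadata=client.V1ObjectMeta(name=ns_name)))
    for j in range(CONFIGMAPS_PER_NS):
        core.create_namespaced_config_map(
            ns_name,
            client.V1ConfigMap(
                metadata=client.V1ObjectMeta(name=f"cm-{j}"),
                data={"index": str(j)}))
```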
A typical Scale-CI run installs OpenShift on a chosen cloud provider, sets up tooling to run a pbench-agent DaemonSet, runs Conformance (the e2e test suite) to check the sanity of the cluster, scales the cluster up to the desired node count, and runs various scale tests focusing on control plane density, kubelet density, HTTP/Router, SDN, storage, logging, monitoring, and cluster limits. It also runs a baseline workload that collects configuration and performance data on an idle cluster, to track how the product is trending across OpenShift releases. The results are processed and shipped to the Pbench server for analysis and long-term storage. They are then scraped to generate a machine-readable output (JSON) of the metrics, which is compared with previous runs to pass or fail the job and send a green or red signal.
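To make the final gating step concrete, here is a hypothetical sketch of comparing the scraped metrics of the current run against a previous run. The metric names, tolerance, and file layout are assumptions for illustration; they are not the actual Scale-CI output format.

```python
"""Hypothetical pass/fail gate comparing scraped run metrics (JSON).

Metric names, tolerance, and file names are assumptions for illustration,
not the actual Scale-CI output format."""
import json
import sys

TOLERANCE = 0.10  # fail if a metric regresses by more than 10% (assumed policy)


def load(path):
    with open(path) as f:
        return json.load(f)  # e.g. {"pod_start_latency_p99_ms": 2100, ...}


def compare(current, baseline, tolerance=TOLERANCE):
    failures = []
    for metric, base_value in baseline.items():
        cur_value = current.get(metric)
        if cur_value is None:
            continue  # metric not collected in this run
        if cur_value > base_value * (1 + tolerance):
            failures.append((metric, base_value, cur_value))
    return failures


if __name__ == "__main__":
    failures = compare(load("current_run.json"), load("previous_run.json"))
    for metric, base, cur in failures:
        print(f"REGRESSION {metric}: {base} -> {cur}")
    sys.exit(1 if failures else 0)  # red/green signal for the CI job
```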
For large and long-running clusters, components like Prometheus need more disk and more resources, including CPU and memory. Instead of using bigger worker nodes, we create infrastructure nodes with large amounts of disk, CPU, and memory using custom MachineSets, and we modify the node selectors to ensure that components including Prometheus, Logging, Router, and Registry run on the infrastructure nodes. This is a day-two operation and is needed for large-scale clusters.
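A minimal sketch of that day-two step, assuming an authenticated oc client: label the dedicated machines as infra nodes and steer the monitoring stack onto them through the cluster monitoring operator's ConfigMap. The node names are placeholders, and the ConfigMap name and keys should be checked against the documentation for your OpenShift version; Router, Registry, and Logging are moved through their own operators' settings.

```python
"""Sketch of moving the monitoring stack to infra nodes.

Node names are placeholders; the ConfigMap name and keys follow the
cluster monitoring operator documentation and should be verified
against your OpenShift version."""
import subprocess

INFRA_NODES = ["infra-node-0", "infra-node-1", "infra-node-2"]  # placeholders

MONITORING_CONFIG = """\
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
"""


def run(cmd, stdin=None):
    subprocess.run(cmd, input=stdin, text=True, check=True)


# Label the dedicated machines so workloads can target them.
for node in INFRA_NODES:
    run(["oc", "label", "node", node,
         "node-role.kubernetes.io/infra=", "--overwrite"])

# Point the Prometheus pods at the infra nodes via the monitoring operator.
run(["oc", "apply", "-f", "-"], stdin=MONITORING_CONFIG)
```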
Adding a new workload to the framework or making changes to the existing jobs is as simple as creating a PR using the sample templates provided in the repositories. The Scale-CI watcher picks up the change after the PR is merged and updates the respective jobs.
We spoke about automated OpenShift/Kubernetes scalability testing at KubeCon + CloudNativeCon North America 2018. The slides are available online, and you can also watch the presentation at https://youtu.be/37naDDcmDo4.
We recently scale tested OpenShift 4.1 ahead of its general availability. Keep an eye out for our next post, OpenShift Scale-CI: Part 3, which will cover the highlights of the OpenShift 4.1 scalability run. As always, feedback and contributions are most welcome.