OpenShift 4.3: Alertmanager Configuration

Alerts are only useful if you know about them. That’s why we’re working on adding features to Red Hat OpenShift that make it easier for you to find out about potential problems and solve them before they become incidents. The new cluster overview dashboard is great for checking the status of a cluster, but to get information when you’re away from your cluster, you’ll need to correctly configure your alerting system. One of the first things you should do when you set up a cluster is to use the tools described in this post. Without correct configuration, you won’t receive critical alerts outside of the OpenShift console, and you may miss out on features designed to reduce your mean time to resolution.
Alertmanager Configuration

OpenShift 4.3 contains a new Alertmanager section on the cluster settings page. The options it provides make it easier than ever to tell OpenShift’s monitoring tools how and where to send you notifications.

The first group is the alert routing settings. These fields determine how alerts are grouped into notifications and how long to wait before sending the notifications. Those notifications are then sent to Receivers that can be created and edited from the bottom of the page.
Receivers
Every OpenShift cluster needs a default receiver to handle any alerts not sent to other places. The default receiver that comes with a fresh install is very basic, so your first step should be to configure it to suit your needs. For more complex team structures, you may want to send different kinds of alerts to different places by creating more receivers. The easiest way to do this is to click the create receiver button. We currently offer forms for two types of receivers, webhook and PagerDuty, with more form types coming in the future.

Once you’ve entered the necessary details for the receiver, you can add some routing labels to decide which alerts will be sent there. For instance, you could send warning alerts to an email address and critical alerts to a specific Slack channel.

You can use these forms to create a robust alerting system. But for really complex configuration, it helps to go straight to the source. Switch to the YAML tab to view the raw version of your config and make any necessary changes. You can also use this view to create receiver types that are not currently supported by forms.
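For illustration, the YAML view for a setup like the one described above might look roughly like the following sketch. All receiver names, URLs, and keys here are placeholders, not values from a real cluster:

```yaml
# Illustrative Alertmanager configuration; every name, URL, and key
# below is a placeholder.
global:
  resolve_timeout: 5m
route:
  group_by: [alertname, namespace]  # batch related alerts into one notification
  group_wait: 30s                   # wait before sending the first notification
  group_interval: 5m
  repeat_interval: 12h
  receiver: Default                 # fallback for alerts no child route matches
  routes:
    - match:
        severity: critical          # routing label: critical alerts page someone
      receiver: pagerduty-oncall
    - match:
        severity: warning           # routing label: warnings go to a webhook
      receiver: team-webhook
receivers:
  - name: Default
    webhook_configs:
      - url: https://example.com/default-hook
  - name: pagerduty-oncall
    pagerduty_configs:
      - service_key: <integration-key>
  - name: team-webhook
    webhook_configs:
      - url: https://example.com/team-hook
```

The `route` section corresponds to the alert routing settings at the top of the page, and each entry under `receivers` corresponds to a receiver created from the form.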
Information in the right places
Using the new Alertmanager configuration tools in OpenShift 4.3, you can direct alerts to the teams that need them, and avoid bothering the teams that don’t. These features are part of an effort to make problem-solving in OpenShift simpler and to reduce time to resolution. Follow along with the OpenShift Console and OpenShift Design GitHub repositories to see what new work is happening. If you’d like to provide feedback on any of the new 4.3 features, please take this brief three-minute survey.
The post OpenShift 4.3: Alertmanager Configuration appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Top 5 DevOps predictions for 2020

There are five DevOps trends that I believe will leave a mark in 2020. I’ll walk you through all five, plus some recommended next steps to take full advantage of these trends.
In 2019, Accenture’s disruptability index found that at least two-thirds of large organizations are facing high levels of industry disruption. Driven by pervasive technology change, organizations pursued new, more agile business models and new opportunities. Organizations that delivered applications and services faster were able to react more swiftly to those market changes and were better equipped to disrupt rather than be disrupted. A study by DevOps Research and Assessment (DORA) shows that the best-performing teams deliver applications 46 times more frequently than the lowest-performing teams. That means delivering value to customers every hour, rather than monthly or quarterly.
2020 will be the year of delivering software at speed and with high quality, but the big change will be a focus on strong DevOps governance. The desire to take a DevOps approach is the new normal. We are entering a new chapter that calls for DevOps scalability, for better ways to manage multiple tools and platforms, and for tighter alignment of IT to the business. DevOps culture and tools are critical, but without strong governance you can’t scale. To succeed, business needs must be the driver; the end state, after all, is one where increased IT agility enables maximum business agility. To improve trust across formerly disconnected teams, all DevOps stakeholders, including the business, will need common visibility and insights into the end-to-end pipeline.
DevOps trends in 2020
What will be the enablers and catalysts in 2020 driving DevOps agility?
Prediction 1: DevOps champions will enable business innovation at scale. From leaders to practitioners, DevOps champions will collaborate, sharing common desires, concerns and requirements. This collaboration will include the following:

A desire to speed the flow of software
Concerns about the quality of releases, release management, and how quality activities impact the delivery lifecycle and customer expectations
Continual optimization of the delivery process, including visualization and governance requirements

Prediction 2: More fragmentation of DevOps toolchains will motivate organizations to turn to value streams. 2020 will be the year of more DevOps for X, DevOps for Kubernetes, DevOps for clouds, DevOps for mobile, DevOps for databases, DevOps for SAP, etc. In the coming year, expect to see DevOps for anything involved in the production and delivery of software updates, application modernization, service delivery and integration. Developers, platform owners and site reliability engineers (SREs) will be given more control and visibility over the architectural and infrastructural components of the lifecycle. Governance will be established, and the growing set of stakeholders will get a positive return from having access and visibility to the delivery pipeline.
Figure 1: UrbanCode Velocity and its Value Stream Management screen enable full DevOps governance.
Prediction 3: Tekton will have a significant impact on cloud-native continuous delivery. Tekton is a set of shared open-source components for building continuous integration and continuous delivery systems. What if you were able to build, test and deploy apps to Kubernetes using an open source, vendor-neutral, Kubernetes-native framework? That’s the Tekton promise, under a framework of composable, declarative, reproducible and cloud-native principles. Tekton has a bright future now that it is strongly embraced by a large community of users along with organizations like Google, CloudBees, Red Hat and IBM.
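As a sketch of what that Kubernetes-native framework looks like in practice, here is a minimal Tekton Task; the resource name, image, and script are illustrative, and the API version reflects the v1beta1 API of early 2020:

```yaml
# Illustrative Tekton Task: two sequential steps, each running in its
# own container image on the cluster.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-and-test        # placeholder name
spec:
  steps:
    - name: unit-tests
      image: golang:1.13      # any builder image would do
      script: |
        #!/bin/sh
        go test ./...
    - name: build
      image: golang:1.13
      script: |
        #!/bin/sh
        go build -o app .
```

Because a Task is just a Kubernetes custom resource, it is declarative and reproducible by design; a TaskRun resource (or a Pipeline composing several Tasks) then triggers the actual execution.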
Prediction 4: DevOps accelerators will make DevOps kings. In the search for holistic optimization, organizations will move from providing integrations to creating sets of “best practices in a box.” These will deliver what is needed for systems to talk fluidly while remaining auditable for compliance. These assets will become easier to discover, adopt and customize. Test assets that have traditionally been developed and maintained by software and system integrators will be provided by ambitious user communities, vendors, service providers, regulatory services and domain specialists.
Prediction 5: Artificial intelligence (AI) and machine learning in DevOps will go from marketing to reality. Tech giants, such as Google and IBM, will continue researching how to bring the benefits of DevOps to quantum computing, blockchain, AI, bots, 5G and edge technologies. They will also continue to look at how these technologies can be used within continuous deployment, continuous software testing, performance testing, and other parts of the DevOps pipeline. DevOps solutions will be able to detect, highlight or act independently when opportunities for improvement or risk mitigation surface, from the moment an idea becomes a story until that story becomes a solution in the hands of users.
Next steps
Companies embracing DevOps will need to carefully evaluate their current internal landscape, then prioritize next steps for DevOps success.
First, identify a DevOps champion to lead the efforts, beginning with automation. Establishing an automated and controlled path to production is the starting point for many DevOps transformations and one where leaders can show ROI clearly.
Then, the focus should turn toward scaling best practices across the enterprise and introducing governance and optimization. This includes reducing waste, optimizing flow and shifting security, quality and governance to the left. It also means increasing the frequency of complex releases by simplifying, digitizing and streamlining execution.
Figure 2: Scaling best practices across the enterprise.
These are big steps, so accelerate your DevOps journey by aligning with a vendor that has a long-term vision and a reputation for helping organizations navigate transformational journeys successfully. OVUM and Forrester have identified organizations that can help support your modernization in the following OVUM report, OVUM webinar and Forrester report.
Do you agree with these predictions? Do you have any others? Maybe an early 2020 DevOps success story? Looking forward to reading those on Twitter at @IBMCloud.
The post Top 5 DevOps predictions for 2020 appeared first on Cloud computing news.
Source: Thoughts on Cloud

How We Solved a Report on docker-compose Performance on macOS Catalina

Photo by Caspar Camille Rubin on Unsplash

As a Docker Compose maintainer, my daily duty is to check newly reported issues and try to help users through misunderstandings and possible underlying bugs. Sometimes issues are very well documented; sometimes they are nothing more than a “please help” message. And sometimes they look really weird and lead to funny investigations. Here is the story of how we solved one such report…

A one-line bug report

An issue was reported as “docker-compose super slow on macOS Catalina”: no version, no details. How should I prioritize this? I don’t even know if the reporter is using the latest version of the tool; the issue doesn’t follow the bug reporting template. It is just a one-liner. But for some reason, I decided to take a look at it anyway and diagnose the issue.

Without any obvious explanation for the super-slowness, I decided to take a risk and upgrade my own MacBook to OSX Catalina. I was able to reproduce a significant slowdown in docker-compose execution, waiting several seconds for the very first line to be printed on the console, even just to display usage for an invalid command.
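To make “waiting seconds” concrete, a small timing harness along these lines (a sketch, not the exact measurements from the report) is enough to quantify the startup delay:

```python
import shutil
import subprocess
import sys
import time


def startup_latency(cmd, runs=3):
    """Best-of-N wall-clock time, in seconds, to run `cmd` to completion."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        # We only care about process startup, so discard all output.
        subprocess.run(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        best = min(best, time.perf_counter() - start)
    return best


if __name__ == "__main__":
    # Baseline: starting a bare Python interpreter.
    print("python startup: %.2fs" % startup_latency([sys.executable, "-c", "pass"]))
    # The symptom: even an invalid command took seconds to print usage.
    if shutil.which("docker-compose"):
        print("docker-compose: %.2fs" % startup_latency(["docker-compose", "invalid-command"]))
```

On an affected Catalina machine, the docker-compose number sat several seconds above the interpreter baseline, on every single invocation.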

Investigating the issue

In the meantime, some users reported getting correct performance when installing docker-compose as plain Python software rather than as the packaged executable. The docker-compose executable is packaged using PyInstaller, which bundles a Python runtime and libraries together with the application code in a single executable file. As a result, one gets a distributable binary that can be created for Windows, Linux and OSX. I wrote a minimalist “hello world” Python application and was able to reproduce the same weird behaviour, a few-second startup delay, once it was packaged the same way docker-compose is.

Here comes the funny part. I’m a remote worker on the Docker team, and I sometimes have trouble with my Internet connection. It happened this exact day, as my network router had to reboot. And during the reboot sequence, docker-compose performance suddenly became quite good … but eventually, the initial execution delay came back. How do you explain such a thing?

So I installed Charles Proxy to analyze network traffic, and discovered a request sent to api.apple-cloudkit.com each and every time docker-compose was run. Apple CloudKit is Apple’s cloud storage SDK, and there’s no obvious relation between docker-compose and this service.

As the Docker Desktop team was investigating Catalina support during this period, I heard about the notarization constraints introduced by the Apple OS upgrade. I decided to reconfigure my system with the system integrity check disabled (you have to run ‘csrutil disable’ from the recovery console on boot). Here again, docker-compose suddenly ran reasonably fast.

Looking into PyInstaller’s implementation details: when executed, the docker-compose binary extracts itself into a temporary folder, then executes the embedded Python runtime to run the packaged application. This bootstrap sequence takes the blink of an eye on a recent computer with the tmp folder mapped to memory, but on my Catalina-upgraded MacBook it took up to 10 seconds, until I disabled the integrity check.
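A packaged application can actually observe this bootstrap itself, through the runtime attributes PyInstaller’s bootloader sets. Here, sys.frozen and sys._MEIPASS are documented PyInstaller behaviour; the function wrapping them is mine:

```python
import sys


def pyinstaller_bootstrap_info():
    """Report whether this process was started from a PyInstaller bundle.

    In a one-file build, the bootloader sets sys.frozen and points
    sys._MEIPASS at the temporary folder the archive was just extracted
    into, the very extraction that Catalina re-scanned on every run.
    Running from plain source, neither attribute is set.
    """
    if getattr(sys, "frozen", False):
        return {"frozen": True, "extracted_to": getattr(sys, "_MEIPASS", None)}
    return {"frozen": False, "extracted_to": None}


if __name__ == "__main__":
    print(pyinstaller_bootstrap_info())
```

Run from source this reports frozen as False; run from a one-file bundle, extracted_to points at the throwaway folder whose contents the system scanned from scratch on each invocation.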

Confirming the hypothesis

My assumption was that OSX Catalina’s reinforced security constraints apply to the Python runtime as it gets extracted: the system runs a security scan and sends a scan report to Apple over its own cloud storage service. I can’t remember having approved sending such data to Apple, but I admit I didn’t carefully read the upgrade guide and service agreement before I hit the “upgrade to Catalina” button. Because a fresh new Python runtime is extracted for each temporary execution, this takes place each and every time we run a docker-compose command: a new system scan, a new report sent to Apple, not even as a background task.

To confirm this hypothesis, I built a custom flavour of docker-compose using an alternate PyInstaller configuration, so that it doesn’t create a single binary but a folder with the runtime and libraries. The first execution of this custom docker-compose packaging took 10 seconds again (the initial scan by the system), but subsequent commands were as fast as expected.

The resolution

A few weeks later, a release candidate build was included in the Docker Desktop Edge channel to confirm that Catalina users got good performance with this alternate packaging, without introducing unexpected bugs. docker-compose 1.25.1 was released one month later with the bug fix confirmed. Starting with this release, docker-compose is available both as a single binary and as a tar.gz for OSX Catalina.

The post How We Solved a Report on docker-compose Performance on macOS Catalina appeared first on Docker Blog.
Source: https://blog.docker.com/feed/