Best Kept Security Secrets: Tap into the power of Organization Policy Service

The canvas of cloud resources is vast, ready for an ambitious organization to craft its digital masterpiece (or perhaps just its business). Yet before the first brush of paint is applied, a painter in the cloud needs to think about the frame: what shape it should take, what material it is made of, and how it will look as a border against the canvas of their cloud service. Google Cloud's Organization Policy Service is just such a frame, a broad set of tools that lets our customers' security teams set broad yet unbendable limits for engineers before they start working.

Google Cloud's Organization (org) Policy Service is one of our most powerful features but is often under-appreciated by security teams. It provides for a separation of duties by focusing on what can be done: the administrator sets restrictions on specific resources to determine how they can be configured. This drives defense in depth against configuration errors as well as defense in depth against attacks. An org policy lets the administrator enforce compliance and conformance at a higher level than Identity and Access Management (IAM), which focuses on which users can access specific resources.

Org policies can reduce toil and can improve security at the scale needed by today's cloud users. Financial services provider HSBC is one of Google Cloud's largest customers and has been using org policies for years to help it manage cloud resources across its highly regulated enterprise environment. As the company explains in this video, HSBC's creative use of org policies manages more than 15,000 service accounts and 40,000 IT professionals. They control 6.5 million virtual machines per year. That's 22,500 virtual machines per day, and only 2,500 of those VMs exist for more than 24 hours.

HSBC prefers org policies over other preventative controls because they are native to Google Cloud and can be enforced independently of how a request originated (such as from Infrastructure as Code, Google Cloud services interacting with each other, or a user in the UI). Detecting resource violations is expensive for many customers, and often comes too late to prevent harm. Org policies can be deployed to prevent violations from occurring, eliminating detection and remediation costs. Importantly, HSBC's custom installation is designed so that org policy violations are immediately discoverable, which helps HSBC personnel quickly and accurately correct an error condition. When an action violates an org policy, an error code is returned telling the resource requester which policy was violated. Corresponding logs are generated for administrators to monitor and to support further troubleshooting.

[Diagram of the organization policy workflow]

Here are two additional use cases that further illustrate the power of organization policies. First, organizations that operate in a region with rigorous data residency requirements can configure and enable the Location org policy to help ensure that all resources created (such as VMs, clusters, and buckets) are deployed in a particular cloud region; a gcloud sketch follows below. Second, admins who want to ensure that only trusted workloads are deployed for Google Kubernetes Engine (GKE) or Cloud Run may want to restrict developers to using only verified images in their deployment processes. They can create a custom org policy that targets the GKE cluster resource type and its create and update methods, blocking the creation or update of any clusters that do not have Binary Authorization enforced.
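To make the first use case concrete, here is a minimal sketch of enforcing the resource locations constraint with gcloud. The organization ID is a placeholder, and "in:us-locations" is one of the predefined value groups; check the current documentation for the flags your gcloud version expects. (The Binary Authorization use case is sketched as a custom constraint later in this post.)

# Allow new resources to be created only in US locations, organization-wide
# (the organization ID is a placeholder)
$ gcloud resource-manager org-policies allow gcp.resourceLocations \
    in:us-locations --organization=123456789012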
How it works

Google Cloud offers more than 80 org policies that can be used to restrict and govern interactions with Google Cloud services and resources across important domains such as security, reliability, and compliance. Org policies can help:

- Restrict resource and service access to the organization domain only, secure public access to resources, or stop service account key abuse.
- Enforce use of global or regional DNS, and global or regional load balancing, to improve service reliability and availability.
- Specify which services can access resources, in which regions, and at what times, in support of compliance objectives.
- Secure Virtual Private Cloud (VPC) networks and reduce data exfiltration risk by preventing data from leaving a specific perimeter.

See the Organization Policy Service list of constraints for more about org policies and constraints.

You can also use the recently introduced Custom Organization Policies to tailor guardrails so they meet your specific compliance and security requirements. With Custom Organization Policies, security administrators can create their own constraints using Common Expression Language (CEL) to define which resource configurations are allowed or denied. Administrators can develop and deploy new policies and constraints in minutes. With great power comes great responsibility, so with that in mind we will soon be introducing Dry Run for Custom Org Policies. It will let users put a policy in an audit-only mode to observe behavior during real operations without putting production workloads at risk.

Getting started

Setting up your first org policy is straightforward. An organization policy administrator enables a new organization policy on a Google Cloud organization, folder, or project in scope. Once set, the administrator then determines and applies the constraints. Here's how it works:

1. Design your constraint, which is a particular type of restriction against either a single Google Cloud service or a group of Google Cloud services. You can choose from the list of available built-in constraints, configuring the desired restrictions and exceptions (based on tags), or create custom org policies. It's important to remember that descendants of the targeted resource hierarchy node inherit the org policy. By applying an organization policy to the root organization node, you can drive enforcement of that organization policy and its restrictions across your organization.

2. Deploy the org policy to evaluate and allow or deny resource Create, Update, and Delete operations. This can be done through the Google Cloud console, gcloud, or via the API; a sketch using gcloud appears after this section.

3. Monitor audit logs and your Security Command Center Premium findings to detect and respond to policy violations.

Do I need an org policy?

Org policies can help maintain security and compliance at scale while also allowing development teams to work rapidly. Because they give you the ability to set broad guardrails, they can help ensure compliance and monitor policy violations without adding operational overhead.
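To make the getting-started steps concrete, here is a minimal sketch, assuming a hypothetical organization ID and constraint name, of defining the custom Binary Authorization constraint from the earlier use case in CEL and applying it with gcloud. The exact resource field referenced in the condition should be verified against the current custom constraint documentation.

# custom_constraint.yaml: allow GKE cluster create/update only with Binary Authorization
# (organization ID, constraint name, and the condition's field path are illustrative)
name: organizations/123456789012/customConstraints/custom.requireBinaryAuth
resourceTypes:
- container.googleapis.com/Cluster
methodTypes:
- CREATE
- UPDATE
condition: "resource.binaryAuthorization.enabled == true"
actionType: ALLOW
displayName: Require Binary Authorization on GKE clusters

# policy.yaml: enforce the custom constraint at the organization level
name: organizations/123456789012/policies/custom.requireBinaryAuth
spec:
  rules:
  - enforce: true

Create the constraint, then apply the policy:

$ gcloud org-policies set-custom-constraint custom_constraint.yaml
$ gcloud org-policies set-policy policy.yaml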
To learn more about org policy, please review these resources:

- Read the Creating and Managing Organizations page to learn how to acquire an organization resource.
- Read about how to create and manage organization policies with the Google Cloud console.
- Learn how to define organization policies using constraints.
- Explore the solutions you can accomplish with organization policy constraints.
- Listen to the podcast where Vandy Ramadurai, Google Cloud's Org Policy product manager, explains it all.
Quelle: Google Cloud Platform

New startup CPU boost improves cold starts in Cloud Run, Cloud Functions

We are announcing startup CPU boost for Cloud Run and Cloud Functions 2nd gen, a new feature that lets you drastically reduce the cold start time of Cloud Run services and Cloud Functions. With startup CPU boost, more CPU is dynamically allocated to your container during startup, allowing it to start serving requests faster. For some workloads we measured, startup time was cut in half.

Making cold starts a little warmer

A "cold start" is the latency encountered in the processing of a request due to the startup of a new container instance to serve that request. For example, when a Cloud Run service scales down to zero instances and a new request reaches the service, an instance needs to be started in order to process the request. In addition to the zero-to-one scale event, cold starts often happen when services are configured to serve a single concurrent request, or during traffic scaling events. Minimum instances can be used to remove the cold start encountered when going from zero to one instance, but min instances aren't a solution for all cold starts as traffic scales out to higher numbers of instances. As part of our continued efforts to give you more control over cold start latency, startup CPU boost can help speed up every cold start.

Results

Java applications, in particular, appear to benefit greatly from the startup CPU boost feature. Internal testers and private preview customers reported the following startup time reductions for their Java applications:

- up to 50% faster for the Spring PetClinic sample application
- up to 47% faster for a Native Spring w/GraalVM service
- up to 23% faster for a plain Java Cloud Function

Customers testing the feature in private preview with Node.js observed startup time reductions of up to 30%: a significant improvement, though smaller than Java's gains due to the single-threaded nature of Node.js. Each language, framework, and code base will see different levels of benefit.

Get started

You can enable startup CPU boost for your existing Cloud Run service with one command:

$ gcloud beta run services update SERVICE --cpu-boost

Even better, Cloud Functions uses startup CPU boost by default. To learn more, check out the documentation.
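If you manage Cloud Run services declaratively, the same setting can be expressed on the service manifest. Here is a minimal sketch assuming the run.googleapis.com/startup-cpu-boost revision annotation that the --cpu-boost flag configures; the service name is hypothetical, and the exact annotation key should be verified against the current Cloud Run documentation.

# service.yaml (excerpt): enable startup CPU boost on the revision template
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service   # hypothetical service name
spec:
  template:
    metadata:
      annotations:
        run.googleapis.com/startup-cpu-boost: "true"

Apply it with:

$ gcloud run services replace service.yaml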
Quelle: Google Cloud Platform

EPAM and Microsoft partner on data governance solutions with Microsoft Energy Data Services

This blog was co-authored by Emile Veliyev, Director, OSDU Delivery, EPAM.

The energy industry creates and consumes large amounts of highly complex data for key business decisions, like where to drill the next series of wells, how to optimize production, and where to lease the next big field. Despite good intentions, the industry is still plagued by large quantities of data that are inconsistent in location, quality, and format—much of which cannot reliably be found or used when needed. Even when the data is reliable, it can be locked into application-specific data stores that limit its use. The solution to this dilemma is multi-faceted and increasingly includes cloud technology, the OSDU™ Data Platform, modern applications, and data governance focused on people and their business processes.

Microsoft Energy Data Services is a data platform, fully supported by Microsoft, that enables efficient data management, standardization, liberation, and consumption in energy exploration. The solution is a hyperscale data ecosystem that leverages the capabilities of the OSDU Data Platform, Microsoft's secure and trustworthy cloud services, and our partners' extensive domain expertise.

Cloud and the OSDU Data Platform

Cloud-based computing is the future—scalable, reliable, secure storage and compute capabilities, all managed for you with many powerful add-on capabilities at your fingertips. For the energy industry, the Open Group® OSDU Data Platform is rapidly emerging as the standard—an open source, cloud-based data platform that unlocks data from applications and provides standard data schemas and access protocols, enabling both data governance and rapid innovation.

One of the things that EPAM discovered when delivering app developer boot camps and deploying the platform for ourselves and for clients is its high level of complexity. In those earlier days, platform deployment was a multi-step process, with each service being deployed and validated separately, taking up to a week. Before we could move on to solving business problems, a part of our work was to guide our clients through various technical deployment obstacles. In addition, it took another several days to ingest pre-formatted sample data in order to test the platform with real data. Not anymore.

Microsoft Energy Data Services

Microsoft has made the OSDU Data Platform enterprise-ready, pre-bundled with the capabilities needed to get the most value from energy company data using the Microsoft Cloud. EPAM has seen its benefits. As an enterprise-grade platform, Microsoft Energy Data Services offers nearly single-click deployment. Deployment time has been reduced significantly: what previously took multiple days now takes about 45 minutes! Similarly, the time to ingest the sample data has dropped from one week to around one hour! In addition, the management layer surrounding the platform provides the assured reliability, stability, security, tools, performance, and SLAs needed by large enterprises such as major energy companies.

Data governance and modern applications

As noted before, excellent infrastructure alone does not magically solve all data and business problems. With Microsoft Energy Data Services providing a solid foundation with which to store data, process data, and build and host cloud-native apps aligned with the OSDU Technical Standard, what remains to empower a data-driven organization is modern applications and data governance.

It is a daunting task to manually track the manifold ways that data enters the company, the many places it is stored, and the many ways it is consumed, enriched, and duplicated. Improving this requires a team who can map out the detailed way in which all of this happens today. It also takes modern digital tools to automate the aggregation, parsing, quality assessment, and lineage-tracking of the data. It takes people with a broad and deep view to accomplish this for large organizations—people who understand the business, the data types, the technology, and how to provide the right data, in the right formats, in the right place, at the right time, to the right people. That includes application connectors and analytical applications themselves designed for the modern cloud environment so that liberated data can move back and forth to users seamlessly.

How to work with EPAM on Microsoft Energy Data Services

EPAM brings industry knowledge, technical expertise, tools, frameworks, relationships with software vendors, and world-class delivery built on the Microsoft Energy Data Services platform. EPAM has developed a document extraction and processing system (DEPS) accelerator, which facilitates the development of customizable workflows for extracting and processing unstructured data in the form of scanned or digitized documents. DEPS is powered by Azure AI, machine learning, and deep learning algorithms. It includes pluggable sub-systems for customization, machine learning pre- and post-processors, validation and extensions for UI review, automated machine learning model training, manual labeling, and analytics capabilities to improve classic optical character recognition (OCR) and text extraction accuracy. DEPS can be adapted to process numerous data types covering both image and text, including PDF, XLS, ASCII, and other file formats. To learn more, contact OSDU@epam.com.

Microsoft Energy Data Services is an enterprise-grade, fully managed OSDU Data Platform for the energy industry that is efficient, standardized, easy to deploy, and scalable for data management: ingesting, aggregating, storing, searching, and retrieving data. The platform provides the scale, security, privacy, and compliance expected by our enterprise customers. EPAM offers services that provide the right data, in the right formats, in the right place, at the right time, to the right people, including application connectors and analytical applications working with data contained in Microsoft Energy Data Services.

Get started with Microsoft Energy Data Services today.
Quelle: Azure

Creating Kubernetes Extensions in Docker Desktop

This guest post is courtesy of one of our Docker Captains! James Spurin, a DevOps Consultant and Course/Content Creator at DiveInto, recalls his experience creating the Kubernetes Extension for Docker Desktop. Of course, every journey has its challenges. But being able to leverage the powerful open source benefits of the loft.sh vcluster Extension was well worth the effort!

Ever wondered what it would take to create your own Kubernetes Extensions in Docker Desktop? In this blog, we’ll walk through the steps and lessons I learned while creating the k9s Docker Extension and how it leverages the incredible open source efforts of loft.sh vcluster Extension as crucial infrastructure components.

Why build a Kubernetes Docker Extension?

When I initially encountered Docker Extensions, I wondered:

“Can we use Docker Extensions to communicate with the inbuilt Docker-managed Kubernetes server provided in Docker Desktop?”

Docker Extensions open many opportunities with the convenient full-stack interface within the Extensions pane.

Traditionally when using Docker, we’d run a container through the UI or CLI. We’d then expose the container’s service port (for example, 8080) to our host system. Next, we’d access the user interface via our web browser with a URL such as http://localhost:8080.

While the UI/CLI makes this relatively simple, this would still involve multiple steps between different components, namely Docker Desktop and a web browser. We may also need to repeat these steps each time we restart the service or close our browser.
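As a minimal sketch of that traditional workflow (the image and container names here are hypothetical):

# Run a container and publish its service port to the host
$ docker run -d --name my-service -p 8080:8080 my-service-image
# Then browse to http://localhost:8080 to reach the UI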

Docker Extensions solve this problem by helping us visualize our backend services through the Docker Dashboard.

Combining Docker Desktop, Docker Extensions, and Kubernetes opens up even more opportunities. This toolset lets us productively leverage Docker Desktop from the beginning stages of development to container creation, execution, and testing, leading up to container orchestration with Kubernetes.

Challenges creating the k9s Extension

Wanting to see this in action, I experimented with different ways to leverage Docker Desktop with the inbuilt Kubernetes server. Eventually, I was able to bridge the gap and provide Kubernetes access to a Docker Extension.

At the time, this required a privileged container — a security risk. As a result, this approach was less than ideal and wasn’t something I was comfortable sharing…

Let’s dive deeper into this.

Docker Desktop uses a hidden virtual machine to run Docker. The Docker-managed Kubernetes instance also runs within this virtual machine, deployed via kubeadm.

Docker Desktop conveniently provides the user with a local preconfigured kubeconfig file and kubectl command within the user's home area. This makes accessing Kubernetes less of a hassle. It works, and it's a fantastic way to fast-track access for those looking to leverage Kubernetes from the convenience of Docker.

However, this simplicity poses some challenges from an extension’s viewpoint. Specifically, we’d need to find a way to provide our Docker Extension with an appropriate kubeconfig file for accessing the in-built Kubernetes service.

Finding a solution with loft.sh and vcluster

Fortunately, the team at loft.sh and vcluster were able to address this challenge! Their efforts provide a solid foundation for those looking to create their Kubernetes-based Extensions in Docker Desktop.

When launching the vcluster Docker Extension, you’ll see that it uses a control loop that verifies Docker Desktop is running Kubernetes.

From an open source viewpoint, this has tremendous reusability for those creating their own Docker Extensions with Kubernetes. A progress indicator shows vcluster checking for a running Kubernetes service. If the service is running, the UI loads accordingly; if not, an error is displayed.

While internally verifying that the Kubernetes server is running, loft.sh's vcluster Extension cleverly captures the Docker Desktop Kubernetes kubeconfig. The vcluster Extension does this using a JavaScript host CLI call, with kubectl binaries included in the extension (to provide compatibility across Windows, Mac, and Linux).

Then, it posts the captured output to a service running within the extension. The service in turn writes a local kubeconfig file for use by the vcluster Extension. 🚀

// Gets the docker-desktop kubeconfig file from the host and saves it in the container's
// /root/.kube/config file-system. We have to use the vm.service to call the post API to
// store the retrieved kubeconfig; without the post API in vm.service, all the other
// combinations of commands fail.
// (hostCli is a helper defined elsewhere in the extension that runs a bundled binary on
// the host; DockerDesktop is a constant holding the "docker-desktop" context name.)
export const updateDockerDesktopK8sKubeConfig = async (ddClient: v1.DockerDesktopClient) => {
  // kubectl config view --raw --minify --context docker-desktop
  let kubeConfig = await hostCli(ddClient, "kubectl", ["config", "view", "--raw", "--minify", "--context", DockerDesktop]);
  if (kubeConfig?.stderr) {
    console.log("error", kubeConfig?.stderr);
    return false;
  }

  // Call the backend to store the retrieved kubeconfig
  try {
    await ddClient.extension.vm?.service?.post("/store-kube-config", {data: kubeConfig?.stdout});
  } catch (err) {
    console.log("error", JSON.stringify(err));
    return false;
  }
  return true;
};

How the k9s Extension for Docker Desktop works

With loft.sh’s ‘Docker Desktop Kubernetes Service is Running’ control loop and the kubeconfig capture logic, we have the key ingredients to create our Kubernetes-based Docker Extensions.

The k9s Extension that I released for Docker Desktop is essentially these components, with a splash of k9s and ttyd (for the web terminal). It’s the loft.sh vcluster codebase, reduced to a minimum set of components with k9s added.

The source code is available at https://github.com/spurin/k9s-dd-extension

While loft.sh’s vcluster stores the kubeconfig file in a particular directory, the k9s Extension expands this further by combining this service with a Docker Volume. When the service receives the post request with the kubeconfig, it’s saved as expected.

The kubeconfig file is now in a shared volume that other containers, such as the k9s container, can access, as sketched in the following example:
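The following docker-compose.yaml sketch illustrates that wiring rather than reproducing the extension's exact configuration; service names, image names, and paths are assumptions (the real compose file lives in the GitHub repository above):

services:
  vm:
    image: spurin/k9s-dd-extension-vm    # hypothetical image name
    volumes:
      - kube-config:/root/.kube          # backend writes the captured kubeconfig here
  k9s:
    image: spurin/k9s-dd-extension-k9s   # hypothetical image name
    environment:
      - KUBECONFIG=/root/.kube/config    # k9s reads the shared kubeconfig
    ports:
      - "35781:35781"                    # ttyd web terminal running k9s
    volumes:
      - kube-config:/root/.kube          # same volume, shared between services

volumes:
  kube-config: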

When the k9s container starts, it reads the KUBECONFIG environment variable (defined in the container image). Then it exposes a web-based terminal service on port 35781 with k9s running.

If Kubernetes is running as expected in Docker Desktop, we reuse loft.sh's Kubernetes control loop to render an iframe pointing to the service on port 35781.

if (isDDK8sEnabled) {
  // Embed the ttyd web terminal running k9s, removing Docker Desktop's default spacing
  const myHTML = '<style>:root { --dd-spacing-unit: 0px; }</style><iframe src="http://localhost:35781" frameborder="0" style="overflow:hidden;height:99vh;width:100%" height="100%" width="100%"></iframe>';
  component = <React.Fragment>
    <div dangerouslySetInnerHTML={{ __html: myHTML }} />
  </React.Fragment>
} else {
  component = <Box>
    <Alert iconMapping={{
      error: <ErrorIcon fontSize="inherit"/>,
    }} severity="error" color="error">
      Seems like Kubernetes is not enabled in your Docker Desktop. Please take a look at the <a
        href="https://docs.docker.com/desktop/kubernetes/">Docker
        documentation</a> on how to enable the Kubernetes server.
    </Alert>
  </Box>
}

This renders k9s within the Extension pane when accessing the k9s Docker Extension.

Conclusion

With that, I hope that sharing my experiences creating the k9s Docker Extension inspires you. By leveraging the source code for the Kubernetes k9s Docker Extension (standing on the shoulders of loft.sh), we open the gate to countless opportunities.

You’ll be able to fast-track the creation of a Kubernetes Extension in Docker Desktop, through changes to just two files: the docker-compose.yaml (for your own container services) and the UI rendering in the control loop.

Of course, all of this wouldn't be possible without the minds behind vcluster. I'd like to give special thanks to loft.sh's Lian Li, whom I met at KubeCon and who introduced me to loft.sh/vcluster. And I'd also like to thank the development team, who are referenced both in the vcluster Extension source code and the forked version of k9s!

Thanks for reading – James Spurin

Not sure how to get started or want to learn more about Docker Extensions like this one? Check out the following additional resources:

- Learn how to create your own Docker Extension.
- Get started by installing Docker Desktop for Mac, Windows, or Linux.
- Read similar blogs covering Docker Extensions.
- Find more details on loft.sh.

You can also learn more about James, his top tips for working with Docker, and more in his feature on our Docker Captain Take 5 series. 
Quelle: https://blog.docker.com/feed/