Understand GCP Organization resource hierarchies with Forseti Visualizer

Google Cloud Platform (GCP) includes a powerful resource hierarchy that establishes who owns a specific resource, and through which you can apply access controls and organizational policies. But understanding the GCP resource hierarchy can be hard. For example, what does a GCP Organization "look" like? What networks exist within it? Do specific resources violate established security policies? To which service accounts and groups do you have access?

To help answer those questions, as well as others, we recently open-sourced Forseti Visualizer, which lets you, er, visualize and interact with your GCP Organization. Forseti Visualizer is built on top of the open-source Forseti Security, and we drew inspiration from our colleague Mike Zinni's post, Visualizing GCP Architecture using Forseti 2.0 and D3.js.

Forseti Visualizer does a number of things:

1. Dynamically renders your entire GCP Organization. Forseti Visualizer leverages Forseti Security's Inventory via connectivity to its Cloud SQL / MySQL database, so it's always up to date with the most recent inventory iteration.

2. Finds all networks, or a given set of resource types, across an Organization. Again using Forseti Inventory, Visualizer tackles dynamic data processing and filtering of resources. Through a simple series of clicks on filtered resource types and expanding the tree structure, we can quickly find all networks.

3. Finds violations. Using Forseti Scanner, Visualizer quickly shows you when a given resource is in violation of one of your Forseti policies.

4. Displays access permissions. With the help of Forseti IAM Explain and Visualizer, you can quickly figure out whether or not you have access to a given resource, a question that's otherwise difficult to answer, particularly if you have multiple projects.

The future for Forseti Visualizer

These are powerful features in and of themselves, but we're just getting started with Forseti Visualizer. Here's a sampling of other extensions and features that could be useful:

- Visualization scaling: internal performance testing shows degradation when over 500 resources are open and rendered on the page. An extension to limit the total number of resources and dynamically render content while scrolling through the visualization would help prevent this.
- Visualization spacing for vertical / horizontal / wide view
- Multiple sub-visualizations
- Full Forseti Explain functionality
- More detailed GCP resource metadata

When it comes to Forseti Visualizer, the sky's the limit. To get started with Forseti Visualizer, check the getting started pages. If you have feedback or suggestions on the visualization, interactivity, or future features, reach out to me on our Forseti Slack channel.
Source: Google Cloud Platform

How to use BigQuery ML for anomaly detection

Editor's note: Today's post comes from Or Hiltch, co-founder and CTO at Skyline AI, an investment manager for commercial real estate. Or describes how BigQuery ML can be used to perform unsupervised anomaly detection.

Anomaly detection is the process of identifying data or observations that deviate from the common behavior and patterns of our data, and it is used for a variety of purposes, such as detecting bank fraud or defects in manufacturing. There are many approaches to anomaly detection, and choosing the right method has a lot to do with the type of data we have. Since detecting anomalies is a fairly generic task, a number of different machine learning algorithms have been created to tailor the process to specific use cases. Here are a few common types:

- Detecting suspicious activity in a time series, for example a log file. Here, the dimension of time plays a huge role in the data analysis to determine what is considered a deviation from normal patterns.
- Detecting credit card fraud based on a feed of transactions in a labeled dataset of historical frauds. In this type of supervised learning problem, we can train a classifier to label a transaction as anomalous or fraudulent, given that we have a historical dataset of known transactions, authentic and fraudulent.
- Detecting a rare and unique combination of a real estate asset's attributes, for instance an apartment building from a certain vintage year with a rare unit mix. At Skyline AI, we use these kinds of anomalies to capture interesting rent growth correlations and track down interesting properties for investment.

When applying machine learning for anomaly detection, there are primarily three types of setups: supervised, semi-supervised and unsupervised. In our case, we did not have enough labeled data depicting known anomalies in advance, so we used unsupervised learning.

In this post, we'll demonstrate how to implement a simple unsupervised anomaly detection algorithm using BigQuery, without having to write a single line of code outside of BigQuery's SQL.

K-means clustering: using unsupervised machine learning for anomaly detection

One method of finding anomalies is by generating clusters in our data and analyzing those clusters. A clustering algorithm is an algorithm that, given n points over a numeric space, will find the best way to split them into k groups. The definition of the best way may vary by the type of algorithm, but in this post we'll focus on what it means for k-means clustering.

If we organize the groups so that the "center of mass" in each group represents the "purest" characteristics of that group, then the closer a data point is to that center, the more "standard" or "average" it is compared to other points in the group. This allows us to analyze each group and ask ourselves: which points in the group are furthest away from the center of mass, and therefore the most odd? In general, when clustering, we seek to:

- Minimize the maximum radius of a cluster. If our data contains a lot of logical differences, we want to capture these with as many clusters as possible.
- Maximize the average inter-cluster distance. We want our clusters to be different from each other. If our clusters don't represent differences well enough, they are useless.
- Minimize the variance within each cluster.
  Within each cluster, we want the data points to be as similar to each other as possible; this is what makes them members of the same group.

K-means, an unsupervised learning algorithm, is one of the most popular clustering algorithms. If you'd like to learn more about the internals of how k-means works, I would recommend walking through this great lab session.

Anomaly detection using clustering

The Iris dataset

The Iris dataset is one of the "hello world" datasets for ML, consisting of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor). Four features were measured from each sample: the length and the width of the sepals and petals, in centimeters. Based on the combination of these four features, statistician and biologist Ronald Fisher developed a linear discriminant model to distinguish the species from each other. In this tutorial, we'll be detecting anomalies within the Iris dataset: we will find the rarest combinations of sepal and petal lengths and widths for a given species of Iris. The dataset can be obtained here.

Creating the clusters with BigQuery ML

BigQuery ML lets us create and execute machine learning models in BigQuery using standard SQL queries. It uses the output of SQL queries as input for a training process for machine learning algorithms, including k-means, and for generating predictions using those models, all within BigQuery. After loading the Iris dataset into a table called public.iris_clusters, we can use the CREATE OR REPLACE MODEL statement to create a k-means model (the full statement is sketched at the end of this section). You can find more information on how to tune the model, and more, here.

Detecting anomalies in the clusters

Now that we have our clusters ready in BigQuery, how do we detect anomalies? Recall that in k-means, the closer a data point is to the center of the cluster (the "center of mass"), the more "average" it is compared to other data points in the cluster. This center is called the centroid. One approach we can take to find anomalies in the data is to find those data points which are furthest away from the centroid of their cluster.

Getting the distances of each point from its centroid

The ML.PREDICT function of a k-means model in BigQuery returns an array containing each data point and its distance from the nearest centroids. Using the UNNEST function we can flatten this array, taking only the minimum distance (the distance to the closest centroid); this produces the Distances table used below.

Setting a threshold for anomalies and grabbing the outliers

After we have prepared the Distances table, we are ready to find the outliers: the data points farthest away from their centroid in each cluster. To do this, we can use BigQuery's Approximate Aggregate Functions to compute the 95th percentile. The 95th percentile tells you the value for which 95% of the data points are smaller and 5% are bigger. We will look for those bigger 5%.

Putting it all together

Using Distances and Threshold together, we finally detect the anomalies in one query, as sketched below.
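The post's original query listings did not survive in this digest, so here is a minimal sketch of the full workflow described above, written in BigQuery standard SQL. The model name (public.iris_kmeans) and the column names (sepal_length, sepal_width, petal_length, petal_width, species) are illustrative assumptions; only the table name public.iris_clusters comes from the text.

    -- Train a k-means model on the four Iris measurements.
    CREATE OR REPLACE MODEL `public.iris_kmeans`
    OPTIONS (
      model_type = 'kmeans',
      num_clusters = 3,            -- one cluster per Iris species
      standardize_features = TRUE
    ) AS
    SELECT sepal_length, sepal_width, petal_length, petal_width
    FROM `public.iris_clusters`;

    -- Distances: each row with its distance to the nearest centroid,
    -- flattened out of the NEAREST_CENTROIDS_DISTANCE array via UNNEST.
    -- Threshold: the 95th-percentile distance, via APPROX_QUANTILES.
    -- The final SELECT returns the rows beyond that threshold: the anomalies.
    WITH Distances AS (
      SELECT
        species,
        sepal_length, sepal_width, petal_length, petal_width,
        centroid_id,
        (SELECT MIN(d.distance)
         FROM UNNEST(nearest_centroids_distance) AS d) AS distance_from_centroid
      FROM ML.PREDICT(
             MODEL `public.iris_kmeans`,
             (SELECT * FROM `public.iris_clusters`))
    ),
    Threshold AS (
      SELECT APPROX_QUANTILES(distance_from_centroid, 100)[OFFSET(95)] AS cutoff
      FROM Distances
    )
    SELECT d.*
    FROM Distances AS d
    CROSS JOIN Threshold AS t
    WHERE d.distance_from_centroid > t.cutoff
    ORDER BY d.distance_from_centroid DESC;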
Let's check how rare some of these anomalies really are. For the species Iris virginica, how rare is a sepal length of 7.7, a sepal width of 2.6, a petal length of 6.9 and a petal width of 2.3? Plotting a histogram of each feature for the species virginica and highlighting this sample shows that, while it's hard to mentally picture the rarity of a combination of values in four dimensions, the sample is indeed quite rare.

Summary

We've seen how several features of BigQuery (BigQuery ML, approximate aggregate functions, and arrays) can converge into one simple and powerful anomaly detection application with a wide variety of use cases, all without requiring us to write a single line of non-SQL code outside of BigQuery. Combined, these features of BigQuery empower data analysts and engineers to use AI through their existing SQL skills. You no longer need to export large amounts of data to spreadsheets or other applications, and in many cases, analysts no longer need to wait for limited resources from a data science team. To learn more about k-means clustering on BigQuery ML, read the documentation.
Source: Google Cloud Platform

Least privilege for Cloud Functions using Cloud IAM

Cloud Functions enables you to quickly build and deploy lightweight microservices and event-driven workloads at scale. Unfortunately, when building these services, security is often an afterthought, resulting in data leaks, unauthorized access, privilege escalation, or worse. Fortunately, Cloud Functions makes it easy to secure your services by enabling you to build least-privilege functions that minimize the surface area for an attack or data breach.

What is least privilege?

The principle of least privilege states that a resource should only have access to the exact resource(s) it needs in order to function. For example, if a service performs an automated database backup, the service should be restricted to read-only permissions on exactly one database. Similarly, if a service is only responsible for encrypting data, it should not have permissions for decrypting data. Providing too few permissions prevents the service from completing its task, but providing too many permissions can have rippling security ramifications. If an attacker gains access to a service that doesn't follow the principle of least privilege, they may be able to force the service to behave nefariously, for example by accessing customer data, deleting critical infrastructure, or stealing confidential business intelligence.

How do we achieve least privilege in Cloud Functions?

By default, all Cloud Functions in a Google Cloud project share the same runtime service account. This service account is bound to the function, and is used to generate credentials for accessing Cloud APIs and services. This default service account has the Editor role, which includes all read permissions, plus permissions for actions that modify state, such as changing existing resources. This enables a seamless development experience, but may include overly broad permissions for your functions, since most functions only need to access a subset of resources. To practice the principle of least privilege in Cloud Functions, you can create and bind a unique service account to each function, granting the service account only the most minimal set of permissions required to execute the function.

Calling GCP services

Consider the following example function, which is triggered when a file is uploaded to a Cloud Storage bucket. The function reads the contents of the file, transforms it, and then writes the transformed file back to the same Cloud Storage bucket. Reviewing the Cloud Storage IAM permissions, this function needs the following permissions on the Cloud Storage bucket:

- storage.objects.get
- storage.objects.create

We will use the ability to set a service account on an individual Cloud Function, giving each function its own service account with unique permissions. To do this (see the gcloud sketch after these steps):

1. Create a new service account. The service account name must be unique within the project. For more information, please see the managing service accounts documentation.

2. Grant the service account minimal IAM permissions. By default, service accounts have very minimal permissions. To use the service account with a function, we need to add bindings for the service account to the resources it needs to access.

3. Deploy a function that uses the new service account. When deploying the function, we use the --service-account flag to specify that the function should run as our custom service account instead of the default account.
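A minimal sketch of these three steps using the gcloud and gsutil CLIs. The names (transform-sa, my-project, my-bucket, transform_file) and the runtime are hypothetical placeholders, and this first pass grants the broad pre-built Storage Object Admin role, which the next section narrows down.

    # 1. Create a dedicated service account for the function.
    gcloud iam service-accounts create transform-sa \
        --display-name "storage transform function"

    # 2. Grant it the Storage Object Admin role, scoped to the one bucket.
    gsutil iam ch \
        serviceAccount:transform-sa@my-project.iam.gserviceaccount.com:roles/storage.objectAdmin \
        gs://my-bucket

    # 3. Deploy the function to run as that service account.
    gcloud functions deploy transform_file \
        --runtime nodejs10 \
        --trigger-bucket my-bucket \
        --service-account transform-sa@my-project.iam.gserviceaccount.com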
Since the function executes as the service account, the function inherits the permissions granted to the service account. The Cloud Storage Object Admin role includes the following permissions:

- storage.objects.create
- storage.objects.update
- storage.objects.delete
- storage.objects.get
- storage.objects.list
- storage.objects.getIamPolicy
- storage.objects.setIamPolicy

Permissions are very fine-grained access control rules. One or more permissions are usually combined to form a role. There are pre-built roles (like the Object Admin role above), and you also have the ability to generate custom roles with very specific sets of permissions. If you recall, our function only needs the create and get permissions, but the role we picked includes five additional permissions that are not needed. While we have gotten closer, we are still not fully practicing the principle of least privilege. There is no pre-built role that includes only the two permissions we need, so we need to create a custom role in our project and grant that role to the service account on the bucket:

1. Create a custom role with exactly the two permissions needed.

2. Grant the service account access to the custom role on the bucket.

3. Deploy the function bound to that service account.

Calling other functions

In addition to calling a Google Cloud service like Cloud Storage, you may want a function to call ("invoke") another function. The concept of least privilege also applies to restricting which functions or users can invoke your function. You can achieve this by using the Cloud IAM roles/cloudfunctions.invoker role. Set IAM policies on each function to enforce that only certain users, functions, or services can invoke the function. A good first step is to ensure that a function cannot be invoked by the public, for example by removing the special allUsers member from the roles/cloudfunctions.invoker role associated with the function. This makes your function private and restricts the ability to invoke the function unless the caller has cloudfunctions.invoker permissions. If a caller does not have this permission, the request is rejected, your function is not invoked, and you avoid billing charges.

Once a Cloud Function is private, you will need to add authentication when invoking it. Specifically, the caller needs a Google-signed identity token (a JSON Web Token) in the Authorization header of the outbound HTTP request. The audience (aud field) must be set to the URL of the function you are calling. One of the easiest ways to get such a token is by querying the compute metadata server. For example, suppose we have two functions, myFunction and otherFunction, where myFunction needs permission to invoke otherFunction. To accomplish this while also following the principle of least privilege, we would (see the sketch after these steps):

1. Create a new, dedicated service account.

2. Grant the service account permissions to invoke otherFunction (this assumes that otherFunction is already deployed and running).

3. Deploy myFunction bound to the service account that has permission to invoke otherFunction.
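Below is a hedged gcloud sketch of both sequences of steps above. The role ID storageGetCreate and the other names are hypothetical, the exact IAM-binding subcommands have moved between beta and GA across gcloud releases, and the metadata-server identity endpoint at the end is how a function can mint the Google-signed ID token described above.

    # Custom role with exactly the two permissions the function needs.
    gcloud iam roles create storageGetCreate \
        --project my-project \
        --title "Storage object get/create" \
        --permissions storage.objects.get,storage.objects.create

    # Grant the custom role to the service account on the bucket,
    # then redeploy the function as before.
    gsutil iam ch \
        serviceAccount:transform-sa@my-project.iam.gserviceaccount.com:projects/my-project/roles/storageGetCreate \
        gs://my-bucket

    # Make a function private by removing public invokers.
    gcloud functions remove-iam-policy-binding otherFunction \
        --member allUsers \
        --role roles/cloudfunctions.invoker

    # Let myFunction's dedicated service account invoke otherFunction.
    gcloud iam service-accounts create myfunction-sa
    gcloud functions add-iam-policy-binding otherFunction \
        --member serviceAccount:myfunction-sa@my-project.iam.gserviceaccount.com \
        --role roles/cloudfunctions.invoker
    gcloud functions deploy myFunction \
        --runtime nodejs10 \
        --trigger-http \
        --service-account myfunction-sa@my-project.iam.gserviceaccount.com

    # Inside myFunction: fetch a Google-signed ID token for otherFunction
    # from the metadata server and send it in the Authorization header.
    curl -s -H "Metadata-Flavor: Google" \
        "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=https://REGION-PROJECT.cloudfunctions.net/otherFunction"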
Cloud Run and App Engine (when using IAP) can also perform similar validation.

When calling other services

If you are calling a compute service that you control that does not have Cloud IAM policies restricting access (like a Compute Engine VM), you can follow the same steps to generate the token and then validate the Google-signed identity token yourself.

Next steps

We hope this post illustrates the importance of the principle of least privilege and provides concrete steps you can take to improve the security of your serverless functions. If you want to learn more about Cloud Functions security, you can watch Serverless Security Made Simple from Cloud Next 2019. If you want to learn more about how Google Cloud is enabling organizations to improve their security, including adopting the principle of least privilege, sign up for the Policy Intelligence alpha.
Source: Google Cloud Platform

Satellite navigation: Galileo is back online

An unfortunate confluence of several events led to the outage of the European satellite navigation system Galileo. In one control center a system had failed, while the other was paralyzed by a software update. (Galileo, GPS)
Source: Golem

Accelerate Application Delivery with Application Templates in Docker Desktop Enterprise

The Application Templates interface.

Docker Enterprise 3.0, now generally available, includes several new features that make it simpler and faster for developers to build and deliver modern applications in the world of Docker containers and Kubernetes. One such feature is the new Application Templates interface that is included with Docker Desktop Enterprise.

Application Templates enable developers to build modern applications using a library of predefined and organization-approved application and service templates, without requiring prior knowledge of Docker commands. By providing re-usable “scaffolding” for developing modern container-based applications, Application Templates accelerate developer onboarding and improve productivity.

The Application Templates themselves include many of the discrete components required for developing a new application, including the Dockerfile, custom base images, common compose service YAML, and application parameters (external ports and upstream image versions). They can even include boilerplate code and code editor configs.

With Application Templates, development leads, application architects, and security and operations teams can customize and share application and service templates that align to corporate standards. As a developer, you know you're starting from pre-approved templates that eliminate time-consuming configuration steps and error-prone manual setup. Instead, you have the freedom to customize and experiment so you can focus on delivering innovative apps. 

Application Templates In Action: A Short Demo

The Easiest and Fastest Way to Containerize Apps

Even if you’ve never run a Docker container before, there is a new GUI-based Application Designer interface in Docker Desktop Enterprise that makes it simple to view and select Application Templates, run them on your machine, and start coding. There’s also a docker template CLI interface (currently available in Experimental mode only), which provides the same functionality if you prefer command line to a GUI.
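For those who prefer the CLI, here is a quick, hedged sketch of what working with the experimental docker template plugin looks like. The subcommand names (ls, scaffold) and the template name below are illustrative assumptions; since the plugin is experimental, the exact syntax may differ by Docker Desktop Enterprise version.

    # List the application templates available to you (assumed subcommand).
    docker template ls

    # Scaffold a new application from a template into the current directory
    # (template name is hypothetical).
    docker template scaffold react-java-mysql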

Underneath the covers, Application Templates create a Docker Application, a new packaging format based on the Cloud Native Application Bundle specification. Docker Applications make it easy to bundle up all the container images, configuration, and parameters and share them on Docker Hub or Docker Trusted Registry. 
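As a rough illustration of that packaging flow, here is a hedged sketch using the experimental docker app CLI (the docker-app plugin); the application and repository names are hypothetical and the flags may vary by plugin version.

    # Create a new Docker Application package (produces a .dockerapp bundle).
    docker app init myapp

    # Push the bundled application to Docker Hub or Docker Trusted Registry.
    docker app push myapp --tag myorg/myapp:0.1.0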

Docker Desktop Enterprise comes pre-loaded with a library of common templates based on Docker Hub official images, but you can also create and use templates customized to your own organization’s specifications.

After developers select their template and scaffold it locally, source code can be mounted into the local containers to speed up the inner-loop code-and-test cycles. The containers run right on the developer's machine, so any changes to the code are visible immediately in the running application. 

Docker Desktop Enterprise with Application Templates is generally available now! Contact Docker today to get started.


Interested in finding out more?

- Learn more about Docker Desktop Enterprise
- Check out Docker Enterprise 3.0, the only end-to-end platform for building, sharing and running container-based applications
- Watch the full Docker Desktop Enterprise product demo
- Get the detailed documentation on Application Templates

Source: https://blog.docker.com/feed/

Borderlands 3 hands-on: Action on the path of sacrifice

Uncomplicated but tactically quite demanding shooter fun is at the heart of Borderlands 3, just as in its predecessors. In a preview version, Golem.de fought its way to a holy broadcasting station, where it encountered a musical end boss. (Borderlands, Gearbox)
Source: Golem