Kickstarter: The chessboard that moves the pieces itself
Phantom looks like an ordinary wooden chess set. However, the board game can move pieces on its own and be controlled by voice. (Board games, AI)
Source: Golem
Every year, millions of tons of plastic waste end up in the oceans. A recent study warns of the consequences for the environment. (Environmental protection, Science)
Source: Golem
Star Trek fans can look forward to the 4K remaster of the first four feature films this fall, including bonus material and digital copies. (Star Trek, Blu-ray)
Source: Golem
Source: BuzzFeed ("The Mushroom Scammer: Fake Identities, Twisted Science, And A Scheme To Save The World")
The Friday Five is a weekly Red Hat® blog post with 5 of the week’s top news items and ideas from or about Red Hat and the technology industry. Consider it your weekly digest of things that caught our eye.
Source: CloudForms
This is the fifth and final post in a multi-part series about the Kubernetes Resource Model. Check out parts 1, 2, 3, and 4 to learn more.

In part 2 of this series, we learned how the Kubernetes Resource Model works, and how the Kubernetes control plane takes action to ensure that your desired resource state matches the running state. Up until now, that “running resource state” has existed inside the world of Kubernetes: Pods, for example, run on Nodes inside a cluster. The exception to this is any core Kubernetes resource that depends on your cloud provider. For instance, GKE Services of type LoadBalancer depend on Google Cloud network load balancers, and GKE has a Google Cloud-specific controller that will spin up those resources on your behalf.

But if you’re operating a Kubernetes platform, it’s likely that you have resources that live entirely outside of Kubernetes. You might have CI/CD triggers, IAM policies, firewall rules, or databases. The first post of this series introduced the platform diagram below, and asserted that “Kubernetes can be the powerful declarative control plane that manages large swaths” of that platform. Let’s close that loop by exploring how to use the Kubernetes Resource Model to configure and provision resources hosted in Google Cloud.

(Platform diagram)

Why use KRM for hosted resources?

Before diving into the “what” and “how” of using KRM for cloud-hosted resources, let’s first ask “why.” There is already an active ecosystem of infrastructure-as-code tools, including Terraform, that can manage cloud-hosted resources. Why use KRM to manage resources outside of the cluster boundary? Three big reasons.

The first is consistency. The last post explored ways to ensure consistency across multiple Kubernetes clusters, but what about consistency between Kubernetes resources and cloud resources? If you have org-wide policies you’d like to enforce on Kubernetes resources, chances are that you also have policies around hosted resources. So one reason to manage cloud resources with KRM is to standardize your infrastructure toolchain, unifying your Kubernetes and cloud resource configuration into one language (YAML), one Git config repo, one policy enforcement mechanism.

The second reason is continuous reconciliation. One major advantage of Kubernetes is its control-loop architecture. So if you use KRM to deploy a hosted firewall rule, Kubernetes will work constantly to make sure that resource is always deployed to your cloud provider, even if it gets manually deleted.

A third reason to consider using KRM for hosted resources is the ability to integrate tools like kustomize into your hosted resource specs, allowing you to customize resource specifications without templating languages.

These benefits have resulted in a new ecosystem of KRM tools designed to manage cloud-hosted resources, including the Crossplane project, as well as first-party tools from AWS, Azure, and Google Cloud. Let’s explore how to use Google Cloud Config Connector to manage GCP-hosted resources with KRM.

Introducing Config Connector

Config Connector is a tool designed specifically for managing Google Cloud resources with the Kubernetes Resource Model. It works by installing a set of GCP-specific resource controllers onto your GKE cluster, along with a set of Kubernetes Custom Resources for Google Cloud products, from Cloud DNS to Pub/Sub. How does it work?
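As a minimal, hedged sketch of what one of these Custom Resources can look like, consider a Pub/Sub topic declared in KRM form; the pubsub.cnrm.cloud.google.com API group follows Config Connector’s naming convention, and the project ID and topic name here are placeholder assumptions rather than values from the original post:

# Hedged sketch: a Pub/Sub topic as a Config Connector custom resource.
apiVersion: pubsub.cnrm.cloud.google.com/v1beta1
kind: PubSubTopic
metadata:
  name: example-topic                                 # placeholder topic name
  annotations:
    cnrm.cloud.google.com/project-id: my-gcp-project  # assumed project ID

Applying a file like this to a cluster running Config Connector (with kubectl apply, or by syncing it from a config repo) asks the controller to create the corresponding Pub/Sub topic in that project and keep it reconciled.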
Let’s say that a security administrator at Cymbal Bank wants to start working more closely with the platform team to define and test Policy Controller constraints. But they don’t have access to a Linux machine, which is the operating system used by the platform team. The platform team could address this by manually setting up a Google Compute Engine (GCE) Linux instance for the security admin. But with Config Connector, the platform team can instead create a declarative KRM resource for a GCE instance, commit it to the config repo, and Config Connector will spin up the instance on their behalf.

What does this declarative resource look like? A Config Connector resource is just a regular Kubernetes-style YAML file, in this case a custom resource called ComputeInstance. In the resource spec, the platform team can define specific fields, like what GCE machine type to use. Once the platform team commits this resource to the Config Sync repo, Config Sync will deploy the resource to the cymbal-admin GKE cluster, and Config Connector, running on that same cluster, will spin up the GCE resource represented in the file.

This KRM workflow for cloud resources opens the door for powerful automation, like custom UIs to automate resource requests within the Cymbal Bank org.

Integrating Config Connector with Policy Controller

By using Config Connector to manage Google Cloud-hosted resources as KRM, you can adopt Policy Controller to enforce guardrails across your cloud and Kubernetes resources. Let’s say that the data analytics team at Cymbal Bank is beginning to adopt BigQuery. While the security team is approving production usage of that product, the platform team wants to make sure no real customer data is imported. Together, Config Connector and Policy Controller can set up guardrails for BigQuery usage within Cymbal Bank.

Config Connector supports BigQuery resources, including Jobs, Datasets, and Tables. The platform team can work with the analytics team to define a test dataset, containing mocked data, as KRM, pushing those resources to the Config Sync repo as they did with the GCE instance resource. From there, the platform team can create a custom Constraint Template for Policy Controller, limiting the allowed Cymbal datasets to only the pre-vetted mock dataset. These guardrails, combined with IAM, can allow your organization to adopt new cloud products safely: not only defining who can set up certain resources, but, within those resources, what field values are allowed.

Manage existing GCP resources with Config Connector

Another useful feature of Config Connector is that it supports importing existing Google Cloud resources into KRM format, allowing you to bring live-running resources into the management domain of Config Connector. You can use the config-connector command line tool to do this, exporting specific resource URIs into static files; a sketch of what such an exported file might look like appears below. From here, we can push these KRM resources to the config repo, and allow Config Sync and Config Connector to start lifecycling the resources on our behalf. Once imported, the cymbal-dev Cloud SQL database carries the “managed-by-cnrm” label, indicating that it’s now being managed by Config Connector (CNRM = “cloud-native resource management”).

This resource export tool is especially useful for teams looking to try out KRM for hosted resources, without having to invest in writing a new set of YAML files for their existing resources.
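To give a feel for that exported format, here is a hedged sketch of what a Cloud SQL instance might look like as a Config Connector KRM resource; the sql.cnrm.cloud.google.com API group follows Config Connector’s naming convention, and the project ID, database version, region, and machine tier are illustrative placeholders rather than the actual Cymbal Bank values:

# Hedged sketch: an exported Cloud SQL instance in Config Connector's KRM format.
apiVersion: sql.cnrm.cloud.google.com/v1beta1
kind: SQLInstance
metadata:
  name: cymbal-dev                                    # instance name from the example above
  annotations:
    cnrm.cloud.google.com/project-id: my-gcp-project  # assumed project ID
spec:
  databaseVersion: POSTGRES_13                        # illustrative engine version
  region: us-central1                                 # illustrative region
  settings:
    tier: db-custom-1-3840                            # illustrative machine tier

Committing a file like this to the config repo lets Config Sync and Config Connector reconcile the instance the same way they reconcile the ComputeInstance example above.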
And if you’re ready to adopt Config Connector for lots of existing resources, the tool has a bulk export option as well. Overall, while managing hosted resources with KRM is still a newer paradigm, it can provide lots of benefits for resource consistency and policy enforcement. Want to try out Config Connector yourself? Check out the part 5 demo.

This post concludes the Build a Platform with KRM series. Hopefully these posts and demos provided some inspiration on how to build a platform around Kubernetes, with the right abstractions and base-layer tools in mind. Thanks for reading, and stay tuned for new KRM products and features from Google.
Source: Google Cloud Platform
AWS Lambda now supports Amazon MQ for RabbitMQ as an event source, allowing customers to quickly and easily build applications that are triggered by messages in their RabbitMQ queues. Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that simplifies setting up and operating message brokers in the cloud. Customers can quickly and easily build applications with Lambda functions that are invoked based on messages posted to Amazon MQ message brokers, without having to worry about provisioning or managing servers.
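As an illustration of how this integration is typically wired up, here is a hedged, CloudFormation-style YAML sketch of an event source mapping from a RabbitMQ queue on Amazon MQ to a Lambda function; the function name, broker ARN, queue name, and Secrets Manager secret holding the broker credentials are all placeholder assumptions:

# Hedged sketch: mapping a RabbitMQ queue on Amazon MQ to a Lambda function.
Resources:
  OrdersQueueEventSource:
    Type: AWS::Lambda::EventSourceMapping
    Properties:
      FunctionName: process-orders                    # placeholder Lambda function name
      EventSourceArn: arn:aws:mq:eu-west-1:123456789012:broker:my-broker:b-1234  # placeholder broker ARN
      Queues:
        - orders                                      # placeholder RabbitMQ queue name
      BatchSize: 10
      SourceAccessConfigurations:
        - Type: BASIC_AUTH
          URI: arn:aws:secretsmanager:eu-west-1:123456789012:secret:rabbitmq-creds  # placeholder secret with the broker credentials

With a mapping like this in place, the Lambda service polls the queue and invokes the function with batches of messages, so the application needs no polling code or servers of its own.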
Source: aws.amazon.com
Starting today, Amazon EC2 M6g, C6g, R6g, and T4g instances are available in the EU (Paris) and EU (Milan) Regions. In addition, Amazon EC2 M6g instances are available in the Middle East (Bahrain) Region.
Source: aws.amazon.com
Amazon RDS for Oracle now supports Oracle Management Agent (OMA) version 13.5 for Oracle Enterprise Manager (OEM) Cloud Control 13c Release 5. OEM 13c provides web-based tools for monitoring and managing your Oracle databases. Amazon RDS for Oracle installs OMA, which then communicates with your Oracle Management Service (OMS) to provide monitoring information. Customers with OMS 13.5 can now manage their databases by installing OMA 13.5.
Source: aws.amazon.com
Amazon Kendra is an intelligent search service powered by machine learning that helps organizations provide their customers and employees with the information they need, when they need it. Starting today, AWS customers can use the Amazon Kendra Web Crawler to index web pages.
Source: aws.amazon.com