Public transit: Agreement on a 9-Euro-Ticket successor expected in October
There is broad support for a successor to the 9-Euro-Ticket, but the dispute over funding remains; a working group is tasked with resolving it. (9-Euro-Ticket, Politics/Law)
Source: Golem
We are pleased to announce general availability of virtual machine (VM) support in Anthos. VM support is available on Anthos for bare metal (now known as Google Distributed Cloud Virtual). Customers can now run VMs alongside containers on a single, unified, Google Cloud-connected platform in their data center or at the edge.

With VM support in Anthos, developers and operations teams can run VMs alongside containers on shared cloud-native infrastructure. It lets you achieve consistent container and VM operations with Kubernetes-style declarative configuration and policy enforcement, self-service deployment, observability, and monitoring, all from the familiar Google Cloud console, APIs, and command-line interfaces. The Anthos VM Runtime can be enabled on any Anthos on bare metal cluster (v1.12 or higher) at no additional charge.

During preview, we saw strong interest in VM support in Anthos for retail edge environments, where the infrastructure footprint is small and new container apps must run next to heritage VM apps on just a few hosts. In fact, a global Quick Service Restaurant (QSR), using a single VM of its existing point-of-sale solution, simulated throughput of more than 1,700 orders per hour for 10 hours, totaling more than 17,000 orders. The VM was running on the same hardware that exists at the store.

Why extend Anthos to manage VMs?

Many of our customers are modernizing their existing (heritage) applications using containers and Kubernetes. But few enterprise workloads are containerized today, and millions of business-critical workloads still run in VMs. While many VMs can be modernized by migrating to VMs in Google Cloud or to containers on GKE or Anthos, many can't, at least not right away. You might depend on a vendor-provided app that hasn't been updated to run in containers yet, need to keep a VM in a data center or edge location for low-latency connectivity to other local apps or infrastructure, or lack the budget to containerize a custom-built app today. How can you include these VMs in your container and cloud app modernization strategy? Anthos now provides consistent visibility, configuration, and security for VMs and containers.

Run and manage VMs and containers side by side

At the heart of VM support in Anthos is the Anthos VM Runtime, which extends and enhances the open source KubeVirt technology. We integrated KubeVirt with Anthos on bare metal to simplify the install and upgrade experience. We've provided tools to manage VMs using the command line, APIs, and the Google Cloud console, and we've integrated VM observability logs and metrics with the Google Cloud operations suite, including out-of-the-box dashboards and alerts. Networking enhancements include support for multiple network interfaces for VMs (multi-NIC, also compatible with Kubernetes pods) and IP/MAC stickiness to enable VM mobility. We've also added VLAN integration while enabling customers to apply L4 Kubernetes network policies for an on-premises, VPC-like microsegmentation experience. If you're an experienced VM admin, you can take advantage of VM high availability and simplified Kubernetes storage management for a familiar yet updated VM management experience.
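For the cluster-side enablement, a minimal sketch might look like the following. The VMRuntime custom resource and its fields are assumptions based on the Anthos on bare metal documentation, so treat the exact schema as illustrative and consult the product docs for the authoritative version:

    # Hedged sketch: enable the Anthos VM Runtime on an Anthos on bare metal
    # cluster (v1.12+) by toggling its VMRuntime custom resource.
    cat <<EOF | kubectl apply -f -
    apiVersion: vm.cluster.gke.io/v1
    kind: VMRuntime
    metadata:
      name: vmruntime
    spec:
      enabled: true
      # useEmulation: true   # only if the nodes lack hardware virtualization
    EOF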
VM lifecycle management is built into the Google Cloud console for a simplified user experience that integrates with your existing Anthos and Google Cloud authentication and authorization frameworks. You can view and manage VMs running on Anthos directly in the console.

Get started right away with new VM assessment and migration tools

How do you know if Anthos is the right technology for your VM workloads? Google Cloud offers assessment and migration tools to help you at every step of your VM modernization journey. Our updated fit assessment tool collects data about your existing VMware VMs and generates a detailed report. This no-cost report belongs to you and can be uploaded to the Google Cloud console for detailed visualization and historical views. The report provides a fit score for every VM that estimates the effort required to containerize the VM and migrate it to Anthos or GKE as a container, or to migrate it to Anthos as a VM, showing at a glance which VMs can simply be shifted to Anthos as VMs. Once you've identified the best VMs to migrate, use our no-cost, updated Migrate to Containers tool to migrate VMs to Anthos from the command line or the console.

Don't let business-critical VM workloads or virtualization management investments keep you from realizing your cloud and container app modernization goals. Now you can include your heritage VMs in your on-premises managed container platform strategy. Please reach out for a complimentary fit assessment and let us help you breathe new life into your most important VMs. To learn more about all the exciting innovations we're adding to Anthos, mark your calendar and join us at Google Cloud Next '22.

Related article: Anthos on-prem and on bare metal now power Google Distributed Cloud Virtual. Google Distributed Cloud Virtual uses Anthos on-prem or bare metal to create a hybrid cloud on your existing hardware.
Source: Google Cloud Platform
One principle of GitOps is that the desired state declarations are versioned and immutable, and Git repositories traditionally play that role as the source of truth. But can you have an alternative to a Git repository for storing and deploying your Kubernetes manifests via GitOps? What if you could package your Kubernetes manifests into a container image instead? What if you could reuse the same authentication and authorization mechanisms as for your container images?

To answer these questions, an understanding of OCI registries and OCI artifacts is needed. Simply put, OCI registries are the registries typically used for container images, but they can be expanded to store other types of data (aka OCI artifacts) such as Helm charts, Kubernetes manifests, Kustomize overlays, scripts, etc. Using OCI registries and OCI artifacts gives you the following advantages:

- Fewer tools to operate: a single artifact registry can store data types beyond container images.
- Built-in release archival: OCI registries give users two sets of URLs, mutable ones (tags) and immutable, content-addressable ones (digests); see the sketch below.
- A flourishing ecosystem: the formats are standardized and supported by dozens of providers, which helps users take advantage of new features and tools developed by the large Kubernetes community.

Given these benefits, and in addition to the existing support for files stored in Git repositories, we are thrilled to announce two new source formats supported by Config Sync 1.13:

- Sync OCI artifacts from Artifact Registry
- Sync Helm charts from OCI registries

Config Sync is an open source tool that provides GitOps continuous delivery for Kubernetes clusters. The Open Container Initiative (OCI) is an open governance structure for the express purpose of creating open industry standards around container formats and runtimes.
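To make the mutable/immutable distinction concrete, here is a sketch of addressing one artifact both ways with oras; the registry path and the digest placeholder are illustrative:

    # Pull by mutable tag (the tag can later be re-pointed to new content):
    oras pull us-east4-docker.pkg.dev/my-project/oci-artifacts/my-namespace-artifact:v1
    # Pull by immutable, content-addressable digest:
    oras pull us-east4-docker.pkg.dev/my-project/oci-artifacts/my-namespace-artifact@sha256:<digest>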
OCI artifacts give you the power of storing and distributing different types of data, such as Kubernetes manifests, Helm charts, and Kustomize overlays, in addition to container images via OCI registries. Throughout this blog, you will see how you can leverage the two new formats (OCI artifacts and Helm charts) supported by Config Sync, by using:

- oras and helm to package and push OCI artifacts
- Artifact Registry as the OCI registry storing the OCI artifacts
- a GKE cluster to host the synced OCI artifacts
- Config Sync installed in that GKE cluster to sync the OCI artifacts

Initial setup

First, you need a common setup for the two scenarios: configuring and securing access from the GKE cluster with Config Sync to the Artifact Registry repository.

Initialize the Google Cloud project you will use throughout this blog:

    PROJECT=SET_YOUR_PROJECT_ID_HERE
    gcloud config set project $PROJECT

Create a GKE cluster with Workload Identity, registered in a fleet to enable Config Management:

    CLUSTER_NAME=oci-artifacts-cluster
    REGION=us-east4
    gcloud services enable container.googleapis.com
    gcloud container clusters create ${CLUSTER_NAME} \
        --workload-pool=${PROJECT}.svc.id.goog \
        --region ${REGION}
    gcloud services enable gkehub.googleapis.com
    gcloud container fleet memberships register ${CLUSTER_NAME} \
        --gke-cluster ${REGION}/${CLUSTER_NAME} \
        --enable-workload-identity
    gcloud beta container fleet config-management enable

Install Config Sync in the GKE cluster:

    cat <<EOF > acm-config.yaml
    applySpecVersion: 1
    spec:
      configSync:
        enabled: true
    EOF
    gcloud beta container fleet config-management apply \
        --membership ${CLUSTER_NAME} \
        --config acm-config.yaml

Create an Artifact Registry repository to host OCI artifacts (--repository-format docker):

    CONTAINER_REGISTRY_NAME=oci-artifacts
    gcloud services enable artifactregistry.googleapis.com
    gcloud artifacts repositories create ${CONTAINER_REGISTRY_NAME} \
        --location ${REGION} \
        --repository-format docker

Create a dedicated Google Cloud service account with fine-grained access to that Artifact Registry repository via the roles/artifactregistry.reader role:

    GSA_NAME=oci-artifacts-reader
    gcloud iam service-accounts create ${GSA_NAME} \
        --display-name ${GSA_NAME}
    gcloud artifacts repositories add-iam-policy-binding ${CONTAINER_REGISTRY_NAME} \
        --location ${REGION} \
        --member "serviceAccount:${GSA_NAME}@${PROJECT}.iam.gserviceaccount.com" \
        --role roles/artifactregistry.reader

Allow Config Sync to synchronize resources for a specific RootSync:

    ROOT_SYNC_NAME=root-sync-oci
    gcloud iam service-accounts add-iam-policy-binding \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:${PROJECT}.svc.id.goog[config-management-system/root-reconciler-${ROOT_SYNC_NAME}]" \
        ${GSA_NAME}@${PROJECT}.iam.gserviceaccount.com

Log in to Artifact Registry so you can push OCI artifacts to it in a later step:

    gcloud auth configure-docker ${REGION}-docker.pkg.dev

Build and sync an OCI artifact

Now that you have completed your setup, let's illustrate the first scenario, where you want to sync a Namespace resource as an OCI image.

Create a Namespace resource definition:

    cat <<EOF > test-namespace.yaml
    apiVersion: v1
    kind: Namespace
    metadata:
      name: test
    EOF

Create an archive of that file:

    tar -cf test-namespace.tar test-namespace.yaml

Push that artifact to Artifact Registry. This tutorial uses oras, but there are other tools you could use, such as crane:

    oras push \
        ${REGION}-docker.pkg.dev/${PROJECT}/${CONTAINER_REGISTRY_NAME}/my-namespace-artifact:v1 \
        test-namespace.tar

Set up Config Sync to deploy this artifact from Artifact Registry:

    cat << EOF | kubectl apply -f -
    apiVersion: configsync.gke.io/v1beta1
    kind: RootSync
    metadata:
      name: ${ROOT_SYNC_NAME}
      namespace: config-management-system
    spec:
      sourceFormat: unstructured
      sourceType: oci
      oci:
        image: ${REGION}-docker.pkg.dev/${PROJECT}/${CONTAINER_REGISTRY_NAME}/my-namespace-artifact:v1
        dir: .
        auth: gcpserviceaccount
        gcpServiceAccountEmail: ${GSA_NAME}@${PROJECT}.iam.gserviceaccount.com
    EOF

Check the status of the sync with the nomos tool:

    nomos status --contexts $(kubectl config current-context)

Verify that the test Namespace is synced:

    kubectl get ns test

And voilà!
You just synced a Namespace resource as an OCI artifact with Config Sync.

Build and sync a Helm chart

Now, let's see how you can deploy a Helm chart hosted in a private Artifact Registry repository.

Create a simple Helm chart:

    helm create test-chart

Package the Helm chart:

    helm package test-chart --version 0.1.0

Push the chart to Artifact Registry:

    helm push \
        test-chart-0.1.0.tgz \
        oci://${REGION}-docker.pkg.dev/${PROJECT}/${CONTAINER_REGISTRY_NAME}

Set up Config Sync to deploy this Helm chart from Artifact Registry:

    cat << EOF | kubectl apply -f -
    apiVersion: configsync.gke.io/v1beta1
    kind: RootSync
    metadata:
      name: ${ROOT_SYNC_NAME}
      namespace: config-management-system
    spec:
      sourceFormat: unstructured
      sourceType: helm
      helm:
        repo: oci://${REGION}-docker.pkg.dev/${PROJECT}/${CONTAINER_REGISTRY_NAME}
        chart: test-chart
        version: 0.1.0
        releaseName: test-chart
        namespace: default
        auth: gcpserviceaccount
        gcpServiceAccountEmail: ${GSA_NAME}@${PROJECT}.iam.gserviceaccount.com
    EOF

Check the status of the sync with the nomos tool:

    nomos status --contexts $(kubectl config current-context)

Verify that the resources of the test-chart release are synced into the default Namespace:

    kubectl get all -n default

And voilà! You just synced a Helm chart with Config Sync.

Towards more scalability and security

In this blog, you synced both an OCI artifact and a Helm chart with Config Sync. OCI registries and OCI artifacts are new kids on the block, and they can work alongside the Git option depending on your needs and use cases. In one such pattern, Git still acts as the source of truth for the declarative configs, in addition to the well-established developer workflow it provides: pull requests, code reviews, branch strategies, etc. Continuous integration pipelines, triggered by pull requests or merges, run tests against the declarative configs and eventually push the OCI artifacts to an OCI registry. Finally, the continuous reconciliation of GitOps takes it from there, reconciling the desired state, now stored in an OCI registry, with the actual state running in Kubernetes. Your Kubernetes manifests packaged as OCI artifacts are pulled from OCI registries just like any container image.
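As an illustration, the publish step of such a pipeline can be as small as the following sketch, reusing the tar and oras commands from above; the artifact name, the manifests/ directory, and the COMMIT_SHA variable (assumed to be provided by your CI system) are illustrative:

    # Package the tested manifests and publish them as an OCI artifact;
    # Config Sync then reconciles the cluster from the registry, not from Git.
    tar -cf app-config.tar manifests/
    oras push \
        ${REGION}-docker.pkg.dev/${PROJECT}/${CONTAINER_REGISTRY_NAME}/app-config:${COMMIT_SHA} \
        app-config.tar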
This continuous reconciliation from OCI registries, without interacting with Git, has benefits in terms of scalability, performance, and security, since you can configure very fine-grained access to your OCI artifacts.

To get started, check out the two new features, Sync OCI artifacts from Artifact Registry and Sync Helm charts from OCI registries, today. You can also find another tutorial showing how to package and push a Helm chart to GitHub Container Registry with GitHub Actions, and then deploy that Helm chart with Config Sync.

Attending KubeCon + CloudNativeCon North America 2022 in October? Come check out our session Build and Deploy Cloud Native (OCI) Artifacts, the GitOps Way during the GitOpsCon North America 2022 co-located event on October 25th. Hope to see you there!

Config Sync is open source. We are open to contributions and bug fixes if you want to get involved in the development of Config Sync. You can also use the repository to track ongoing work, or build from source to try out bleeding-edge functionality.

Related article: Google Cloud at KubeCon EU: New projects, updated services, and how to connect. Engage with experts and learn more about Google Kubernetes Engine at KubeCon EU.
Source: Google Cloud Platform
Whalecome, dear reader, to our second installment of Dear Moby. In this developer-centric advice column, our Docker subject matter experts (SMEs) answer real questions from you — the Docker community. Think Dear Abby, but better, because it’s just for developers!
Since we announced this column, we’ve received a tidal wave of questions. And you can submit your own questions too!
In this edition, we'll be talking about the best way to develop when your production environment runs Kubernetes (spoiler alert: there's more than one way!).
Without further ado, let’s dive into today’s top question.
The question
What is the best way to develop if my prod environment runs Kubernetes? – Amos
The answer
SME: Engineering Manager and Docker Captain, Michael Irwin.
First and foremost, there isn’t one “best way” to develop, as there are quite a few options, each with its own tradeoffs.
Option #1 is to simply run Kubernetes locally!
Docker Desktop allows you to spin up a Kubernetes cluster with just a few clicks. If you need more flexibility in versioning, you can look into minikube or KinD (Kubernetes-in-Docker), which are both supported options; a version-pinning sketch follows the note below. Other fantastic tools like Tilt can also do wonders for your development experience by watching for file changes and rebuilding and redeploying container images (among other things).
Note: Docker Desktop currently only ships the latest version of Kubernetes.
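If you go the KinD route, pinning the local cluster to a specific Kubernetes version is a one-liner. A minimal sketch, where the node image tag is illustrative (pick one matching your prod version):

    # Create a local cluster pinned to a specific Kubernetes version,
    # then point kubectl at it.
    kind create cluster --name dev --image kindest/node:v1.25.0
    kubectl cluster-info --context kind-dev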
The biggest advantage of this option is that you can use manifests very similar to those in your prod environment. If you mount source code into your containers for development (dev), your manifests will need to be flexible enough to support different configurations for prod versus dev. That being said, you can also test most of the system the same way it runs in prod.
However, there are a few considerations to think about:
- Docker Desktop needs more resources (CPU/memory) to run Kubernetes.
- There's a good chance you'll need to learn more about Kubernetes to debug your application, which can add a bit of a learning curve.
- Even if you mirror the capabilities of your prod cluster locally, there's still a chance things will differ. This typically comes from custom controllers and resources, access or security policies, service meshes, ingress and certificate management, and/or other factors that can be hard to replicate locally.
Option #2 is to simply use Docker Compose.
While Kubernetes can be used to run containers, so can many other tools. Docker Compose provides the ability to spin up an entire development environment using a much smaller and more manageable configuration. It leverages the Compose specification, “a developer-focused standard for defining cloud and platform agnostic container-based applications.”
There are a couple of advantages to using Compose. It has a more gradual learning curve and a lighter footprint. You can simply run docker compose up and have everything running! Instead of having to set up Kubernetes, apply manifests, and potentially configure Helm, Compose is ready to go. This saves us from running a full orchestration system on our machines (which we wouldn't wish on anyone).
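For a sense of scale, a development Compose file can be as small as this sketch; the service names, images, and paths are illustrative:

    # compose.yaml: one app service built from the local Dockerfile,
    # plus a database, with source mounted in for live editing.
    services:
      app:
        build: .
        ports:
          - "8080:8080"
        volumes:
          - ./src:/app/src   # edit locally, run in the container
      db:
        image: postgres:15
        environment:
          POSTGRES_PASSWORD: dev-only-password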
However, using Compose does come with conditions:
It’s another tool in your arsenal. This means another set of manifests to maintain and update. If you need to define a new environment variable, you’ll need to add it to both your Compose file and Kubernetes manifests. You’ll have to vet changes against either prod or a staging environment since you’re not running Kubernetes locally.
To recap, it depends!
There are great teams building amazing apps with each approach. We’re super excited to explore how we can make this space better for all developers, so stay tuned for more!
Whale, that does it for this week’s issue. Have another question you’d like the Docker team to tackle? Submit it here!
Source: https://blog.docker.com/feed/
Docker Hub’s Export Members functionality is now available, giving you the ability to export a full list of all your Docker users into a single CSV file. The file will contain their username, full name, and email address — as well as the user’s current status and if the user belongs to a given team. If you’re an administrator, that means you can quickly view your entire organization’s usage of Docker.
In the Members Tab, you can download a CSV file by pressing the Export members button. The file can be used to verify user status, confirm team structure, and quickly audit Docker usage.
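Once downloaded, the CSV also lends itself to quick command-line audits. A hypothetical sketch, where the column order is an assumption (check the header row of your export and adjust the field index):

    # Count members per status, assuming status is the 4th column.
    awk -F',' 'NR > 1 { print $4 }' members.csv | sort | uniq -c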
The Export Members feature is only available for Docker Business subscribers. This feature will help organizations better track their utilization of Docker, while also simplifying the steps needed for an administrator to review their users within Docker Hub.
At Docker, we continually listen to our customers, and strive to build the tools needed to make them successful. Feel free to check out our public roadmap and leave feedback or requests for more features like this!
Learn more about exporting users on our docs page, or sign in to your Docker Hub account to try it for yourself.
Source: https://blog.docker.com/feed/
Up to 100 people can perceive the 3D effect on the Looking Glass 65, which is achieved without external 3D glasses. (3D display, Display)
Source: Golem
Investments in US companies will be scrutinized more closely in the future, with the aim of safeguarding technological leadership, national security, and data privacy. (CFIUS, quantum computers)
Source: Golem
A brief roundup of what else happened on September 19, 2022, beyond the big headlines. (Short news, AMD)
Source: Golem
The electric vehicle, equipped with fuel cells from Hyundai, is slated to ship in a small production run in 2023. (Fuel cell vehicle, Technology)
Source: Golem
A plant in Karlsruhe is expected to produce more than 60 million liters of synthetic fuel per year. But there is still one big catch. (Synthetic fuels, Electric car)
Source: Golem