Corona app: Updates to the tracing API from Google and Apple
Google and Apple present minor changes to their Bluetooth API for corona tracing apps; nothing is said about possible centralized approaches. (Corona-App, Google)
Source: Golem
itnext.io – Serverless on Kubernetes reduces repetitive configuration in a cloud-provider-independent way. It’s just the result of continuously automating away manual work. When we’re talking about serverless on…
Source: news.kubernauts.io
partlycloudy.blog – Running your Kubernetes cluster in Azure integrated with Azure Active Directory as your identity provider is a best practice in terms of security and compliance. You can give (and remove – when people…
Source: news.kubernauts.io
medium.com – tl;dr In alignment with our development of microservices applications, which consist of many individual services that are shipped frequently, modern operations teams need to move away from complex m…
Source: news.kubernauts.io
itnext.io – Gone are the days of contending with dozens of README files just to get the right version of helm and to install a chart with sane defaults. arkade (ark for short) provides a simple CLI with strongly…
Source: news.kubernauts.io
blog.kubernauts.io – In my previous post we saw how to get an external IP for load balancing on a k3s cluster running in Multipass VMs, and I promised to show you how MetalLB can work with k3d-launched k3s clusters …
Source: news.kubernauts.io
A small talk with the Kommander about Cattle AWS vs. Cattle EKS

The Kommander

I met this lovely captain recently on Kos Island and had a small talk with her/him regarding our Kubernetes projects. I spoke to her/him about a dilemma which we’re facing these days. This write-up is a very short abstract of our conversation, which I’d like to share with you. In the meantime our lovely captain has joined us as “The Kommander” and has the code name TK8. We’re delighted to welcome “The Kommander” as a Kubernaut to the Kubernauts community together with you!

The dilemma

I spoke to the Kommander about the dilemma of deciding how to run and manage multiple Kubernetes clusters using OpenShift, RKE, EKS or Kubeadm (with or without ClusterAPI support) on AWS for a wide range of applications and services in the e-mobility field, with the following requirements:

- We should be able to use AWS spot instances with auto-scaling support, or other choices like reserved instances, to minimize our costs for our highly scalable apps with high peaks in production
- We should be able to avoid licensing costs for an enterprise-grade Kubernetes subscription in the first phase
- We should be able to rely on great community support and contribute our work to the community
- We should be able to buy enterprise support at any time
- We should be able to deploy Kubernetes through a declarative approach using Terraform and a single config.yaml manifest, the GitOps way
- We should be able to upgrade our clusters easily without downtime
- We should be able to recover our clusters from disaster in regions within a few hours
- We need to manage all of our clusters and users through a single interface
- We should be able to use SAML for SSO
- We need to address day-2 operation needs
- We should be able to move our workloads to other cloud providers like Google, Azure, DigitalOcean, Hetzner, etc. within a few days with the same Kubernetes version

With these requirements in mind, we had to decide to go either with EKS or with Rancher’s RKE and use Rancher Server to manage all of our clusters and users.

OpenShift

The main reason we could not go with OpenShift was that OpenShift does not support any hosted Kubernetes provider like EKS, needs high-cost licensing in the first phase, and supports only reserved instances, not spot instances, on AWS. OKD, the open-source version of OpenShift, is not available at this time of writing, and one can’t switch from OKD to OCP seamlessly and get enterprise support later. The fact that OpenShift is not OS-agnostic is also something we don’t like so much. What we do like about OpenShift is the self-hosted capability with Operator Framework support and its CRI-O integration.

Kubeadm

Using vanilla Kubeadm with ClusterAPI is something we have been looking into for about nine months now and we love it very much. We were thinking of going with Kubeadm and using Rancher to manage our clusters, but with Kubeadm itself there is currently no enterprise support option available, and we’re not sure how spot instances and auto-scaling support work with Kubeadm at this time. What we like very much about Kubeadm is that it supports all container runtimes (Docker, containerd and CRI-O) and that the self-hosted capability is coming soon.

EKS

EKS is still one of our favourites on the list, and we have managed to automate EKS deployments with the TK8 Cattle EKS Provisioner and the Terraform Provider Rancher2, but unfortunately, at this time of writing, without auto-scaling and spot instance support, since the Rancher API doesn’t provide spot instance support at this time; we hope to get it in one of the next Rancher 2.3.x releases.

EKS itself provides spot instance support, and we were thinking about using the eksctl tool to deploy EKS clusters with spot instances and auto-scaling support, then importing our EKS clusters into Rancher, implementing IAM and SSO integration, and handling upgrades through eksctl. But with that we have to deal with two tools, eksctl and Rancher, and we are not sure how EKS upgrades will affect our Rancher Server, since Rancher can’t deal with EKS upgrades if we deploy EKS with eksctl. But I think this should not be a no-go issue for us at this time.

The main reason why we’d love to go with EKS is that we get a fully managed control plane and only have to deal with managing and patching our worker nodes. The downside of EKS is that we often have to run an older version of Kubernetes, and if we want to move our workloads to other cloud providers, this might become an issue. And for sure, vendor lock-in is something we are concerned about.

Rancher

Using Rancher with the Terraform Provider Rancher2 and the TK8 Cattle AWS Provisioner to deploy RKE clusters with spot instance support with tk8ctl on AWS is most probably what we are going to go with at this time, despite the fact (and dilemma) that we have to manage our control plane along with the stacked etcd nodes on our own.

But with this last option we get the full range of benefits through Rancher and can move our RKE clusters with the same Kubernetes version to any cloud provider and deal with upgrades and our day-2 operation needs with a single tool. Other products like RancherOS, Longhorn, Submariner, k3s and k3os, and the great community traction and support on Slack, give us the peace of mind to go with Rancher, either with or without EKS!

After these explanations I got this nice feedback from the Kommander, which I wanted to share with you. Thank you for your time reading this post :-)

Thanks God, it’s Rancher!

Try it

If you’d like to learn about TK8 and how it can help you to build production-ready Kubernetes clusters with the Terraform Provider Rancher2, please refer to the links under the related resources.

Questions?

Feel free to join us on the Kubernauts Slack and ask any questions in the #tk8 channel.

Related resources

- TK8: The Kommander
- TK8 Cattle AWS Provisioner
- TK8 Cattle EKS Provisioner
- A Buyer’s Guide to Enterprise Kubernetes Management Platforms

Credits

My special thanks go to my awesome colleague Shantanu Deshpande, who worked in his spare time on TK8 Cattle AWS and EKS development, and for sure to the brilliant team at Rancher Labs and the whole Rancher Kommunity!
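To make the Rancher option above a bit more concrete: with the TK8 Cattle AWS Provisioner the workflow is meant to be driven from a single config.yaml and the tk8ctl CLI, much like the Cattle EKS walkthrough later in this digest. The lines below are only an illustrative sketch under assumptions; the provisioner name cattle-aws, the example file path and the credential variables are not taken from this post.

# Illustrative sketch only: declarative RKE-on-AWS deployment through Rancher with TK8.
# Assumes tk8ctl is installed and a Rancher Server is reachable; the provisioner
# name "cattle-aws" and the example file name are assumptions, not verified options.
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."

# One config.yaml describes the whole cluster and can be versioned in Git (the GitOps way)
cp example/config-cattle-aws.yaml config.yaml    # hypothetical example file

tk8ctl cluster install cattle-aws                # create the RKE cluster via the Rancher API
tk8ctl cluster destroy cattle-aws                # tear it down again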
Source: blog.kubernauts.io
Octant Simplified: a quick overview and install in less than 5 minutes

Definition from the docs: Octant is a tool for developers to understand how applications run on a Kubernetes cluster. It aims to be part of the developer’s toolkit for gaining insight and approaching complexity found in Kubernetes. Octant offers a combination of introspective tooling, cluster navigation, and object management along with a plugin system to further extend its capabilities.

Octant is one of the recent projects by VMware that aims to simplify the Kubernetes view for developers. Developers can now see what is happening in the cluster when they deploy their workloads. Let us set up Octant on a Katacoda cluster and see what capabilities it provides out of the box. This tutorial is a quick overview of the latest version of Octant recently launched by the team, which is v0.10.0.

Steps:

1. Go to https://www.katacoda.com/courses/kubernetes/playground

2. Download the latest Octant release, v0.10.0:

master $ wget https://github.com/vmware-tanzu/octant/releases/download/v0.10.0/octant_0.10.0_Linux-64bit.tar.gz
master $ ls
octant_0.10.0_Linux-64bit.tar.gz

# Unpack
master $ tar -xzvf octant_0.10.0_Linux-64bit.tar.gz
octant_0.10.0_Linux-64bit/README.md
octant_0.10.0_Linux-64bit/octant

# Verify
master $ cp ./octant_0.10.0_Linux-64bit/octant /usr/bin/
master $ octant version
Version: 0.10.0
Git commit: 72e66943d660dc7bdd2c96b27cc141f9c4e8f9d8
Built: 2020-01-24T00:56:15Z

Run Octant: to run Octant you can simply run the octant command; by default it runs on localhost:7777. If you need to pass additional arguments (like running on a different port), run:

master $ OCTANT_DISABLE_OPEN_BROWSER=true OCTANT_LISTENER_ADDR=0.0.0.0:8900 octant
2020-01-26T10:17:29.135Z INFO module/manager.go:79 registering action {"component": "module-manager", "actionPath": "overview/serviceEditor", "module-name": "overview"}
2020-01-26T10:17:29.135Z INFO module/manager.go:79 registering action {"component": "module-manager", "actionPath": "overview/startPortForward", "module-name": "overview"}
2020-01-26T10:17:29.136Z INFO module/manager.go:79 registering action {"component": "module-manager", "actionPath": "overview/stopPortForward", "module-name": "overview"}
2020-01-26T10:17:29.137Z INFO module/manager.go:79 registering action {"component": "module-manager", "actionPath": "overview/commandExec", "module-name": "overview"}
2020-01-26T10:17:29.137Z INFO module/manager.go:79 registering action {"component": "module-manager", "actionPath": "overview/deleteTerminal", "module-name": "overview"}
2020-01-26T10:17:29.138Z INFO module/manager.go:79 registering action {"component": "module-manager", "actionPath": "deployment/configuration", "module-name": "overview"}
2020-01-26T10:17:29.139Z INFO module/manager.go:79 registering action {"component": "module-manager", "actionPath": "overview/containerEditor", "module-name": "overview"}
2020-01-26T10:17:29.140Z INFO module/manager.go:79 registering action {"component": "module-manager", "actionPath": "octant/deleteObject", "module-name": "configuration"}
2020-01-26T10:17:29.140Z INFO dash/dash.go:391 Using embedded Octant frontend
2020-01-26T10:17:29.143Z INFO dash/dash.go:370 Dashboard is available at http://[::]:8900

You can see that Octant has started. Now open port 8900 on the Katacoda Kubernetes playground to see the Octant dashboard. To open a port from Katacoda, click on the + and select “View HTTP port 8080 on Host 1”, then change the port to 8900.

(Octant dashboard)

As you can see, the whole cluster is visible with easy-to-navigate options. You can navigate through the namespaces and see the pods running. Just run a few pods:

kubectl run nginx --image nginx
kubectl run -i -t busybox --image=busybox --restart=Never

Now go to the workloads section and you can see the pods. Drilling into a pod gives a much deeper look. Let us take a full view of the busybox pod and see what you can easily see via the Octant dashboard.

(Overall view; resource viewer and pod logs)

You can see how easy it is to view the logs, connected resources, overall summary, and the YAML file. Another thing you can do with Octant is write your own plugins and view them in Octant for added functionality.

This was a brief overview of Octant and how you can set it up on a Katacoda cluster in less than 5 minutes.

Octant documentation: https://octant.dev/docs/master/

Octant’s other communication channels for help and contribution:
- Kubernetes Slack in the #octant channel
- Twitter
- Google group
- GitHub issues

Saiyam Pathak
https://www.linkedin.com/in/saiyam-pathak-97685a64/
https://twitter.com/SaiyamPathak
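If you want to try the same thing on a local workstation instead of Katacoda, here is a condensed sketch of the steps above. The Homebrew formula name and the use of the KUBECONFIG variable are assumptions based on common practice, not taken from the post; the two OCTANT_* environment variables are the ones used in the walkthrough above.

# Condensed local-workstation variant (assumptions: a Homebrew formula named
# "octant" exists; KUBECONFIG selects the cluster to inspect).
brew install octant        # macOS; on Linux, download the release tarball as shown above
octant version

# Serve the dashboard on a fixed port without auto-opening a browser
export KUBECONFIG=$HOME/.kube/config
OCTANT_DISABLE_OPEN_BROWSER=true OCTANT_LISTENER_ADDR=127.0.0.1:7777 octant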
Source: blog.kubernauts.io
Permission Manager: RBAC management for Kubernetes

(Photo by Kyle Glenn on Unsplash)

I came across a GitHub repository implemented by the awesome folks at Sighup.IO for managing user permissions for a Kubernetes cluster easily via a web UI.

GitHub repo: https://github.com/sighupio/permission-manager

With Permission Manager, you can create users, assign namespaces/permissions, and distribute kubeconfig YAML files via a nice and easy web UI.

The project works on the concept of templates that you can create and then use for different users. A template corresponds directly to a ClusterRole. In order to create a new template you need to define a ClusterRole with the prefix template-namespaced-resources___. The default templates are present in the k8s/k8s-seeds directory.

Example template:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: template-namespaced-resources___developer
rules:
  - apiGroups:
      - "*"
    resources:
      - "configmaps"
      - "endpoints"
      - "persistentvolumeclaims"
      - "pods"
      - "pods/log"
      - "pods/portforward"
      - "podtemplates"
      - "replicationcontrollers"
      - "resourcequotas"
      - "secrets"
      - "services"
      - "events"
      - "daemonsets"
      - "deployments"
      - "replicasets"
      - "ingresses"
      - "networkpolicies"
      - "poddisruptionbudgets"
      # - "rolebindings"
      # - "roles"
    verbs:
      - "*"

Let us now deploy it on the Katacoda Kubernetes playground and see Permission Manager in action.

Step 1: Open https://www.katacoda.com/courses/kubernetes/playground

Step 2: git clone https://github.com/sighupio/permission-manager.git

Step 3: Change the deploy.yaml file

master $ kubectl cluster-info
Kubernetes master is running at https://172.17.0.14:6443

Update the deployment file k8s/deploy.yaml with the CONTROL_PLANE_ADDRESS from the result of the above command.

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: permission-manager
  name: permission-manager-deployment
  labels:
    app: permission-manager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: permission-manager
  template:
    metadata:
      labels:
        app: permission-manager
    spec:
      serviceAccountName: permission-manager-service-account
      containers:
        - name: permission-manager
          image: quay.io/sighup/permission-manager:1.5.0
          ports:
            - containerPort: 4000
          env:
            - name: PORT
              value: "4000"
            - name: CLUSTER_NAME
              value: "my-cluster"
            - name: CONTROL_PLANE_ADDRESS
              value: "https://172.17.0.14:6443"
            - name: BASIC_AUTH_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: auth-password-secret
                  key: password
---
apiVersion: v1
kind: Service
metadata:
  namespace: permission-manager
  name: permission-manager-service
spec:
  selector:
    app: permission-manager
  ports:
    - protocol: TCP
      port: 4000
      targetPort: 4000
  type: NodePort

Step 4: Deploy the manifests

cd permission-manager
master $ kubectl apply -f k8s/k8s-seeds/namespace.yml
namespace/permission-manager created
master $ kubectl apply -f k8s/k8s-seeds
secret/auth-password-secret created
namespace/permission-manager unchanged
clusterrole.rbac.authorization.k8s.io/template-namespaced-resources___operation created
clusterrole.rbac.authorization.k8s.io/template-namespaced-resources___developer created
clusterrole.rbac.authorization.k8s.io/template-cluster-resources___read-only created
clusterrole.rbac.authorization.k8s.io/template-cluster-resources___admin created
rolebinding.rbac.authorization.k8s.io/permission-manager-service-account-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/permission-manager-service-account-rolebinding created
serviceaccount/permission-manager-service-account created
clusterrole.rbac.authorization.k8s.io/permission-manager-cluster-role created
customresourcedefinition.apiextensions.k8s.io/permissionmanagerusers.permissionmanager.user created
master $ kubectl apply -f k8s/deploy.yaml
deployment.apps/permission-manager-deployment created
service/permission-manager-service created

Step 5: Get the NodePort and open the UI using Katacoda

master $ kubectl get svc -n permission-manager
NAME                         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
permission-manager-service   NodePort   10.104.183.10   <none>        4000:31996/TCP   9m40s

In order to open a port from Katacoda, click on the + and select “View HTTP port 8080 on Host 1”, then change the port to 31996.

Enter the username and password:
username: admin
password: 1v2d1e2e67dS
You can change the password in the k8s/k8s-seeds/auth-secret.yml file.

Now let us create some users and assign one of the default templates: user test1 with the developer template in the permission-manager namespace.

Let us download the kubeconfig file and test the permissions:

master $ kubectl --kubeconfig=/root/permission-manager/newkubeconfig get pods
Error from server (Forbidden): pods is forbidden: User "test1" cannot list resource "pods" in API group "" in the namespace "default"
master $ kubectl --kubeconfig=/root/permission-manager/newkubeconfig get pods -n permission-manager
NAME                                             READY   STATUS    RESTARTS   AGE
permission-manager-deployment-544649f8f5-jzlks   1/1     Running   0          6m38s
master $ kubectl get clusterrole | grep template
template-cluster-resources___admin          7m56s
template-cluster-resources___read-only      7m56s
template-namespaced-resources___developer   7m56s
template-namespaced-resources___operation   7m56s

Summary: With Permission Manager you can easily create multiple users and grant permissions for specific resources in specific namespaces using custom-defined templates.

About Saiyam

Saiyam is a Software Engineer working on Kubernetes with a focus on creating and managing the project ecosystem. Saiyam has worked on many facets of Kubernetes, including scaling, multi-cloud, managed Kubernetes services, K8s documentation and testing. He’s worked on implementing major managed services (GKE/AKS/OKE) in different organizations. When not coding or answering Slack messages, Saiyam contributes to the community by writing blogs and giving sessions on InfluxDB, Docker and Kubernetes at different meetups. Reach him on Twitter @saiyampathak, where he gives tips on InfluxDB, Rancher, Kubernetes and open source.
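A quick way to sanity-check what a generated kubeconfig is allowed to do, before handing it to a user, is kubectl auth can-i. This is a small sketch under the assumption that the kubeconfig downloaded from the Permission Manager UI is saved as ./newkubeconfig, as in the walkthrough above; the expected answers follow from the developer template shown earlier, so treat them as indicative.

# Sanity-check the developer template through the generated kubeconfig
# (assumes ./newkubeconfig was downloaded for user test1 from the web UI)
kubectl --kubeconfig=./newkubeconfig auth can-i list pods -n permission-manager    # expected: yes
kubectl --kubeconfig=./newkubeconfig auth can-i list pods -n default               # expected: no
kubectl --kubeconfig=./newkubeconfig auth can-i create deployments -n permission-manager   # expected: yes
kubectl --kubeconfig=./newkubeconfig auth can-i get nodes                           # cluster-scoped, expected: no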
Source: blog.kubernauts.io
TK8 Cattle EKS Provisioner with Terraform Rancher Provider

In a previous post we introduced how to use a Rancher Server with the Terraform Rancher Provider to deploy Rancher’s Kubernetes Engine (RKE) with the TK8 Cattle AWS provisioner on auto-provisioned EC2 machines. In this post I’ll introduce the TK8 Cattle EKS provisioner by the awesome Shantanu Deshpande to deploy an EKS cluster with the tk8ctl tool talking to a Rancher Server, running on our local machine with a valid SSL certificate.

Rancher-launched EKS vs. Rancher-launched RKE cluster

With Rancher Server you can launch or import any Kubernetes cluster on any cloud provider or on existing bare-metal servers or virtual machines. In the case of AWS, we can either choose RKE with new nodes on Amazon EC2 or the managed Amazon EKS offering.

With EKS one doesn’t need to worry about managing the control plane or even the worker nodes; AWS manages everything for us, at the price of a lower Kubernetes version, which is Kubernetes v1.14.8 at this time of writing. With RKE, we can use the latest Kubernetes 1.16.x or soon 1.17.x versions, but we need to manage the control plane and worker nodes on our own, which requires skilled Kubernetes and Rancher professionals.

Harshal Shah shares his experience nicely in his blog post about lessons learned from running EKS in production, which I highly recommend reading if you’d like to free up your time to deal with other challenges. In a previous post I wrote about the dilemma of deciding how to run and manage multiple Kubernetes clusters using OpenShift, RKE, EKS or Kubeadm on AWS.

Let’s get started

Prerequisites

Most probably you already have the tools listed below installed, except mkcert and tk8ctl:
- AWS CLI
- Terraform 0.12
- Docker for Desktop
- git CLI
- mkcert
- tk8ctl

Get the source

git clone https://github.com/kubernauts/tk8-provisioner-cattle-eks.git
cd tk8-provisioner-cattle-eks

Install Rancher with Docker and mkcert

As mentioned at the beginning, we are going to use Rancher Server and Rancher’s API via code to deploy and manage the life cycle of our EKS clusters with tk8ctl and the Cattle EKS provisioner. To keep things simple, we’ll install Rancher on our local machine with Docker and mkcert, so that we get a valid SSL certificate in our browser, using the following simple commands on macOS (on Linux you need to follow the mkcert instructions and copy the rootCA.pem from the right directory to your working directory):

$ brew install mkcert
$ mkcert -install
$ mkcert '*.rancher.svc'
# on macOS
# cp "$HOME/Library/Application Support/mkcert/rootCA.pem" cacerts.pem
# on Ubuntu Linux
# cp /home/ubuntu/.local/share/mkcert/rootCA.pem cacerts.pem
# cp _wildcard.rancher.svc.pem cert.pem
# cp _wildcard.rancher.svc-key.pem key.pem
$ echo "127.0.0.1 gui.rancher.svc" | sudo tee -a /etc/hosts
$ docker run -d -p 80:80 -p 443:443 -v $PWD/cacerts.pem:/etc/rancher/ssl/cacerts.pem -v $PWD/key.pem:/etc/rancher/ssl/key.pem -v $PWD/cert.pem:/etc/rancher/ssl/cert.pem rancher/rancher:stable
$ open https://gui.rancher.svc

With that you should be able to access Rancher at https://gui.rancher.svc without TLS warnings!

Get the tk8ctl CLI

Download the latest tk8ctl release and place it in your path:

# On macOS
$ wget https://github.com/kubernauts/tk8/releases/download/v0.7.7/tk8ctl-darwin-amd64
$ chmod +x tk8ctl-darwin-amd64
$ mv tk8ctl-darwin-amd64 /usr/local/bin/tk8ctl
$ tk8ctl version
# ignore any warnings for now; you’ll get a config.yaml file which we’ll overwrite shortly

# On Linux
$ wget https://github.com/kubernauts/tk8/releases/download/v0.7.7/tk8ctl-linux-amd64
$ chmod +x tk8ctl-linux-amd64
$ sudo mv tk8ctl-linux-amd64 /usr/local/bin/tk8ctl
$ tk8ctl version
# provide any value for the AWS access and secret key; you’ll get a config.yaml file which we’ll overwrite

Set AWS and Terraform Rancher Provider variables

Get the bearer token from the Rancher UI via the “API & Keys” menu, and provide your AWS access and secret keys in a file called e.g. cattle_eks_env_vars.template (an illustrative sketch follows at the end of this post), then source the file:

$ source cattle_eks_env_vars.template

Deploy EKS with tk8ctl

Now you’re ready to deploy EKS via the Rancher API:

$ cp example/config-eks-gui.rancher.svc.yaml config.yaml
$ tk8ctl cluster install cattle-eks

After some seconds you should see an EKS cluster in the provisioning state in the Rancher Server GUI. Take a cup of coffee or a delicious red wine; your EKS cluster needs about 15 minutes to get ready.

Access your EKS cluster

To access your EKS cluster you can either get the kubeconfig from the Rancher UI, save it as kubeconfig.yaml and run:

KUBECONFIG=kubeconfig.yaml kubectl get nodes

or you can run the following aws eks command to update your default kubeconfig file with the new context:

aws eks update-kubeconfig --name tk8-tpr2-eks

Clean up

tk8ctl cluster destroy cattle-eks
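The contents of cattle_eks_env_vars.template are embedded as a gist in the original Medium post and are not reproduced here. As a rough, assumption-laden sketch only: such a file typically exports the AWS credentials plus the Rancher API endpoint and bearer token; the exact variable names expected by the TK8 Cattle EKS provisioner are an assumption, not taken from the post.

# Hypothetical sketch of cattle_eks_env_vars.template; all variable names are
# illustrative assumptions, check the TK8 Cattle EKS provisioner docs for the real ones.
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_DEFAULT_REGION="eu-central-1"
# Rancher API endpoint and the bearer token created under "API & Keys"
export RANCHER_API_URL="https://gui.rancher.svc/v3"
export RANCHER_TOKEN_KEY="token-xxxxx:<secret>"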
Source: blog.kubernauts.io