AWS announces Windows Server version 20H2 AMIs for Amazon EC2

Today we are announcing the availability of License Included (LI) Amazon Machine Images (AMIs) for Windows Server version 20H2 for Amazon EC2, giving customers an easy and flexible way to get up and running with the latest Windows Server Semi-Annual Channel releases. Windows Server 20H2 brings the latest fixes and performance improvements to Windows Server.
Source: aws.amazon.com

How to set up k0s Kubernetes: A quick and dirty guide

For a couple of weeks now, we’ve been talking about the k0s project, a simple way to get Kubernetes up and running.  In this quick and dirty guide, we’ll give you all the background you need to get started.
The Kubernetes architecture of k0s consists of a single binary that includes everything you need to run Kubernetes on any system that includes the Linux kernel.  Putting it to use is straightforward:

Download the k0s binary
Create a server to instantiate the Kubernetes control plane
Create a Kubernetes worker
Access the cluster

Of course you can add additional controllers or servers, but let’s start with the very simplest version:  a single server running everything you need.
Create a single node Kubernetes cluster with k0s
The first thing we need to do is create a server that will act as the k0s controller.  Note that I didn’t say controller node; you can see Jussi Nummelin’s blog for an explanation of the particular way in which k0s implements the Kubernetes architecture, but the controller processes run directly on the host, and not in pods, so there’s no “master” node.
The host itself doesn’t have to be huge; for this blog I used an AWS t2.medium instance (2 CPUs, 4GB RAM) running Amazon Linux 2.  Just make sure that port 6443 is open so that you can contact the cluster later.
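If your server is on AWS as in this example, a minimal sketch of opening that port with the AWS CLI might look like the following; the security group ID is a placeholder, and in practice you'd likely restrict the source CIDR rather than allowing traffic from anywhere:
# allow inbound TCP 6443 (Kubernetes API) on a hypothetical security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 6443 \
  --cidr 0.0.0.0/0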
Now you can install k0s with a simple one line command:
sudo curl -sSLf k0s.sh | sudo sh
(Note that there’s no “magic” k0s.sh script you’re missing.  This is the same as sudo curl -sSLf http://k0s.sh | sudo sh)
Once the script downloads, all you need to do is start the server:
sudo k0s server --enable-worker &
That’s it.
You can avoid getting bowled over with logging messages by instead using:
sudo k0s server --enable-worker </dev/null &>/dev/null &
You could also start just the server and create the worker somewhere else, but we’ll talk more about that in a minute.  Now let’s access the new cluster.
Access the k0s cluster
Accessing the cluster is a matter of simply installing kubectl (if necessary) and pointing to the KUBECONFIG file.
When you create the server, k0s creates a KUBECONFIG file for you, so copy it to your working directory and point to it:
sudo cp /var/lib/k0s/pki/admin.conf ~/admin.conf
export KUBECONFIG=~/admin.conf
Now you can access the cluster itself:
kubectl get namespaces
NAME              STATUS   AGE
default           Active   5m32s
kube-node-lease   Active   5m34s
kube-public       Active   5m34s
kube-system       Active   5m34s
Notice that if you look for the nodes, there is no master node. Remember, k0s implements the control plane as naked processes.
kubectl get nodes
NAME             STATUS   ROLES    AGE    VERSION
ip-172-31-8-33   Ready    <none>   5m1s   v1.19.3
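If you want to see this for yourself, the control plane components show up as ordinary host processes; a quick check on the server, assuming the standard Kubernetes binary names:
# the control plane runs as plain processes on the host, not as pods
ps -ef | grep -E 'kube-(apiserver|scheduler|controller-manager)' | grep -v grep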
But what happens if we try to access the cluster from another machine, say via a tool such as Lens?
Accessing k0s from outside the cluster: Customizing the k0s Kubernetes cluster
Now let’s look at accessing the cluster from an external server.  We can easily get the KUBECONFIG file:
scp -i k0s.pem ec2-user@<SERVER_IP>:~/admin.conf .
export KUBECONFIG=admin.conf
From there, we’ll want to use the public IP address of the server rather than localhost, so open the admin.conf file and edit the server address.  For example, in my case, the public IP of my server is 52.10.92.152:
apiVersion: v1
clusters:
- cluster:
    server: https://52.10.92.152:6443
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURBRENDQWVpZ0F3SUJBZ0lVRzhGakJZVVNZOFBrOWNjcTVhK3lFenNBNXAwd0RRWUpLb1pJaHZjTkFRRUwKQlFBd0dERVdNQlFHQTFVRUF4TU5hM1ZpWlhKdVpYUmxjeTFqWVRBZUZ3MHlNREV4TWpNd016TXpNREJhR…
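If you'd rather script that change, a one-line sketch with sed should do it, assuming the generated file points at localhost:
sed -i 's|https://localhost:6443|https://52.10.92.152:6443|' admin.conf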

Now if we were to test this connection, we’d see something odd.
kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:30:33Z", GoVersion:"go1.15", Compiler:"gc", Platform:"windows/amd64"}
Unable to connect to the server: x509: certificate is valid for 127.0.0.1, 172.31.8.33, 172.31.8.33, 172.31.8.33, 10.96.0.1, not 52.10.92.152
So we're making the connection, and Kubernetes is working, but the server's certificate doesn't cover the public IP address. To solve this problem, we need to configure k0s to include the public IP in the certificate.
To start, we can export the actual configuration file k0s will use:
sudo k0s default-config > k0s.yaml
We can then edit that file to add the public IP, and any other address at which we want to call the server:
apiVersion: k0s.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s
spec:
  api:
    address: 172.31.8.33
    sans:
    - 172.31.8.33
    - 172.31.8.33
    - 52.10.92.152
    extraArgs: {}
  controllerManager:
    extraArgs: {}
  scheduler:
    extraArgs: {}
  storage:
    type: etcd
    kine: null
    etcd:
      peerAddress: 172.31.8.33
  network:
    podCIDR: 10.244.0.0/16
    serviceCIDR: 10.96.0.0/12
    provider: calico
    calico:
      mode: vxlan
      vxlanPort: 4789
      vxlanVNI: 4096

Next, restart the k0s server. Because it's running as a background process, the easiest way to do this is to simply restart the machine, then start k0s again:
sudo k0s server --enable-worker &
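If you'd rather not reboot, a rough alternative is to kill the background process and start it again; this sketch assumes k0s was started as shown above and that it picks up the edited k0s.yaml from the working directory:
# stop the running background server process
sudo pkill -f 'k0s server'
# start it again so it reads the updated configuration
sudo k0s server --enable-worker </dev/null &>/dev/null &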
From here everything should Just Work; the KUBECONFIG file stays the same:
kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:30:33Z", GoVersion:"go1.15", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-11-11T20:21:36Z", GoVersion:"go1.15.4", Compiler:"gc", Platform:"linux/amd64"}
You can also access the Kubernetes cluster with Lens by importing the KUBECONFIG.
Add additional nodes to the Kubernetes cluster
Scaling the cluster is just a matter of adding additional worker nodes or control planes. To do that, you’re going to need a token so the new server knows where to “phone home”. To generate that, go to the control plane:
k0s token create --role=worker
Obviously, in this case we’re creating a new worker node.  You’ll wind up with a really long string of text such as:
H4sIAAAAAAAC/2yV0Y7iOhKG7/speIGZYycwpxtpLyZgBwIxY8dlJ74LcYZAnBBCGtKs9t1XzcxIu9K5K1f9+n7Lsup/ybujKvvr8dzOJzf8Urj361D21/nLl8nvev4ymUwm17K/lf18Ug1Dd53/9Rf+2/vq46+vX31//m069Z+iouyH489jkQ/ll/x9qM79cfj4YvMhn0+2CRq2CV4IsJE8BkuhIkjARBxREM8ZGhY1jhIQgSBsybXqDKJ+AlFgkFPiUYV5HRmlmNnRoN9pdiqkqja+o2XLApZ+v1skhIZuu8ddlK/MSdZUCLhvuKOBRZYIZRl3dMUlVQLoVMKsirHptKs2VnUZNOOplPSilQgMGD9eOSaImkrdMYvwN5l2TJCoEm1xLw/dnxn935mEqC2JuXClFgbNNK8pU0SkdknNXplZ1mAd0y6TqTBxrXzjWZoDDmJil+DT1cKxKDsxAqAWXFHFgaAEImIRfWoTRG+fbwIgNuJUj4k2Yo/GSwzFQ5Ea21ZV3DdqJ6000rV5YxIFh41Br57Ba79MFTXSabuk5zxUZ5nG9xyGkyDTMVHfe1YrY0IcMXe4JSSiuXIKlhHOkIEyZKBSg8BxnD/Mujyc77BSjx3N+iKluVTnj3gVxBqmvvGpA1dgvXRLvYp7mYohTxnX4YjzU7RV3ut9j88986b3Ag0CMGNlas+2ji6LpvA2XpUomX2opTE2HJZlSo86XE/F8TruqHvfEZpmzYzJZjzHYOKSBlJoK/K22pQy7uNavPNH5vPU1SDXnsDFJoHDNCe4YvUbk+HhpkI+TaRI9aprdaN2GV57WetcDEWfLzOUeW871bzds1MQ5pDdWWqrzUPFWw/PRBtFW4+J/HsHVkbpHhSTsJ7tidMljQabKmN0NLNt8MOc3FWmNMlQtEjUYcz8SNnQcBMKynyC42X0zrVlvKaB8DqR11GwqHHAiA1ipWqxspQf33wAFVjkFrzpAiBRK51ZQ40XXGdTARFwEAHA4SZhfReIjEoLYjBNeR2B1vG0COtNvhIQO3HM0niqaJerlE/L5hWXZNorQne8sX2hqz6HYmYfwecffIiaBhKx4NM/98+ocGvPtsGuOA5Ek1mjDt2Ce+NHhkRrH8zFyjUK22P2MXgQ2ladTMZTty5OgnKotCbDKFJz2hM1JqvgaFD30ErdsjS7m4fd7pYCWczWi5MZEvJm2GIIslZxtjSyeAhPhfHNYuILNDttUYUV5ahsA1FqGPWK+rIRIDxbs1asi1YEpol6CKuLaSgkTbbJfSvLpR2s300zn8LeZzf5cLdd6pgO6WVP7h97sKMljoJUs7zmD4nED+1oLGp6grDok6UxQNmHQviy02tPfe9kTsa7BJtlaTHdNxneK/deoA52cL1tvegae2+UUbvereBum8IT8HaL26CRtUmVDC5GsiYmHS1klTJZjDtpr4vm9RajyN/iIGLp4WFOxlmCRrMUUxsO25KwXqRUJ83IchJhaCqyRdW3QkcO2i4FhO7xyhL14A+r3yIZWpw0fLMPVZVj5f+QaPN7N1NZ8wNHKlHEhQmwQBF47uUtP//rZTJp86acT2p0fSnO7VCOw6+Y+FX/iok/mfFUfTber8/T+7505fBlfz4P16HPu/+nvfd92Q5f/pCezfrY2vlkcW5/Hg8vXV/+LPuyLcrrfPLv/7x8Up/mvyH/gH8aP68wnOuynU9qN15n+Ovl56OyaLj8ffC+9f3x9PLfAAAA////I+m0AwcAAA==
This may seem excessive, but it's actually just a KUBECONFIG that's been compressed and Base64-encoded. The benefit here is that you can put the worker node anywhere, as long as it can access the control plane over the network.
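Incidentally, the H4sI prefix is the Base64 signature of gzip data, which is how you can tell the token is compressed as well as encoded; a quick sketch for peeking inside one, assuming GNU coreutils and the token stored in a shell variable:
# decode and decompress the join token to reveal the embedded KUBECONFIG
echo "$TOKEN" | base64 --decode | gunzip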
To create the worker, instantiate a new server (if necessary) and install k0s:
sudo curl -sSLf k0s.sh | sudo sh
Then just go ahead and join the cluster:
sudo k0s worker "long-join-token"
As in:
sudo k0s worker "H4sIAAAAAAAC/2yV0Y7i…"
Now if you were to go back to kubectl and check for nodes, you’d see the new node in your list, as in:
kubectl get nodes
NAME               STATUS   ROLES    AGE   VERSION
ip-172-31-14-157   Ready    <none>   81s   v1.19.3
ip-172-31-8-33     Ready    <none>   11h   v1.19.3
You can also increase the robustness of the cluster by creating an additional control plane.  Again, start by creating the token:
k0s token create --role=controller
And again, on your new server, install k0s and start the server just as you started the worker:
sudo curl -sSLf k0s.sh | sudo sh
sudo k0s server "long-join-token" &
As in:
sudo k0s server "H4sIAAAAAAAC/3RV0Y…" &
This time, though, if you check for nodes, you won’t see the addition, because there are no master nodes in the k0s Kubernetes architecture:
kubectl get nodes
NAME               STATUS   ROLES    AGE   VERSION
ip-172-31-14-157   Ready    <none>   23m   v1.19.3
ip-172-31-8-33     Ready    <none>   11h   v1.19.3
Note that until the community creates a command for leaving the cluster (currently in progress), if something happens to your second controller, the cluster itself will be borked, so don't add one unless you need it.
Where to go from here
k0s is exciting, but it's still pretty young; work is moving very fast, and the community would very much like any feedback or contributions. Meanwhile, we'd like to hear what you're doing with k0s, and what you'd like to see us talk about, so let us know in the comments!
Source: Mirantis

Closing the gap: Migration completeness when using Database Migration Service

Database Migration Service (DMS) provides high-fidelity, minimal downtime migrations for MySQL (Preview) and PostgreSQL (available in Preview by request) workloads to Cloud SQL. Since DMS is serverless, you don't have to worry about provisioning, managing, or monitoring any migration-specific resources. In this post, we'll focus on what is and is not included in database migration for MySQL, and what you can do to ensure migration completeness when using DMS.
The source database's data, schema, and additional database features (triggers, stored procedures, and more) are replicated to the Cloud SQL destination reliably, and at scale, with no user intervention required. Due to the peculiarities of MySQL, there are a few things that won't be migrated, though. Let's look at what is and isn't migrated with DMS in more detail.
What's included in MySQL database migration
DMS for MySQL uses the database's own native replication technology to provide a high-fidelity way to migrate database objects from one database to another. The migration fidelity section of the documentation goes into detail about what is included in the migration. At the time of this Preview launch, all of the following data, schema, and metadata components are migrated as part of the database migration:
Data Migration
All tables from all databases and schemas, excluding the following default databases and schemas: sys, mysql, performance_schema, and information_schema.
Schema Migration

Naming
Primary key
Data type
Ordinal position
Default value
Nullability
Auto-increment attributes
Secondary indexes

Metadata Migration

Stored procedures
Functions
Triggers
Views
Foreign key constraints

What's not included in MySQL database migration
There are certain things that are not migrated as part of a MySQL database migration, as well as some known limitations and quotas that you should be aware of.
Users definition
When you're migrating a MySQL database, the MySQL system database, which contains information about users and privileges, is not migrated. That means that user account and login information must be managed in the destination Cloud SQL instance directly. The root account will need to be set up before the instance can be used. You can add users to the Cloud SQL destination instance either from the Users tab in the UI, or from the mysql client. The Cloud SQL documentation contains more information about managing MySQL user accounts.
Usage of the DEFINER clause
Since a MySQL migration job doesn't migrate user account data, sources that contain metadata defined by users with the DEFINER clause will fail when invoked on the new Cloud SQL replica, as those users don't yet exist there. To run a migration from a source that includes the DEFINER clause:

Create a migration job without starting it (choose Create instead of Create & Start).
Create the users on the new Cloud SQL destination instance using the Cloud SQL API or the Users tab in the UI.
Start the migration job from the migration job list or the specific job's page.

Alternatively, you can update the DEFINER clause to INVOKER on the source prior to setting up the migration job. Note that if the metadata was created by 'root'@'localhost', the process will fail; change the DEFINER before starting the migration job.
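As a rough sketch of that workflow, you could list the DEFINERs used by triggers and stored routines on the source with the mysql client, then recreate the matching accounts on the destination with gcloud; the host, credentials, user, and instance name below are hypothetical:
# find DEFINERs referenced by triggers and stored routines on the source
mysql -h SOURCE_HOST -u admin -p -e "SELECT DEFINER, TRIGGER_NAME FROM information_schema.TRIGGERS UNION SELECT DEFINER, ROUTINE_NAME FROM information_schema.ROUTINES;"
# recreate a matching user on the Cloud SQL destination before starting the job
gcloud sql users create appuser --instance=target-instance --host=% --password='change-me'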
Next Steps with DMS
Ready to learn more about migrating your MySQL or PostgreSQL database to Cloud SQL? These resources will help you gather the information you need to get started:

This blog post announces the launch of DMS and provides an overview of the capabilities it supports
The DMS documentation goes into more detail about requirements and steps to set up a MySQL database migration
An in-depth look at configuring connectivity for DMS
Fill out this form to express interest in DMS for PostgreSQL
Source: Google Cloud Platform