Machine to Cloud Connectivity Framework now supports SLMP

AWS has updated the Machine to Cloud Connectivity Framework, a solution that provides secure connectivity between devices and the AWS Cloud. The solution now supports devices that use the Mitsubishi Seamless Message Protocol (SLMP). SLMP is a common protocol that enables seamless communication between applications and general-purpose Ethernet devices without requiring awareness of the network hierarchy or its boundaries.
Source: aws.amazon.com

Amazon Connect launches additional APIs for listing contact center resources

Amazon Connect now offers new APIs that let you programmatically list resources such as queues, phone numbers, contact flows, and hours of operation within an Amazon Connect instance. For example, you can now use the ListQueues API to retrieve queue IDs at runtime and pass them to the queue metrics API to filter the returned data, as sketched below.
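A minimal Python (boto3) sketch of that pattern; the instance ID is a placeholder and the chosen real-time metric is just an example, so treat this as orientation rather than a complete program:

import boto3

connect = boto3.client("connect")
INSTANCE_ID = "replace-with-your-connect-instance-id"  # placeholder

# List the standard queues of the instance and collect their IDs.
queues = connect.list_queues(InstanceId=INSTANCE_ID, QueueTypes=["STANDARD"])
queue_ids = [q["Id"] for q in queues["QueueSummaryList"]]

# Use the queue IDs as a filter for the real-time metrics API
# (the filter accepts a limited number of queues per request).
metrics = connect.get_current_metric_data(
    InstanceId=INSTANCE_ID,
    Filters={"Queues": queue_ids[:100], "Channels": ["VOICE"]},
    CurrentMetrics=[{"Name": "CONTACTS_IN_QUEUE", "Unit": "COUNT"}],
)
print(metrics["MetricResults"])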
Source: aws.amazon.com

How Hanu helps bring Windows Server workloads to Azure

For decades our Microsoft services partners have fostered digital transformation at customer organizations around the world. With deep expertise in both on-premises and cloud operating models, our partners are trusted advisers to their customers, helping shape migration decisions. Partners give customers hands-on support with everything from initial strategy to implementation, which gives them a unique perspective on why migration matters.

Hanu is one of our premier Microsoft partners and the winner of the 2019 Microsoft Azure Influencer Partner of the Year. Hanu experts draw on deep expertise with Windows Server and SQL Server, as well as Azure, to plan and manage cloud migrations. This ensures that customers get proactive step-by-step guidance and best-in-class support as they transform with the cloud.

Recently, I sat down with Dave Sasson, Chief Strategy Officer at Hanu, to learn more about why Windows Server customers migrate to the cloud, and why they choose Azure. Below I am sharing a few key excerpts.

How often are Windows Server customers considering cloud as a part of their digital strategy today? How are they thinking about migrating business applications?

Very frequently we talk to customers that have Windows Servers running their business-critical apps. For a significant number of custom apps, .NET is the code base.  For the CIOs at these companies, cloud initiatives are their top priorities. In this competitive age, end users are demanding great experiences and our customers are looking at ways to innovate quicker and fail faster. Cloud is the natural choice to deliver these new experiences.

Aging infrastructure that is prone to failure and vulnerable to security threats is also driving cloud considerations. The recent end of support for SQL Server 2008 and 2008 R2, and the upcoming end of support for Windows Server 2008 and 2008 R2, are decision points for customers on whether to invest in on-premises infrastructure or move their workloads to the cloud.

What are some of the considerations you see Windows Server customers reviewing when choosing the cloud?

Security, performance and uptime, management, and cost optimization are the top technical considerations mentioned. IT skill is another significant consideration.

Customers want to invest in cloud partners that have technology leadership. This enables customers to modernize their applications and data estates, leverage chatbots and machine learning, and infuse AI services into their internal processes and their customer-facing applications.

What are the challenges you see customers facing when they are transitioning from on-premises to the cloud?

Operating in the cloud is a new paradigm for most customers. Security, compliance, performance, and uptime are immediate concerns, since companies need to maintain business continuity while they digitally transform. Due to recent security threats and compliance requirements, we see this as a concern not only in industry verticals that are traditionally considered highly regulated, but across the board.

Another top challenge for CIOs is how they leverage their organization's expertise in this new age of IT. Most customers have a great deal of in-house expertise, but the worry is whether those existing skills will still apply, and uptime will stay high, once cloud becomes part of the IT environment.

In your experience, why do customers choose Azure for their Windows Server Workloads?

Windows Server and SQL Server users trust Microsoft as their chosen technology partner. Azure offers even better built-in security features and controls to protect cloud environments than what is available on-premises. Azure’s 90+ compliance offerings across the breadth of industry verticals help customers quickly move to a compliant state while running in the cloud. The Azure Governance application also helps automate compliance tracking.

"We worked with Hanu to move our business-critical workloads running on Windows Sever to VMs in Azure. We are saving approximately 30% in cost and best of all, we can now focus entirely on innovation." Paul Athaide, Senior Manager, Multiple Sclerosis Society of Canada

Azure offers first-party support for Windows Server and SQL Server. This means the support team is backed by the experts who built Windows Server and SQL Server. Azure's first-party support promise, combined with Hanu's world-class ISO 27001-certified NOC and SOC standards, gives customers the confidence that they can run business-critical apps in Azure.

Every customer operates their on-premises environment while they build out their operating environment in the cloud. Azure offers tools for Windows Server admins, such as Windows Admin Center, to manage both their on-premises workloads and their Azure VMs. Many Azure services, such as Azure Security Center, Update Management, Monitoring, Site Recovery, and Backup, work on-premises and are available through Windows Admin Center. In addition, Azure services like Azure SQL Database, App Service, and Azure Kubernetes Service natively run Windows applications.

Lastly, we tell all our customers to take advantage of Azure Hybrid Benefit. If they have Software Assurance, they can save significantly on cloud cost by moving their Windows and SQL Server workloads to Azure. 

How does Hanu see the value in building a practice in migrating Windows Server on-premises workloads to the cloud?

Customers who are running Windows Server and SQL Server on-premises today have a greater understanding of and confidence in the cloud. We are frequently being pulled into discussions to assist in building customers' environments in Azure. Consequently, we have invested a lot of time and resources in our Windows Server migration practice. As a Microsoft Partner, we are excited to see the innovations that Azure is bringing and the ways we can help our customers digitally transform their business.

Dave, thanks so much for sitting down with me. It sounds like our customers are in good hands! 

It’s always great to hear from our premier partners on what challenges customers face and how Microsoft Azure meets those requirements. 

Please check out the Partner Portal to find partners that meet your requirements. We realize every customer has challenges that are unique to their business, and our Microsoft Partner Network has thousands of partners that can meet those needs. To learn more about Hanu, try Hanu's solution available on Azure Marketplace.
Source: Azure

Smartwatch: The Moto 360 is set to return

According to media reports, a new Moto 360 is close to release: after more than four years, a smartwatch carrying the well-known brand could reach the market again. The name, however, is only licensed from Motorola; the actual manufacturer is largely unknown in the industry. (Smartwatch, Lenovo)
Source: Golem

A PodPreset Based Webhook Admission Controller

One of the fundamental principles of cloud native applications is the ability to consume assets that are externalized from the application itself at runtime. This affords portability across different deployment targets, as properties may differ from environment to environment. The pattern is also one of the principles of the Twelve-Factor App and is supported through a variety of mechanisms within Kubernetes. Secrets and ConfigMaps are the mechanisms in which such assets can be stored, while the injection points within an application include environment variables and volume mounts. As Kubernetes and cloud native technologies have matured, there has been an increasing need to dynamically configure applications at runtime even though Kubernetes uses a declarative configuration model. Fortunately, Kubernetes contains a pluggable model, known as admission controllers, that enables the validation and modification of applications submitted to the platform as pods. These controllers can accept, reject, or accept with modifications the pod that is being created.
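As a quick refresher, a minimal sketch using standard Kubernetes fields shows how a pod can consume a ConfigMap through an environment variable and a Secret through a volume mount; the resource names app-config and app-secret and the chosen key are illustrative placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: externalized-config-example
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal:latest
    command: ["sleep", "infinity"]
    env:
    - name: LOG_LEVEL              # injected from a ConfigMap key
      valueFrom:
        configMapKeyRef:
          name: app-config         # placeholder ConfigMap name
          key: log.level
    volumeMounts:
    - name: credentials            # Secret surfaced as files in the container
      mountPath: /etc/app/credentials
      readOnly: true
  volumes:
  - name: credentials
    secret:
      secretName: app-secret       # placeholder Secret name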
The ability to modify pods at creation time allows both application developers and platform managers the ability to offer capabilities that surpass any limitation that may be imposed by strict declarative configurations. One such implementation of this feature is a concept called PodPresets which enables the injection of ConfigMaps, Secrets, volumes, volume mounts, and environment variables at creation time to pods matching a set of labels. Kubernetes has supported enabling the use of this feature since version 1.6 and the OpenShift Container Platform (OCP) made it available in the 3.6 release. However, due to a perceived direction change for dynamically injecting these types of resources into pods, the feature became deprecated in version 3.7 and removed in 3.11 which left a void for users attempting to take advantage of the provided capabilities.
As time went on and Kubernetes and OpenShift continued to mature, a new mechanism for providing admission controllers was created. Instead of admission plugins being compiled into the API server itself, they can be run externally, and the API server makes an HTTP invocation to the remote endpoint. This mechanism is known as a webhook admission controller and gives end users the ability to easily extend the platform with their own set of features. One of the most popular uses of webhook admission controllers is the injection of a sidecar container to support applications running on the Istio Service Mesh. Even though the upstream PodPresets admission plugin was deprecated in OpenShift, there has been continued desire for this type of feature. Thanks to a webhook-based admission solution, this functionality can be restored with minimal changes. The remainder of this entry provides an overview of the webhook admission controller, including its implementation and deployment to an OpenShift environment.
The PodPreset Webhook admission controller project is located on GitHub in a repository called podpreset-webhook within the Red Hat Community of Practice organization and contains all of the resources necessary to deploy the solution. The assets in the repository consist of a set of OpenShift/Kubernetes resources that register a new PodPreset Custom Resource Definition (CRD), configure the permissions necessary for the controller to function properly, and facilitate the deployment of a container image. Once started, the container exposes a web server endpoint that listens for requests sent by the API server to determine whether newly created resources are valid and/or should be modified. ValidatingWebhookConfigurations and MutatingWebhookConfigurations are Kubernetes/OpenShift objects that register the types of resources that should undergo assessment by an external web server as well as the location where this component resides within the cluster. In the case of the PodPreset Webhook admission controller, a MutatingWebhookConfiguration resource is dynamically created at initialization time that dictates that all newly created pods should be considered by the web server. The API server sends an AdmissionReview object that contains information related to the newly created resource, including the pod itself. Logic contained within the web server (identical to the upstream Kubernetes PodPreset admission plugin) determines whether the labels on the pod match a PodPreset custom resource in the same namespace as the pod request. If a match is found, the pod is mutated according to the rules contained in the PodPreset. Finally, a response is sent back to the API server containing any modifications that should be applied to the original pod (as JSON patches) and whether the object itself is valid. The pod then continues through the remainder of the admission process until the object is persisted in etcd.
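For orientation, a mutating response in the admission.k8s.io/v1beta1 AdmissionReview exchange generally has the following shape. This is a generic sketch of the standard API, not code taken from the project; the uid must echo the uid of the incoming request, and the patch value is a placeholder:

apiVersion: admission.k8s.io/v1beta1
kind: AdmissionReview
response:
  uid: <uid copied from the AdmissionReview request>
  allowed: true
  patchType: JSONPatch
  patch: <base64-encoded JSON Patch>

Decoded, the patch for the FOO example used later in this post would look something like [{"op":"add","path":"/spec/containers/0/env","value":[{"name":"FOO","value":"bar"}]}].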
With an understanding of the functionality of the PodPreset webhook admission controller, let’s deploy it to an OpenShift environment.
As a logged in user with elevated permissions, clone or download the repository to your local machine.
$ git clone https://github.com/redhat-cop/podpreset-webhook
$ cd podpreset-webhook

Next, create a new project called podpreset-webhook
$ oc new-project podpreset-webhook

Deploy the resources to the newly created project
$ oc apply -f deploy/crds/redhatcop_v1alpha1_podpreset_crd.yaml
$ oc apply -f deploy/service_account.yaml
$ oc apply -f deploy/clusterrole.yaml
$ oc apply -f deploy/cluster_role_binding.yaml
$ oc apply -f deploy/role.yaml
$ oc apply -f deploy/role_binding.yaml
$ oc apply -f deploy/secret.yaml
$ oc apply -f deploy/webhook.yaml

Verify the controller pods have started by viewing all pods within the namespace
$ oc get pods

NAME READY STATUS RESTARTS AGE
podpreset-webhook-665f68679b-nnx8n 1/1 Running 0 56s
podpreset-webhook-665f68679b-pn96c 1/1 Running 1 55s

Verify the controller also created the MutatingWebhookConfiguration
$ oc get mutatingwebhookconfiguration mutating-webhook-configuration -o yaml

apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  creationTimestamp:
  generation: 1
  name: mutating-webhook-configuration
  resourceVersion: ""
  selfLink: /apis/admissionregistration.k8s.io/v1beta1/mutatingwebhookconfigurations/mutating-webhook-configuration
  uid:
webhooks:
- admissionReviewVersions:
  - v1beta1
  clientConfig:
    caBundle:
    service:
      name: podpreset-webhook
      namespace: podpreset-webhook
      path: /mutate-pods
  failurePolicy: Ignore
  name: podpresets.admission.redhatcop.redhat.io
  namespaceSelector:
    matchExpressions:
    - key: control-plane
      operator: DoesNotExist
  rules:
  - apiGroups:
    - ""
    apiVersions:
    - v1
    operations:
    - CREATE
    resources:
    - pods
    scope: '*'
  sideEffects: Unknown
  timeoutSeconds: 30

As expressed in the generated MutatingWebhookConfiguration, a webhook named podpresets.admission.redhatcop.redhat.io was registered, which specifies that every newly created pod is sent to the web server exposed by the service called podpreset-webhook within the podpreset-webhook namespace at the /mutate-pods endpoint.
With the webhook server ready to accept requests from the API server, let's walk through a scenario that demonstrates the functionality of this solution. As described previously, PodPresets can inject several different types of runtime requirements into applications, including environment variables. In this example, a PodPreset will be used to dynamically inject an environment variable named FOO with a value of bar. To confirm the existence of the environment variable, an example application will be deployed that repeatedly prints out the value of the FOO environment variable every 30 seconds.
First, let’s start by defining a new PodPreset object:
apiVersion: redhatcop.redhat.io/v1alpha1
kind: PodPreset
metadata:
  name: podpreset-example
spec:
  env:
  - name: FOO
    value: bar
  selector:
    matchLabels:
      role: podpreset-example

Save the content to a file called podpreset-example.yaml and execute the following command to add the PodPreset to the project:
$ oc create -f podpreset-example.yaml

Now, create the application:
$ oc run podpreset-webhook-app --image=registry.access.redhat.com/ubi8/ubi-minimal:latest --command=true -- bash -c 'while true; do echo "Value of FOO is: $FOO" && sleep 30; done'

Wait until the application starts to run and then view the logs:
$ oc logs -f $(oc get pods -l deploymentconfig=podpreset-webhook-app -o name)

Value of FOO is:
As seen in the above output, no value is currently present for the environment variable named FOO. In the PodPreset object, only pods with the label “role=podpreset-example” will have this environment variable automatically injected.
Patch the DeploymentConfig for the application to include the requisite label:
$ oc patch dc/podpreset-webhook-app -p '{"spec":{"template":{"metadata":{"labels":{"role":"podpreset-example"}}}}}'

Wait until the new pod has been deployed and view the logs:
$ oc logs -f $(oc get pods -l deploymentconfig=podpreset-webhook-app -o name)

Value of FOO is: bar
Confirm the environment variable has been set to match the value shown in the pod log output by describing the pod itself:
$ oc describe $(oc get pods -l deploymentconfig=podpreset-webhook-app -o name)

Containers:
  podpreset-webhook-app:
    Container ID:   cri-o://571afdbfea25089333a16ec758cadd77e663f0a81da9953374fa167e7bba5f89
    Image:          registry.access.redhat.com/ubi8/ubi-minimal:latest
    Image ID:       registry.access.redhat.com/ubi8/ubi-minimal@sha256:ffbb6e58a87ec743b29214dc8484db0fe5157e8533c09e17590120c80af66dcc
    Port:
    Host Port:
    Command:
      bash
      -c
      while true; do echo "Value of FOO is: $FOO" && sleep 30; done
    State:          Running
      Started:      Thu, 26 Sep 2019 21:00:27 +0200
    Ready:          True
    Restart Count:  0
    Environment:
      FOO:  bar
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-nns9t (ro)

As you can see, the environment variable FOO has been set to the value bar as described in the PodPreset API object and applied at runtime thanks to the PodPreset admission controller. Environment variables are only one type of resource that can be managed by this MutatingWebhook. Other use cases include injecting Secrets, which may contain certificates or credentials, as volumes for consumption by applications (a sketch of such a PodPreset is shown below). Because the PodPreset lives alongside the workloads in each namespace, environment-specific configuration is decoupled from the application definition itself, since one environment may differ from another. The power of being able to dynamically manipulate the definition of resources across a fleet of applications demonstrates the benefits enabled by this functionality, which can be deployed to both OpenShift and Kubernetes platforms.
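Assuming the CRD mirrors the upstream PodPreset spec fields (volumes and volumeMounts alongside env), a hypothetical PodPreset that mounts a Secret of client certificates into every matching pod might look like this; the names inject-client-certs, client-certs, and needs-client-certs are illustrative placeholders:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: PodPreset
metadata:
  name: inject-client-certs
spec:
  selector:
    matchLabels:
      needs-client-certs: "true"   # only pods carrying this label are mutated
  volumeMounts:
  - name: client-certs
    mountPath: /etc/pki/client     # where the certificates appear in the container
    readOnly: true
  volumes:
  - name: client-certs
    secret:
      secretName: client-certs     # placeholder Secret holding the certificates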
Source: OpenShift