Sponsored: How ALDI SÜD is betting on SAP

Until now, ALDI SÜD has worked with numerous in-house-developed programs. But retail is changing rapidly, so the corporate group is getting fit for the future with modern, established software solutions. (SAP)
Source: Golem

Disaster recovery for SAP HANA Systems on Azure

This blog covers the design, technology, and recommendations for setting up disaster recovery (DR) for an enterprise customer to achieve a best-in-class recovery point objective (RPO) and recovery time objective (RTO) for an SAP S/4HANA landscape. This post was co-authored by Sivakumar Varadananjayan, Global Head of Cognizant’s SAP Cloud Practice.

Microsoft Azure provides a trusted path to enterprise-ready innovation with SAP solutions in the cloud. Mission critical applications such as SAP run reliably on Azure, which is an enterprise proven platform offering hyperscale, agility, and cost savings for running a customer’s SAP landscape.

System availability and disaster recovery are crucial for customers who run mission-critical SAP applications on Azure.

RTO and RPO are two key metrics that organizations consider when developing a disaster recovery plan that can maintain business continuity after an unexpected event. Recovery point objective is the maximum amount of data loss the business can tolerate, expressed as a window of time, whereas recovery time objective is the maximum tolerable time the system can be down after a disaster occurs. For example, an RPO of five minutes means at most the last five minutes of data may be lost, and an RTO of four hours means the system must be recovered within four hours.

The diagram below shows RPO and RTO on a timeline in a business as usual (BAU) scenario.

Orica is the world's largest provider of commercial explosives and innovative blasting systems to the mining, quarrying, oil and gas, and construction markets. They are also a leading supplier of sodium cyanide for gold extraction and a specialist provider of ground support services in mining and tunneling.

As part of Orica’s digital transformation journey, Cognizant has been chosen as a trusted technology advisor and managed cloud platform provider to build highly available, scalable, disaster proof IT platforms for SAP S/4HANA and other SAP applications in Microsoft Azure.

This blog describes how Cognizant took up the challenge of building a disaster recovery solution for Orica as part of a digital transformation program with SAP S/4HANA as the digital core. It covers the SAP on Azure architectural design decisions made by Cognizant and Orica over the last two years, which reduced RTO to four hours by deploying the latest technology features available on Azure, coupled with automation. RPO was also reduced to less than five minutes by using database-specific technologies such as SAP HANA system replication together with Azure Site Recovery.

Design principles for disaster recovery systems

Selection of the DR region based on SAP-certified VMs for SAP HANA – It is important to verify that SAP-certified VM types are available in the DR region.
RPO and RTO values – The business needs to lay out clear expectations for RPO and RTO values, which greatly affect the disaster recovery architecture and the tools and automation required to implement it.
Cost of implementing DR, maintenance, and DR drills
Criticality of systems – A trade-off can be struck between the cost of DR implementation and business requirements. While the most critical systems can use a state-of-the-art DR architecture, medium- and low-criticality systems may tolerate higher RPO/RTO values.
On-demand resizing of DR instances – It is preferable to use small VMs for DR instances and upsize them during an active DR scenario. It is also possible to reserve the required VM capacity in the DR region so that there is no waiting time to scale up. Microsoft offers Reserved Instances, with which you can reserve virtual machines in advance and save up to 80 percent. Depending on the required RTO, a trade-off needs to be worked out between running smaller VMs and using Azure RIs (an illustrative resize sequence is shown after this list).
Additional considerations – Cloud infrastructure costs and the effort of setting up an environment for non-disruptive DR tests. Non-disruptive DR tests are executed without failing over the actual production systems to the DR systems, thereby avoiding business downtime. This involves additional cost for temporary infrastructure in a completely isolated VNet during the tests.
NFS layer replication – Certain components of the SAP system architecture, such as a clustered network file system (NFS), are not recommended for replication with Azure Site Recovery, so additional licensed tools such as SUSE geo-cluster or SIOS DataKeeper are needed for NFS-layer DR.

Selection of specific technology and tools – Azure offers Azure Site Recovery (ASR), which replicates virtual machines across regions; this technology is used for the non-database components or layers of the system, while database-specific methods such as SAP HANA system replication (HSR) are used at the database layer to ensure database consistency.
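
To illustrate the on-demand resizing principle above, the sketch below shows how a small DR VM could be scaled up to a production SAP HANA size during a failover using the Azure CLI. This is a minimal sketch only; the resource group, VM name, and target size are assumptions, not values from this landscape.

# Illustrative only: resource group, VM name, and target size are assumptions
$ az vm deallocate --resource-group rg-sap-dr --name hana-dr-vm
$ az vm resize --resource-group rg-sap-dr --name hana-dr-vm --size Standard_M64s
$ az vm start --resource-group rg-sap-dr --name hana-dr-vm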

Disaster recovery architecture for SAP systems running on SAP HANA Database

At a high level, the diagram below depicts the architecture of SAP systems based on SAP HANA and shows which systems remain available in case of local or regional failures.

The diagram below gives the next level of detail on the SAP HANA system components and the corresponding technology used to achieve disaster recovery.

Database layer

At the database layer, a database-specific replication method such as SAP HANA system replication (HSR) is used. A database-specific replication method allows better control over RPO values through replication-specific configuration parameters and guarantees consistency of the database at the DR site. Alternative methods of achieving disaster recovery at the database (DB) layer, such as backup and restore or storage-based replication, are available; however, they result in higher RTO values.

RPO values for the SAP HANA database depend on factors including the replication methodology (synchronous for high availability, asynchronous for DR replication), backup frequency, backup data retention policies, and savepoint and replication configuration parameters.
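
For reference, asynchronous HSR toward the DR site is typically set up with the hdbnsutil tool, as in the minimal sketch below. The host names, site names, and instance number are illustrative assumptions, not the actual landscape values; the commands are run as the <sid>adm user.

# On the primary site: enable system replication (illustrative site name)
$ hdbnsutil -sr_enable --name=SITE_PRIMARY
# On the DR site: register against the primary using asynchronous replication
$ hdbnsutil -sr_register --remoteHost=hanaprimary --remoteInstance=00 --replicationMode=async --operationMode=logreplay --name=SITE_DR
# Check the replication state from either site
$ hdbnsutil -sr_state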

SAP Solution Manager can be used to monitor the replication status, such that an e-mail alert is triggered if the replication is impacted.

Even though multi-target replication is available as of SAP HANA 2.0 SPS 03 (revision 33), at the time of writing this scenario has not been tested in conjunction with a high-availability cluster. With a successful implementation of multi-target replication, DR maintenance becomes simpler and does not require manual intervention after fail-over scenarios at the primary site.

Application layer – (A)SCS, APP, iSCSI

Azure Site Recovery is used to replicate the non-database components of the SAP system architecture, including (A)SCS, application servers, and Linux cluster fencing components such as the iSCSI target servers (with the exception of the NFS layer, which is discussed below). Azure Site Recovery replicates workloads running on virtual machines (VMs) from the primary site to a secondary location at the storage layer; it does not require the VMs to be running, and the VMs are started only during an actual disaster or a DR drill.

There are two options for setting up a Pacemaker cluster in Azure. You can either use a fencing agent, which takes care of restarting a failed node via the Azure APIs, or you can use a storage-based death (SBD) device. The SBD device requires at least one additional virtual machine that acts as an iSCSI target server and provides the SBD device. These iSCSI target servers can, however, be shared with other Pacemaker clusters. The advantage of using an SBD device is a faster failover time.
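
For reference, attaching the SBD device served by the iSCSI target server typically looks like the sketch below on a SUSE Pacemaker node. The portal IP, IQN, and device path are illustrative assumptions only.

# Discover and log in to the iSCSI target that exposes the SBD disk (illustrative portal and IQN)
$ sudo iscsiadm -m discovery --type=sendtargets --portal=10.0.0.17:3260
$ sudo iscsiadm -m node -T iqn.2006-04.sbd.local:sbd01 -p 10.0.0.17:3260 --login
# Initialize the shared disk as an SBD device and reference it in /etc/sysconfig/sbd
$ sudo sbd -d /dev/disk/by-id/scsi-<device-id> create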

The diagram below describes disaster recovery at the application layer: (A)SCS, application servers, and iSCSI servers use the same architecture to replicate data to the DR region using Azure Site Recovery.

NFS layer – The NFS layer at the primary site uses a cluster with distributed replicated block device (DRBD) for high-availability replication. We evaluated multiple technologies for implementing DR at the NFS layer. Because DRBD provides high availability through disk replication, its configuration is not compatible with Azure Site Recovery, and Site Recovery replication is not supported on these VMs; the options for NFS-layer DR are therefore SUSE geo-cluster, SIOS DataKeeper, or simple VM snapshot backup and restore. Where DRBD is enabled, the most cost-effective way to achieve DR for the NFS layer is simple backup and restore using VM snapshot backups.

Steps for invoking DR or a DR drill

Microsoft Azure Site Recovery helps replicate data to the DR region quickly. In a DR implementation where Site Recovery is not used, it would take more than 24 hours to recover about five systems, so the RTO would be 24 hours or more. However, when Site Recovery is used at the application layer and a database-specific replication method is used at the DB layer, it is possible to reduce the RTO to well below four hours for the same number of systems. The diagram below shows a timeline view of the steps to activate disaster recovery with a four-hour RTO.

Steps for Invoking DR or a DR drill:

DNS changes so VMs use the new IP addresses (see the illustrative command after this list)
Bring up iSCSI – a single VM from ASR-replicated data
Recover databases and resize the VMs to the required capacity
Manually provision NFS – a single VM from snapshot backups
Build application layer VMs from ASR-replicated data
Perform cluster changes
Bring up applications
Validate applications
Release systems
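
As an illustration of the DNS step, repointing an application host name at its DR IP address with the Azure CLI might look like the sketch below. The zone, record name, and IP addresses are hypothetical, not the actual runbook values.

# Repoint the application host name to the DR IP address (illustrative zone, record, and IPs)
$ az network dns record-set a remove-record -g rg-dns -z contoso.internal -n sapapp -a 10.1.0.10
$ az network dns record-set a add-record -g rg-dns -z contoso.internal -n sapapp -a 10.2.0.10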

Recommendations on non-disruptive DR drills

Some businesses cannot afford downtime during DR drills. In such cases, a non-disruptive DR drill is suggested: create an additional DR VNet, isolate it from the network, and carry out the drill with the steps below.

As a prerequisite, build SAP HANA database servers in the isolated VNet and configure SAP HANA system replication.

Disconnect the ExpressRoute circuit to the DR region; disconnecting ExpressRoute simulates abrupt unavailability of the systems in the primary region
As a prerequisite, the backup domain controller must be active and replicating with the primary domain controller until the time of the ExpressRoute disconnection
A DNS server needs to be configured in the isolated DR VNet (the additional DR VNet created for the non-disruptive drill) and kept in standby mode until the time of the ExpressRoute disconnection
Establish a point-to-site VPN tunnel for the administrators and key users involved in the DR test
Manually update the NSGs so that the DR VNet is isolated from the rest of the network (an illustrative rule is shown after this list)
Bring up the applications in the DR region using the DR enablement procedure
Once the test is concluded, reconfigure the NSGs, ExpressRoute, and DR replication
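
A minimal sketch of the kind of NSG rule used for that isolation step is shown below; the resource group, NSG name, rule name, priority, and address prefixes are assumptions.

# Deny all inbound traffic from the corporate address space into the isolated drill VNet (illustrative values)
$ az network nsg rule create -g rg-dr-drill --nsg-name nsg-dr-drill -n deny-corp-inbound --priority 100 --direction Inbound --access Deny --protocol '*' --source-address-prefixes 10.0.0.0/8 --destination-address-prefixes '*' --source-port-ranges '*' --destination-port-ranges '*'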

Involvement of relevant infrastructure and SAP subject matter experts is highly recommended during DR tests.

Note that the non-disruptive DR procedure needs to be executed with extreme caution and should first be validated and tested with non-production systems. Database VM capacity in the DR region should be decided as a trade-off between reserving full capacity and Microsoft's lead time to allocate the capacity required to resize the database VMs.

Next steps

To learn more about architecting an optimal Azure infrastructure for SAP, see the following resources:

SAP on Azure – Designing for security

SAP on Azure – Designing for performance and scalability

SAP on Azure – Designing for availability and recoverability

SAP on Azure – Designing for efficiency and operations

Source: Azure

Azure Cost Management updates – October 2019

Whether you're a new student, thriving startup, or the largest enterprise, you have financial constraints and you need to know what you're spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Azure Cost Management comes in!

We're always looking for ways to learn more about your challenges and how Cost Management can help you better understand where you're accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

Cost Management at Microsoft Ignite 2019
Cost Management update for partners
Major refresh for the Power BI connector
BP implements cloud governance and effective cost management
What's new in Cost Management Labs
Scope selection and navigation optimized for active billing accounts
Improved right-sizing recommendations for virtual machines
New ways to save money with Azure!
New videos
Documentation updates

Let's dig into the details.

 

Cost Management at Microsoft Ignite 2019

Microsoft Ignite 2019 is right around the corner! Come join us in these Azure Cost Management sessions and don't forget to stop by the Azure Cost Management booth on the expo floor to say hi and get some cool swag.

Analyze, manage, and optimize your cloud cost with Azure Cost Management (Session BRK3190, November 5, 3:30-4:15 PM)
Learn how Azure Cost Management can help you gain visibility, drive accountability, and optimize your cloud costs. Special guest, Mars Inc, will show how they use Azure Cost Management to get the most value out of Azure.
Manage and optimize your cloud cost with Azure Cost Management (Session THR2184, November 7, 9:00-9:20 AM)
Can't make the full hour? Join us for a quick overview of Azure Cost Management in this short, theater session.

And if you're still hungry for more, here are a few other sessions you might be interested in:

Get the most out of Microsoft Azure with Azure Advisor (Session THR2181, 20m)
Keeping costs down in Azure (Session AFUN70, 45m)
Make the most of Azure to reduce your cloud spend (Session BRK2140, 45m)
Optimizing cost for Azure solutions (Session THR2364, 20m)
Optimize Azure spend while maximizing cloud potential (Session THR2288, 20m)
Lessons learned in gaining visibility and lowering cost in our Azure environments (Session THR2220, 20m)

 

Cost Management update for partners

November will bring a lot of exciting announcements across Azure and Microsoft as a whole. Perhaps the one we’re most eager to see is the one we mentioned in our July update: the launch of Microsoft Customer Agreement support for partners, where Azure Cost Management will become available to Microsoft Cloud Solution Provider (CSP) partners and customers. CSP partners who have onboarded their customers to Microsoft Customer Agreement will be able to take advantage of all the native cost management tools Microsoft Enterprise Agreement and pay-as-you-go customers have today, but optimized for CSP.

Partners will be able to:

Understand and analyze costs directly in the portal and break them down by customer, subscription, meter, and more
Set up budgets to be notified or to trigger automated actions when costs exceed predefined thresholds (see the sketch after this list)
Review invoiced costs and partner-earned credits associated with customers, subscriptions, and services
Enable Cost Management for customers using pay-as-you-go rates
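
As a companion to the portal experience, budgets can also be created programmatically. The sketch below uses the Azure CLI consumption commands with illustrative values; the budget name, amount, and dates are assumptions, and the exact command surface may differ by CLI version.

# Create a monthly cost budget (illustrative name, amount, and dates)
$ az consumption budget create --budget-name monthly-team-budget --amount 5000 --category cost --time-grain monthly --start-date 2019-11-01 --end-date 2020-10-31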

And once Cost Management has been enabled for CSP customers, they’ll also be able to take advantage of these native tools when managing their subscriptions and resource groups.

All of this and more will be available to CSP partners and customers within the Azure portal and the underlying Resource Manager APIs to enable rich automation and integration to meet your specific needs. And this is just the first of a series of updates to enable Azure Cost Management for partners and their customers. We hope you find these tools valuable as an addition to all the new functionality Microsoft Customer Agreement offers and look forward to delivering even more cost management capabilities next year, including support for existing CSP customers. Stay tuned for the full Microsoft Customer Agreement announcement coming in November!

 

Major refresh for the Power BI connector

Azure Cost Management offers several ways to report on your cost and usage data. You can start with cost analysis in the portal, then download data for offline analysis. If you need more automation, you can use Cost Management APIs or schedule an export to push data to a storage account on a daily basis. But maybe you just need detailed reporting alongside other business reports. This is where the Azure Cost Management connector for Power BI comes in. This month you'll see a few major updates to the Power BI connector.

First and foremost, this is a new connector that replaces both the Azure Consumption Insights connector for Enterprise Agreement accounts and the Azure Cost Management (Beta) connector for Microsoft Customer Agreement accounts. The new connector supports both by accepting either an Enterprise Agreement billing account ID (enrollment number) or Microsoft Customer Agreement billing profile ID.

The next change Enterprise Agreement admins will notice is that you no longer need an API key. Instead, the new connector uses Azure Active Directory. The connector still requires access to the entire billing account, but now a read-only user can set it up without requiring a full admin to create an API key in the Enterprise Agreement portal.

Lastly, you'll also notice a few new tables for reservation details and recommendations. Reservation and Marketplace purchases have been added to the Usage details table as well as a new Usage details amortized table, which includes the same amortized data available in cost analysis. For more details, refer to the Reservation and Marketplace purchases update we announced in June 2019. Those same great changes are now available in Power BI.

Please check out the new connector and let us know what you'd like to see next!

 

BP implements cloud governance and effective cost management

BP has moved a significant portion of its IT resources to the Microsoft Azure cloud platform over the past five years as part of a company-wide digital transformation. To manage and deliver all its Azure resources in the most efficient possible way, BP uses Azure Policy for governance to control access to Azure services. At the same time, the company uses Azure Cost Management to track usage of Azure services. BP has been able to reduce its cloud spend by 40 percent with the insights it has gained.

"We’ve used Azure Cost Management to help cut our cloud costs by 40 percent. Even though our total usage has close to doubled, our total spending is still well below what it used to be."
– John Maio, Microsoft Platform Chief Architect

Learn more about BP's customer story.

 

What's new in Cost Management Labs

With Cost Management Labs, you get a sneak peek at what's coming in Azure Cost Management and can engage directly with us to share feedback and help us better understand how you use the service, so we can deliver more tuned and optimized experiences. Here are a few features you can see in Cost Management Labs:

Get started quicker with the cost analysis Home view
Cost Management offers five built-in views to get started with understanding and drilling into your costs. The Home view gives you quicker access to those views so you get to what you need faster.
New: Scope selection and navigation optimized for active billing accounts – Now available in the portal
Cost Management now prioritizes active billing accounts when selecting a default scope and displaying available scopes in the scope picker.
New: Performance optimizations in cost analysis and dashboard tiles
Whether you're using tiles pinned to the dashboard or the full experience, you'll find cost analysis loads faster than ever.

Of course, that's not all. Every change in Cost Management is available in Cost Management Labs a week before it's in the full Azure portal. We're eager to hear your thoughts and understand what you'd like to see next. What are you waiting for? Try Cost Management Labs today.

 

Scope selection and navigation optimized for active billing accounts

Cost Management is available at every scope above your resources – from a billing account or management group down to the individual resource groups where you manage your apps. You can manage costs in the context of the scope you're interested in or start in Cost Management and switch between scopes without navigating around the portal. Whatever works best for you. This month, we're introducing a few small tweaks to make it even easier to manage costs for your active billing accounts and subscriptions.

For those who start in Cost Management, you may notice the default scope has changed for you. Cost Management now prioritizes active billing accounts and subscriptions over renewed, cancelled, or disabled ones. This will help you get started even quicker without needing to change scope.

When you do change scope, the list of billing accounts may be a little shorter than you last remember. This is because those older billing accounts are now hidden by default, keeping you focused on your active billing accounts. To see your inactive billing accounts, uncheck the "Only show active billing accounts" checkbox at the bottom of the scope picker. This option also allows you to see all subscriptions, regardless of what's been pre-selected in the global subscription filter.

Lastly, when you're looking at all billing accounts and subscriptions, you'll see the inactive ones at the bottom of the list, with their status clearly called out for ultimate transparency and clarity.

We hope these changes will make it easier for you to manage costs across scopes. Let us know what you'd like to see next.

 

Improved right-sizing recommendations for virtual machines

One of the most critical learnings when moving to the cloud is how important it is to size virtual machines for the workload and use auto-scaling capabilities to grow (or shrink) to meet usage demands. In an effort to ensure your virtual machines are using the optimal size, Azure Advisor now factors CPU usage, memory, and network usage into right-sizing recommendations for more accurate recommendations you can trust. Learn more about the change in the latest Advisor update.
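
If you prefer to pull these recommendations programmatically, a quick way to list them is shown below. This is a sketch that assumes the Azure CLI is signed in to the relevant subscription.

# List Advisor cost recommendations, which include VM right-sizing suggestions
$ az advisor recommendation list --category Cost --output table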

 

New ways to save money with Azure

There have been several new cost optimization improvements over the past month. Here are a few you might be interested in:

Save up to 25 percent with the new capacity-based pricing options for Azure Monitor Log Analytics
Only pay for the licenses you use with the new Azure DevOps assignment-based billing option
Take advantage of the free, promotional pricing for data transfer to Azure Front Door through the end of November 2019

 

New videos

For those visual learners out there, here are a couple new videos you should check out:

How to apply budgets to subscriptions (5m)
How to use cost analysis (2.5m)

Subscribe to the Azure Cost Management YouTube channel to stay in the loop with new videos as they're released and let us know what you'd like to see next.

 

Documentation updates

There were a lot of documentation updates. Here are a few you might be interested in:

Lots of updates around Microsoft Partner Agreement for partners – start with the Getting started with your Microsoft Partner Agreement billing account
Added Microsoft Partner Agreement scopes to Understand and work with scopes
Summarized a few of the common uses of cost analysis
Added Microsoft Customer Agreement details for virtual machine reservations

Want to keep an eye on all of the documentation updates? Check out the Cost Management doc change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request.

 

What's next?

These are just a few of the big updates from last month. We're always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow @AzureCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. And, as always, share your ideas and vote up others in the Cost Management feedback forum.
Source: Azure

Understanding Kubernetes Security on Docker Enterprise 3.0

This is a guest post by Javier Ramírez, Docker Captain and IT Architect at Hopla Software. You can follow him on Twitter @frjaraur or on Github.
Docker began including Kubernetes with Docker Enterprise 2.0 last year. The recent 3.0 release includes CNCF Certified Kubernetes 1.14, which has many additional security features. In this blog post, I will review Pod Security Policies and Admission Controllers.
What are Kubernetes Pod Security Policies?
Pod Security Policies are rules created in Kubernetes to control security in pods. A pod will only be scheduled on a Kubernetes cluster if it passes these rules. The rules are defined in the PodSecurityPolicy resource and allow us to manage host namespace and filesystem usage, as well as privileged pod features. We can use the PodSecurityPolicy resource to make fine-grained security configurations, including:

Privileged containers.
Host namespaces (IPC, PID, network, and ports).
Host paths, their permissions, and volume types.
The user and group for container process execution, and setuid capabilities inside the container.
Changes to default container capabilities.
Behaviour of Linux security modules.
Host kernel configurations allowed via sysctl.

The Docker Universal Control Plane (UCP) 3.2 provides two Pod Security Policies by default, which is helpful if you're just getting started with Kubernetes. These default policies allow or prevent execution of privileged containers inside pods. To manage Pod Security Policies, you need administrative privileges on the cluster.
Reviewing and Configuring Pod Security Policies
To review defined Pod Security Policies in a Docker Enterprise Kubernetes cluster, we connect using an administrator’s UCP Bundle:
$ kubectl get PodSecurityPolicies
NAME           PRIV    CAPS   SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES                                                
privileged     true    *      RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            *
unprivileged   false          RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            *
These default policies control the execution of privileged containers inside pods.
Let's create a policy that disallows running a container's main process as root. If you are not familiar with Kubernetes, we can reuse an existing Pod Security Policy as a template:
$ kubectl get psp privileged -o yaml --export > /tmp/mustrunasnonroot.yaml
After removing the non-required values, we have the following Pod Security Policy file, /tmp/mustrunasnonroot.yaml.
Change the runAsUser rule to the MustRunAsNonRoot value:
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp-mustrunasnonroot
spec:
  allowPrivilegeEscalation: false
  allowedHostPaths:
  - pathPrefix: /dev/null
    readOnly: true
  fsGroup:
    rule: RunAsAny
  hostPorts:
  - max: 65535
    min: 0
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - '*'
We create this new policy as an administrator user (note that Pod Security Policies are cluster-scoped resources):
$ kubectl create -f mustrunasnonroot.yaml                      
podsecuritypolicy.extensions/psp-mustrunasnonroot created
Now we can review Pod Security Policies:
$ kubectl get PodSecurityPolicies --all-namespaces
NAME               PRIV    CAPS   SELINUX    RUNASUSER          FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
psp-mustrunasnonroot   true    *      RunAsAny   MustRunAsNonRoot   RunAsAny   RunAsAny   false            *
privileged         true    *      RunAsAny   RunAsAny           RunAsAny   RunAsAny   false            *
unprivileged       false          RunAsAny   RunAsAny           RunAsAny   RunAsAny   false            *
Next, we create a Cluster Role that will allow our test user to use the Pod Security Policy we just created, using role-mustrunasnonroot.yaml.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: role-mustrunasnonroot
rules:
- apiGroups:
  - policy
  resourceNames:
  - psp-mustrunasnonroot
  resources:
  - podsecuritypolicies
  verbs:
  - use
Next, we add a Role Binding to associate the new role with our user (jramirez in this example). We created rb-mustrunasnonroot-jramirez.yaml with the following content:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rb-mustrunasnonroot-jramirez
  namespace: default
roleRef:
  kind: ClusterRole
  name: role-mustrunasnonroot
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: User
  name: jramirez
  namespace: default
We create both the Cluster Role and the Role Binding to allow jramirez to use the defined Pod Security Policy:
$ kubectl create -f role-mustrunasnonroot.yaml
clusterrole.rbac.authorization.k8s.io/role-mustrunasnonroot created

$ kubectl create -f rb-mustrunasnonroot-jramirez.yaml
rolebinding.rbac.authorization.k8s.io/rb-mustrunasnonroot-jramirez created
Now that we've applied this policy, we should delete the default bindings (privileged or unprivileged). In this case, the default ucp:all:privileged-psp-role binding was applied.
$ kubectl delete clusterrolebinding ucp:all:privileged-psp-role
clusterrolebinding.rbac.authorization.k8s.io "ucp:all:privileged-psp-role" deleted
We can verify jramirez's permission to create new pods in the default namespace:
$ kubectl auth can-i create pod --as jramirez
yes
Now we can create a pod using the following manifest from nginx-as-root.yaml:
apiVersion: v1
kind: Pod
metadata:
 name: nginx-as-root
 labels:
   lab: nginx-as-root
spec:
 containers:
 - name: nginx-as-root
   image: nginx:alpine
We'll now need to log in as jramirez, our test non-admin user, using the UCP bundle. We can then test the deployment to see if it works:
$ kubectl create -f nginx-as-root.yaml
pod/nginx-as-root created
We get a CreateContainerConfigError because the image doesn't define a non-root user, so the container would run as root, which the policy blocks.
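We can inspect the pod's events to confirm (the pod name comes from the manifest above):
$ kubectl describe pod nginx-as-root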
Events:
 Type     Reason     Age                    From               Message
 ----     ------     ----                   ----               -------
 Normal   Scheduled  6m9s                   default-scheduler  Successfully assigned default/nginx-as-root to vmee2-5
 Warning  Failed     4m12s (x12 over 6m5s)  kubelet, vmee2-5   Error: container has runAsNonRoot and image will run as root
 Normal   Pulled     54s (x27 over 6m5s)    kubelet, vmee2-5   Container image "nginx:alpine" already present on machine
What can we do to avoid this? As a best practice, we should not allow containers with root permissions. Instead, we can build an Nginx image that runs without root permissions. Here's a lab image that will work for our purposes (it's not production ready):
FROM alpine

RUN addgroup -S nginx \
 && adduser -D -S -h /var/cache/nginx -s /sbin/nologin -G nginx -u 10001 nginx \
 && apk add --update --no-cache nginx \
 && ln -sf /dev/stdout /var/log/nginx/access.log \
 && ln -sf /dev/stderr /var/log/nginx/error.log \
 && mkdir /html

COPY nginx.conf /etc/nginx/nginx.conf

COPY html /html

RUN chown -R nginx:nginx /html

EXPOSE 1080

USER 10001

CMD ["nginx", "-g", "pid /tmp/nginx.pid;daemon off;"]
We created a new nginx user and group to run the Nginx main process (in fact, the nginx package provides a special user, www-data or nginx, depending on the base operating system). We gave the user a specific UID (10001) because we will reference that UID in Kubernetes to specify the user that launches all containers in our nginx-as-nonroot pod.
You can see that we are using a new nginx.conf. Since we are not using root to start Nginx, we can’t use ports below 1024. Consequently, we exposed port 1080 in the Dockerfile. This is the simplest Nginx config required.
worker_processes  1;

events {
   worker_connections  1024;
}

http {
   include       mime.types;
   default_type  application/octet-stream;
   sendfile        on;
   keepalive_timeout  65;
   server {
       listen       1080;
       server_name  localhost;

       location / {
           root   /html;
           index  index.html index.htm;
       }

       error_page   500 502 503 504  /50x.html;
       location = /50x.html {
           root   /html;
       }

   }

}
We added a simple index.html with just one line:
$ cat html/index.html  
It worked!!
And our pod definition has new security context settings:
apiVersion: v1
kind: Pod
metadata:
 name: nginx-as-nonroot
 labels:
   lab: nginx-as-root
spec:
 containers:
 - name: nginx-as-nonroot
   image: frjaraur/non-root-nginx:1.2
   imagePullPolicy: Always
 securityContext:
   runAsUser: 10001
We specified a UID for all containers in the pod; therefore, the Nginx main process will run under UID 10001, the same one specified in the image.
If we don't specify the same UID, we will get permission errors, because the main process will run with the pod-defined user instead and Nginx will not be able to manage its files:
nginx: [alert] could not open error log file: open() "/var/lib/nginx/logs/error.log" failed (13: Permission denied)
2019/10/17 07:36:10 [emerg] 1#1: mkdir() "/var/tmp/nginx/client_body" failed (13: Permission denied)
If we do not specify any security context, the pod will use the image-defined user (UID 10001), and it will work correctly since the process doesn't require root access.
We can go back to the previous situation by deleting the Role Binding we created earlier (rb-mustrunasnonroot-jramirez) and re-creating the default UCP cluster role binding, ucp:all:privileged-psp-role. Create rb-privileged-psp-role.yaml with the following content:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ucp:all:privileged-psp-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: privileged-psp-role
subjects:
- kind: Group
  name: system:authenticated
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: system:serviceaccounts
  apiGroup: rbac.authorization.k8s.io
Then create the ClusterRoleBinding object as an administrator using $ kubectl create -f rb-privileged-psp-role.yaml.
Kubernetes Admission Controllers
Admission Controllers are a Kubernetes cluster feature that manages and enforces default resource values or properties and prevents potential risks or misconfigurations. They act before workload execution, intercepting requests in order to validate or modify their content. Admission Controllers gate user interaction with the cluster API, applying policies to any action on Kubernetes.
We can review which Admission Controllers are defined in Docker Enterprise by looking at the command line used to start the ucp-kube-apiserver container, the Kubernetes API server. On any of our managers, we can inspect the container configuration:
$ docker inspect ucp-kube-apiserver --format 'json {{ .Config.Cmd }}'
json [--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota,PodNodeSelector,PodSecurityPolicy,UCPAuthorization,CheckImageSigning,UCPNodeSelector]


These are the Admission Controllers deployed with Docker Enterprise Kubernetes:

NamespaceLifecycle manages important namespace behavior. It prevents users from removing the default, kube-system, and kube-public namespaces, and it ensures the integrity of namespace deletion by removing all objects in a namespace before the namespace itself is deleted. It also prevents new objects from being created in a namespace that is in the process of being removed (which can take time, because running objects must be removed first).
LimitRanger applies default resource requests to pods that don't specify any and verifies that resources associated with a namespace don't exceed its defined limits.
ServiceAccount associates pods with the default ServiceAccount if they don't specify one, ensures that a ServiceAccount referenced in a pod definition exists, and manages API access for accounts.
PersistentVolumeLabel adds region or zone labels to ensure that the right volumes are mounted for each region or zone.
DefaultStorageClass assigns the default StorageClass to PersistentVolumeClaims that ask for storage without declaring one.
DefaultTolerationSeconds sets default pod toleration values so that pods are evicted from nodes that are not ready or unreachable for more than 300 seconds.
NodeRestriction allows a kubelet to modify only its own Node object and the Pods bound to it.
ResourceQuota ensures that resource quota limits defined within namespaces are not exceeded.
PodNodeSelector provides default node selectors within namespaces.
PodSecurityPolicy reviews Pod Security Policies to determine whether a pod can be executed or not.
UCPAuthorization integrates UCP roles with Kubernetes, preventing deletion of system-required cluster roles and bindings. It also prevents non-admin (non-privileged) accounts from using host-path volumes or privileged containers, even if a Pod Security Policy allows them.
CheckImageSigning prevents execution of pods based on images that have not been signed by authorized users.
UCPNodeSelector ensures that non-system Kubernetes workloads run only on non-mixed UCP hosts.

The last few are designed by Docker to ensure UCP and Kubernetes integration and to improve access control and security. These Admission Controllers are set up during installation and can't be disabled, since doing so could compromise cluster security or break important, if less visible, functionality.
As we've seen, Docker Enterprise 3.0 provides Kubernetes security features by default that complement and improve users' interaction with the cluster while maintaining a highly secure environment out of the box.
To learn more about how you can run Kubernetes with Docker Enterprise:

Read the Kubernetes Made Easy eBook.
Try Play with Kubernetes, powered by Docker.


Source: https://blog.docker.com/feed/

New in Stream Analytics: Machine Learning, online scaling, custom code, and more

Azure Stream Analytics is a fully managed Platform as a Service (PaaS) that supports thousands of mission-critical customer applications powered by real-time insights. Out-of-the-box integration with numerous other Azure services enables developers and data engineers to build high-performance, hot-path data pipelines within minutes. The key tenets of Stream Analytics are ease of use, developer productivity, and enterprise readiness. Today, we're announcing several new features that further enhance these key tenets. Let's take a closer look:

Preview Features

Rollout of these preview features begins November 4th, 2019. Worldwide availability to follow in the weeks after. 

Online scaling

In the past, changing Streaming Units (SUs) allocated for a Stream Analytics job required users to stop and restart. This resulted in extra overhead and latency, even though it was done without any data loss.

With online scaling capability, users will no longer be required to stop their job if they need to change the SU allocation. Users can increase or decrease the SU capacity of a running job without having to stop it. This builds on the customer promise of long-running mission-critical pipelines that Stream Analytics offers today.

Change SUs on a Stream Analytics job while it is running.

C# custom de-serializers

Azure Stream Analytics has always supported input events in JSON, CSV, or AVRO data formats out of the box. However, millions of IoT devices are often programmed to generate data in other formats that encode structured data more efficiently while remaining extensible.

With our current innovations, developers can now leverage the power of Azure Stream Analytics to process data in Protobuf, XML, or any custom format. You can now implement custom de-serializers in C#, which can then be used to de-serialize events received by Azure Stream Analytics.

Extensibility with C# custom code

Azure Stream Analytics traditionally offered SQL language for performing transformations and computations over streams of events. Though there are many powerful built-in functions in the currently supported SQL language, there are instances where a SQL-like language doesn't provide enough flexibility or tooling to tackle complex scenarios.

Developers creating Stream Analytics modules in the cloud or on IoT Edge can now write or reuse custom C# functions and invoke them right in the query through User Defined Functions. This enables scenarios such as complex math calculations, importing custom ML models using ML.NET, and programming custom data imputation logic. Full-fidelity authoring experience is made available in Visual Studio for these functions.

Managed Identity authentication with Power BI

Dynamic dashboarding experience with Power BI is one of the key scenarios that Stream Analytics helps operationalize for thousands of customers worldwide.

Azure Stream Analytics now offers full support for Managed Identity based authentication with Power BI for dynamic dashboarding experience. This helps customers align better with their organizational security goals, deploy their hot-path pipelines using Visual Studio CI/CD tooling, and enables long-running jobs as users will no longer be required to change passwords every 90 days.

While this new feature is going to be immediately available, customers will continue to have the option of using the Azure Active Directory User-based authentication model.

Stream Analytics on Azure Stack

Azure Stream Analytics is supported on Azure Stack via IoT Edge runtime. This enables scenarios where customers are constrained by compliance or other reasons from moving data to the cloud, but at the same time wish to leverage Azure technologies to deliver a hybrid data analytics solution at the Edge.

Rolling out as a preview option beginning January 2020, this will offer customers the ability to analyze ingress data from Event Hubs or IoT Hub on Azure Stack and egress the results to blob storage or a SQL database on the same Azure Stack. You can continue to sign up for the preview of this feature until then.

Debug query steps in Visual Studio

We've heard a lot of user feedback about the challenge of debugging the intermediate row sets defined in a WITH statement in an Azure Stream Analytics query. Users can now easily preview an intermediate row set on a data diagram when doing local testing in Azure Stream Analytics tools for Visual Studio. This feature can greatly help users break down their query and see the result step by step when fixing the code.

Local testing with live data in Visual Studio Code

When developing an Azure Stream Analytics job, developers have expressed a need to connect to live input to visualize the results. This is now available in Azure Stream Analytics tools for Visual Studio Code, a lightweight, free, and cross-platform editor. Developers can test their query against live data on their local machine before submitting the job to Azure. Each testing iteration takes less than two to three seconds on average, resulting in a very efficient development process.

Live Data Testing feature in Visual Studio Code

Private preview for Azure Machine Learning

Real-time scoring with custom Machine Learning models

Azure Stream Analytics now supports high-performance, real-time scoring by leveraging custom pre-trained Machine Learning models managed by the Azure Machine Learning service, and hosted in Azure Kubernetes Service (AKS) or Azure Container Instances (ACI), using a workflow that requires users to write absolutely no code.

Users can build custom models by using any popular Python libraries such as scikit-learn, PyTorch, TensorFlow, and more to train their models anywhere, including Azure Databricks, Azure Machine Learning Compute, and HDInsight. Once the models are deployed in Azure Kubernetes Service or Azure Container Instances clusters, users can use Azure Stream Analytics to surface all endpoints within the job itself. Users simply navigate to the functions blade within an Azure Stream Analytics job, pick the Azure Machine Learning function option, and tie it to one of the deployments in the Azure Machine Learning workspace.

Advanced configurations, such as the number of parallel requests sent to Azure Machine Learning endpoint, will be offered to maximize the performance.

You can sign up for preview of this feature now.

Feedback and engagement

Engage with us and get early glimpses of new features by following us on Twitter at @AzureStreaming.

The Azure Stream Analytics team is highly committed to listening to your feedback and letting the user's voice influence our future investments. We welcome you to join the conversation and make your voice heard via our UserVoice page.
Source: Azure