Automated development of WebSocket APIs in Amazon API Gateway with AWS CloudFormation

You can now use AWS CloudFormation templates to develop WebSocket APIs in Amazon API Gateway. AWS CloudFormation provides a common language for describing and provisioning all the infrastructure resources in your cloud environment across regions and accounts, which simplifies building applications in the cloud.
Source: aws.amazon.com

OpenShift Protects against Nasty Container Exploit

Have you ever done something that was difficult for you to do, but you did it anyway because you cared about the people it would affect? Maybe it was something people honestly forgot you were even doing because you have been doing it for so long? This week I would like to pause and say […]
The post OpenShift Protects against Nasty Container Exploit appeared first on Red Hat OpenShift Blog.
Source: OpenShift

AWS OpsWorks for Chef Automate and AWS OpsWorks for Puppet Enterprise now support AWS CloudFormation

Starting today, you can use AWS CloudFormation templates to create, update, and delete AWS OpsWorks for Chef Automate and AWS OpsWorks for Puppet Enterprise servers in an orderly and predictable way. AWS OpsWorks is a fully managed configuration management service that hosts and scales Chef Automate and Puppet Enterprise servers.
Source: aws.amazon.com

Maximize throughput with repartitioning in Azure Stream Analytics

Customers love Azure Stream Analytics for the ease of analyzing streams of data in motion and the ability to set up a running pipeline within five minutes. Optimizing throughput has always been a challenge when trying to achieve high performance in a scenario that can't be fully parallelized. This occurs when you don't control the partition key of the input stream, or your source “sprays” input across multiple partitions that later need to be merged. You can now use a new extension of Azure Stream Analytics SQL to specify the number of partitions of a stream when reshuffling the data. This new capability unlocks performance and helps maximize throughput in such scenarios.

The new extension of Azure Stream Analytics SQL introduces the keyword INTO, which lets you specify the number of partitions for a stream when reshuffling it with a PARTITION BY statement. This keyword, and the functionality it provides, is key to achieving high throughput in the scenarios above, as well as to better controlling the data streams after a shuffle. To learn more about what's new in Azure Stream Analytics, see "Eight new features in Azure Stream Analytics."

What is repartitioning?

Repartitioning, or reshuffling, is required when processing data on a stream that is not sharded according to the natural input scheme, such as the PartitionId in the Event Hubs case. This can happen when you don't control the routing of the event generators, or when you need to scale out your flow due to resource constraints. After repartitioning, each shard can be processed independently of the others and can make progress without additional synchronization between shards. This allows you to linearly scale out your streaming pipeline.

You can specify the number of partitions the stream should be split into by using a newly introduced keyword INTO after a PARTITION BY statement, with a strictly positive integer that indicates the partition count. Please see below for an example:

SELECT * INTO [output] FROM [input] PARTITION BY DeviceID INTO 10

The query above reads from the input, regardless of whether it is naturally partitioned, repartitions the stream into ten partitions keyed on the DeviceID dimension, and flushes the data to the output. A hash of the dimension value (DeviceID) determines which partition receives each substream. The data is flushed independently for each partitioned stream, assuming the output supports partitioned writes and either has 10 partitions or can handle an arbitrary number of them.

[Diagram: data flow with the repartition in place]

Why and how to use repartitioning?

Use repartitioning to optimize the heavy parts of processing: the data is processed independently and simultaneously on disjoint subsets, even when it is not naturally partitioned properly on input. The partitioning scheme is carried forward as long as the partition key stays the same, so downstream steps keyed on the same dimension stay partitioned. Please see below for an example.
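A minimal sketch of this pattern, assuming hypothetical stream names ([input], [output]), a DeviceID partition key, and a Temperature field (none of which come from the original article): the stream is repartitioned first, and the heavy aggregation then runs independently on each of the 10 substreams.

-- Hypothetical sketch: repartition into 10 substreams keyed on DeviceID,
-- then aggregate each substream independently over a one-minute window.
WITH repartitioned AS (
    SELECT * FROM [input] PARTITION BY DeviceID INTO 10
)
SELECT DeviceID, AVG(Temperature) AS AvgTemperature
INTO [output]
FROM repartitioned
GROUP BY DeviceID, TumblingWindow(minute, 1)

Because the GROUP BY key matches the partition key, the windowed aggregation requires no cross-partition synchronization.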

Experiment and observe the resource utilization of your job to determine the exact number of partitions needed. Remember that the Streaming Unit (SU) count, which is the unit of scale for Azure Stream Analytics, must be adjusted so that enough physical resources are available to the job to fit the partitioned flow. In general, six SUs per partition is a good number; for example, a flow repartitioned into 10 partitions would be assigned 60 SUs. If insufficient resources are assigned to the job, the system will only apply the repartition if it benefits the job.

When joining two explicitly repartitioned streams, both streams must have the same partition key and partition count. The outcome is a stream with the same partition scheme. Please see below for an example:

WITH step1 AS (SELECT * FROM [input1] PARTITION BY DeviceID INTO 10),
step2 AS (SELECT * FROM [input2] PARTITION BY DeviceID INTO 10)

SELECT * INTO [output] FROM step1 PARTITION BY DeviceID UNION step2 PARTITION BY DeviceID

Specifying a mismatched partition count or partition key yields a compilation error when the job is created, as illustrated below.
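For example, this hypothetical variant, in which input2 is repartitioned into 8 partitions instead of 10, would be rejected at compile time because the partition counts of the two joined steps differ:

-- Fails to compile: step1 has 10 partitions, step2 has 8.
WITH step1 AS (SELECT * FROM [input1] PARTITION BY DeviceID INTO 10),
step2 AS (SELECT * FROM [input2] PARTITION BY DeviceID INTO 8)

SELECT * INTO [output] FROM step1 PARTITION BY DeviceID UNION step2 PARTITION BY DeviceID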

When writing a partitioned stream to an output, it works best if the output scheme matches the stream scheme by key and count, so that each substream can be flushed independently of the others. Otherwise, the stream must be merged, and possibly repartitioned again by a different scheme, before flushing. This adds to the overall latency of the processing as well as to resource utilization, and should be avoided.

For use cases with SQL output, use explicit repartitioning to match the partition count that maximizes throughput. Since SQL output works best with eight writers, repartitioning the flow to eight partitions before flushing, or somewhere further upstream, may prove beneficial for the job's performance, as sketched below. For more information, please refer to the documentation, "Azure Stream Analytics output to Azure SQL Database."
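A minimal sketch, again with hypothetical stream and key names, that repartitions the flow to eight substreams immediately before the SQL output:

-- Hypothetical sketch: match the eight writers that SQL output handles best.
SELECT * INTO [sql-output] FROM [input] PARTITION BY DeviceID INTO 8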

Next steps

Get started with Azure Stream Analytics and have a look at our documentation to understand how to leverage query parallelization.

For any question, join the conversation on Stack Overflow.
Source: Azure

AWS AppSync is now available in the EU (London) region

AWS AppSync is a serverless backend service for web, mobile, and enterprise applications that supports real-time data synchronization and enterprise-grade offline capabilities. AWS AppSync simplifies data access, data processing, and data synchronization across multiple data sources, such as Amazon DynamoDB, Amazon Elasticsearch Service, AWS Lambda, Amazon RDS, and any HTTP data source. It is built on GraphQL, an open standard that lets applications request, change, and subscribe to exactly the data they need in a single network request.
With today's launch, AWS AppSync is now available in 11 AWS regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), EU (Frankfurt), EU (Ireland), and EU (London).
To learn more, visit the AWS AppSync webpage.
Source: aws.amazon.com

Docker Security Update: CVE-2019-5736 and Container Security Best Practices

On Monday, February 11, Docker released an update to fix a privilege escalation vulnerability (CVE-2019-5736) in runC, the reference implementation of the Open Container Initiative (OCI) runtime specification used by Docker Engine and containerd. This vulnerability makes it possible for a malicious actor who has created a specially crafted container image to gain administrative privileges on the host. Docker engineering worked with the runC maintainers and the OCI to issue a patch for this vulnerability.
Docker recommends immediately applying the update to avoid any potential security threats. For Docker Engine Community, this means updating to 18.09.2 or 18.06.2. For Docker Engine Enterprise, this means updating to 18.09.2, 18.03.1-ee-6, or 17.06.2-ee-19. Read the release notes before applying the update, as they contain specific instructions for Ubuntu and RHEL operating systems.
Summary of the Docker Engine versions that address the vulnerability:

Docker Engine Community    Docker Engine Enterprise
18.09.2                    18.09.2
18.06.2                    18.03.1-ee-6
                           17.06.2-ee-19

To better protect the container images run by Docker Engine, here are some additional recommendations and best practices:
Use Docker Official Images
Official Images are a curated set of Docker repositories hosted on Docker Hub that are designed to:

Provide essential base OS repositories (for example, ubuntu, centos) that serve as the starting point for the majority of users.
Provide drop-in solutions for popular programming language runtimes, data stores and other services.
Exemplify Dockerfile best practices and provide clear documentation to serve as a reference for other Dockerfile authors. Specific to this vulnerability, running containers as a non-privileged user, as outlined in the Dockerfile best-practices section on USER, can mitigate the issue.
Ensure that security updates are applied in a timely manner. Security updates should be applied immediately, and users should then rebuild and publish their images. This is particularly important because many Official Images are among the most popular on Docker Hub.

Docker sponsors a dedicated team that is responsible for reviewing and publishing all content in the Official Images. This team works in collaboration with upstream software maintainers, security experts, and the broader Docker community to ensure the security of these images.
Use Docker Certified Containers
The Docker Enterprise container platform enables you to ensure the integrity of your images. Security is not a static, one-time activity but a continuous process that follows the application across the different stages of the application pipeline. To prevent systems from being compromised, Docker Enterprise provides integrated security across the supply chain. Docker Enterprise users that follow security best practices and run trusted code based on Docker Certified images can be assured that their software images:

Have been tested and are supported on the Docker Enterprise container platform by verified publishers
Adhere to Docker's container best practices for building Dockerfiles/images
Pass a functional API test suite
Complete a vulnerability scanning assessment

Docker Certification gives users and enterprises a trusted way to run more technology in containers, with support from both Docker and the publisher. Customers can quickly identify certified content by its visible badges and be confident that it was built with best practices and tested to operate smoothly on Docker Enterprise.
Leverage Docker Enterprise Features for Additional Protection
Docker Enterprise provides additional layers of protection across the software supply chain through content validation and runtime application security. This includes role-based access control (RBAC) for flexible and granular access privileges across multiple teams to determine who in the organization can run a container. Administrators can also set a policy restricting the ability for any user to run a privileged container on a cluster.
Additionally, Docker Content Trust enables cryptographic digital signing to confirm container image provenance and authenticity, in effect providing your operations team with details about the author of an application and confirming that it has not been tampered with or modified in any way. With policy enforcement at runtime, Docker Enterprise ensures that only container images signed by trusted teams can run in a cluster.
For more information:
Find out how to upgrade Docker Engine – Enterprise
Learn how to upgrade Docker Engine – Community
Get more information on Docker Enterprise
Learn more about Docker Security.

The post Docker Security Update: CVE-2019-5736 and Container Security Best Practices appeared first on Docker Blog.
Source: https://blog.docker.com/feed/