How Azure Security Center helps you protect your environment from new vulnerabilities

The disclosure of a vulnerability (CVE-2019-5736) in the open-source software (OSS) container runtime runc was recently announced. This vulnerability can allow an attacker to gain root-level code execution on a host. runc is the underlying container runtime beneath many popular container platforms.

Azure Security Center can help you detect vulnerable resources in your environment, whether they run in Microsoft Azure, on-premises, or in other clouds. It can also detect that an exploitation has occurred and alert you.

Azure Security Center offers several methods that can be applied to mitigate or detect malicious behavior:

Strengthen security posture – Azure Security Center periodically analyzes the security state of your resources. When it identifies potential security vulnerabilities, it creates recommendations that guide you through the process of configuring the necessary controls. We plan to add recommendations for cases where unpatched resources are detected. You can find more information about strengthening security posture in our documentation, “Managing security recommendations in Azure Security Center.”
File Integrity Monitoring (FIM) – This method examines the files and registry keys of operating systems, application software, and more for changes that might indicate an attack. With FIM enabled, Azure Security Center can detect changes in monitored directories that can indicate malicious activity. Guidance on how to enable FIM and add file tracking on Linux machines can be found in our documentation, “File Integrity Monitoring in Azure Security Center.”
Security alerts – Azure Security Center detects suspicious activities on Linux machines using the auditd framework. Collected records flow into a threat detection pipeline and surface as an alert when malicious activity is detected; a minimal sketch of the kind of audit rule involved follows this list. Security alert coverage will soon include new analytics to identify machines compromised through the runc vulnerability. You can find more information about security alerts in our documentation, “Azure Security Center detection capabilities.”
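As an illustration only (this sketch is not from the original post, and the runc path varies by distribution), the kind of file-level signal that FIM and the alert pipeline consume can be produced by asking auditd to watch the runc binary for writes and attribute changes:

    # Hypothetical rule: watch the runc binary for writes (w) and attribute changes (a).
    # /usr/bin/runc is an assumed location; confirm with `which runc` on your host.
    auditctl -w /usr/bin/runc -p wa -k runc-integrity

    # List active rules, then search collected events by the key defined above.
    auditctl -l
    ausearch -k runc-integrity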

To practice good security hygiene, it is recommended that you keep your environment configured with the latest updates from your distribution provider. System updates can be applied through Azure Security Center; for more guidance, visit our documentation, “Apply system updates in Azure Security Center.”
Source: Azure

Update 19.02 for Azure Sphere public preview now available

The Azure Sphere 19.02 release is available today. In our second quarterly release after public preview, our focus is on broader enablement of device capabilities, reducing your time to market with new reference solutions, and continuing to prioritize features based on feedback from organizations building with Azure Sphere.

Today Azure Sphere’s hardware offerings are centered around our first Azure Sphere certified MCU, the MediaTek MT3620. Expect to see additional silicon announcements in the near future, as we work to expand our silicon and hardware ecosystems to enable additional technical scenarios and ultimately deliver more choice to manufacturers.

Our 19.02 release focuses on broadening what you can accomplish with MT3620 solutions. With this release, organizations will be able to use new peripheral classes (I2C, SPI) from the A7 core. We continue to build on the private Ethernet functionality by adding new platform support for critical networking services (DHCP and SNTP) that enable a set of brownfield deployment scenarios. Additionally, by leveraging our new reference solutions and hardware modules, device builders can now bring the security of Azure Sphere to products even faster than before.

To build applications that leverage this new functionality, you will need to ensure that you have installed the latest Azure Sphere SDK Preview for Visual Studio. All Wi-Fi connected devices will automatically receive an updated Azure Sphere OS.

New connectivity options – This release supports DHCP and SNTP servers in private LAN configurations. You can optionally enable these services when connecting an MT3620 to a private Ethernet network.
Broader device enablement – Beta APIs now enable hardware support for both I2C and SPI peripherals. Additionally, we have enabled broader configurability options for UART.
More space for applications – The MT3620 now supports 1 MB of space dedicated to your production application binaries.
Reducing time to market of MT3620-enabled products – To reduce the complexity of getting started with the many aspects of Azure Sphere, we have added several samples and reference solutions to our GitHub samples repo:

Private Ethernet – Demonstrates how to wire the supported Microchip part and provides the software to begin developing a private Ethernet-based solution.
Real-time clock – Demonstrates how to set, manage, and integrate the MT3620 real time clock with your applications.
Bluetooth command and control – Demonstrates how to enable command and control scenarios by extending the Bluetooth Wi-Fi pairing solution released in 18.11.
Better security options for BLE – Extends the Bluetooth reference solution to support a PIN between the paired device and Azure Sphere.
Azure IoT – Demonstrates how to use Azure Sphere with either Azure IoT Central or an Azure IoT Hub.
CMake preview – Provides an early preview of CMake as an alternative for building Azure Sphere applications both inside and outside Visual Studio. This limited preview lets customers begin testing the use of existing assets in Azure Sphere development.

OS update protection – The Azure Sphere OS now protects against a set of update scenarios that would cause the device to fail to boot. The OS detects and recovers from these scenarios by automatically and atomically rolling back the device OS to its last known good configuration.
Latest Azure IoT SDK – The Azure Sphere OS has updated its Azure IoT SDK to the LTS Oct 2018 version.

All Wi-Fi connected devices that were previously updated to the 18.11 release will automatically receive the 19.02 Azure Sphere OS release. As a reminder, if your device is still running a release older than 18.11, it will be unable to authenticate to an Azure IoT Hub via DPS or receive OTA updates. See the Release Notes for how to proceed in that case.

As always, continued thanks to our preview customers for your comments and suggestions. Microsoft engineers and Azure Sphere community experts will respond to product-related questions on our MSDN forum and development questions on Stack Overflow. We also welcome product feedback and new feature requests.

Visit the Azure Sphere website for documentation and more information on how to get started with your Azure Sphere development kit. You can also email us at nextinfo@microsoft.com to kick off an Azure Sphere engagement with your Microsoft representative.
Source: Azure

Learn how to build with Azure IoT: Upcoming IoT Deep Dive events

Microsoft IoT Show, the place to go to hear about the latest announcements, tech talks, and technical demos, is starting a new interactive, live-streaming event and technical video series called IoT Deep Dive!

Each IoT Deep Dive will bring in a set of IoT experts, like Joseph Biron, CTO of IoT at PTC, and Chafia Aouissi, Senior Program Manager for Azure IoT, who join the first IoT Deep Dive, “Building End to End Industrial Solutions with PTC ThingWorx and Azure IoT.” Join us on February 20, 2019 from 9:00 AM – 9:45 AM Pacific Standard Time to walk through end-to-end IoT solutions, technical demos, and best practices.

Come learn and ask questions about how to build IoT solutions, with deep dives into the intelligent edge, tooling, DevOps, security, asset tracking, and other top-requested technical topics. These sessions are perfect for developers, architects, and anyone who is ready to accelerate from proof of concept to production, or who needs best-practice tips while building their solutions.

Upcoming events

IoT Deep Dive Live: Building End to End Industrial Solutions with PTC ThingWorx and Azure

PTC ThingWorx and Microsoft Azure IoT are proven industrial innovation solutions with a market-leading IoT cloud infrastructure. Sitting on top of Azure IoT, ThingWorx enables robust and rapid creation of IoT applications and solutions that make the most of Azure services such as IoT Hub. Join the event to learn how to build an end-to-end industrial solution. You can set up a reminder to join the live event.

When: February 20, 2019 at 9:00 AM – 9:45 AM Pacific Standard Time | Level 300
Learn about: ThingWorx, Vuforia Studio, Azure IoT, and Dynamics 365
Special guests:

Joseph Biron, Chief Technology Officer of IoT, PTC
Neal Hagermoser, Global ThingWorx COE Lead, PTC
Chafia Aouissi, Senior Program Manager, Azure IoT
Host: Pamela Cortez, Program Manager, Azure IoT

Industries and use cases: Smart connected product manufacturers in verticals including automotive, industrial equipment, aerospace, electronics, and high tech.

Location Intelligence for Transportation with Azure Maps 

Come learn how to use Azure Maps to provide location intelligence in different areas of transportation such as fleet management, asset tracking, and logistics.

When: March 6, 2019 at 9:00 AM – 9:45 AM Pacific Standard Time | Level 300
Learn about: Azure Maps, Azure IoT Hub, Azure IoT Central, and Azure Event Grid
Guest speakers:

Ricky Brundritt, Senior Program Manager, Azure IoT
Pamela Cortez, Program Manager, Azure IoT

Industries and use cases: Fleet management, logistics, asset management, and IoT

Submit questions before the events on the Microsoft IoT tech community or during the IoT Deep Dive live event itself! All videos will be hosted on Microsoft IoT Show after the live event.
Source: Azure

We’ve Got ❤️ For Our First Batch of DockerCon Speakers

As the world celebrates Valentine’s Day, at Docker, we are celebrating what makes our heart all aflutter – gearing up for an amazing DockerCon with the individuals and organizations that make up the Docker community. With that, we are thrilled to announce our first speakers for DockerCon San Francisco, April 29 – May 2.
DockerCon fan favorites like Liz Rice, Bret Fisher and Don Bauer are returning to the conference to share new insights and experiences to help you better learn how to containerize.
And we are excited to welcome new speakers to the DockerCon family including Ana Medina, Tommy Hamilton and Ian Coldwater to talk chaos engineering, building your production container platform stack and orchestration with Docker Swarm and Kubernetes. 

And we’re just getting started! This year DockerCon is going to bring more technical deep dives, practical how-to’s, customer case studies and inspirational stories. Stay tuned as we announce the full speaker line up this month.
Source: https://blog.docker.com/feed/

Under the hood: Performance, scale, security for cloud analytics with ADLS Gen2

On February 7, 2019, we announced the general availability of Azure Data Lake Storage (ADLS) Gen2. Azure is now the only cloud provider to offer a no-compromise cloud storage solution that is fast, secure, massively scalable, cost-effective, and fully capable of running the most demanding production workloads. In this blog post we’ll take a closer look at the technical foundation of ADLS that will power the end-to-end analytics scenarios our customers demand.

ADLS is the only cloud storage service that is purpose-built for big data analytics. It is designed to integrate with a broad range of analytics frameworks to enable a true enterprise data lake, maximizes performance via true filesystem semantics, scales to meet the needs of the most demanding analytics workloads, is priced at cloud object storage rates, and is flexible enough to support a broad range of workloads, so you are not required to create silos for your data.

A foundational part of the platform

The Azure Analytics Platform not only features a great data lake for storing your data with ADLS, but is also rich with additional services and a vibrant ecosystem that allow you to succeed with your end-to-end analytics pipelines.

Azure features services such as HDInsight and Azure Databricks for processing data; Azure Data Factory for ingestion and orchestration; and Azure SQL Data Warehouse, Azure Analysis Services, and Power BI for consuming your data in a pattern known as the Modern Data Warehouse, allowing you to maximize the benefit of your enterprise data lake.

Additionally, an ecosystem of popular analytics tools and frameworks integrate with ADLS so that you can build the solution that meets your needs.

“Data management and data governance is top of mind for customers implementing cloud analytics solutions. The Azure Data Lake Storage Gen2 team have been fantastic partners ensuring tight integration to provide a best-in-class customer experience as our customers adopt ADLS Gen2.”

– Ronen Schwartz, Senior Vice President & General Manager of Data Integration and Cloud Integration, Informatica

"WANDisco’s Fusion data replication technology combined with Azure Data Lake Storage Gen2 provides our customers a compelling LiveData solution for hybrid analytics by enabling easy access to Azure Data Services without imposing any downtime or disruption to on premise operations.”

– David Richards, Co-Founder and CEO, WANdisco

“Microsoft continues to innovate in providing scalable, secure infrastructure which go hand in hand with Cloudera’s mission of delivering on the Enterprise Data Cloud. We are very pleased to see Azure Data Lake Storage Gen2 roll out globally. Our mutual customers can take advantage of the simplicity of administration this storage option provides when combined with our analytics platform.”

– Vikram Makhija, General Manager for Cloud, Cloudera

Performance

Performance is the number one driver of value for big data analytics workloads. The reason is simple: the more performant the storage layer, the less compute (the expensive part!) is required to extract the value from your data. Therefore, not only do you gain a competitive advantage by achieving insights sooner, you do so at a significantly reduced cost.

“We saw a 40 percent performance improvement and a significant reduction of our storage footprint after testing one of our market risk analytics workflows at Zurich’s Investment Management on Azure Data Lake Storage Gen2.”

– Valerio Bürker, Program Manager Investment Information Solutions, Zurich Insurance

Let’s look at how ADLS achieves this level of performance. The most notable feature is the Hierarchical Namespace (HNS), which allows this massively scalable storage service to arrange your data like a filesystem, with a hierarchy of directories. All analytics frameworks (e.g., Spark, Hive) are built with the implicit assumption that the underlying storage service is a hierarchical filesystem. This is most obvious when data is written to temporary directories that are renamed at the completion of the job. For traditional cloud-based object stores, that rename is an O(n) operation, requiring n copies and deletes, which dramatically impacts performance. In ADLS the rename is a single atomic metadata operation, as the sketch below illustrates.
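As a hedged illustration (the account, filesystem, and paths below are invented), the commit step of a typical job boils down to a directory rename. Against an HNS-enabled ADLS Gen2 account, the Hadoop shell rename below completes as one atomic metadata operation rather than n copies and deletes:

    # Hypothetical names; abfss:// is the secure URI scheme of the ABFS driver.
    hadoop fs -mv \
        abfss://data@myaccount.dfs.core.windows.net/job/_temporary/attempt_0 \
        abfss://data@myaccount.dfs.core.windows.net/job/output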

The other contributor to performance is the Azure Blob Filesystem (ABFS) driver. This driver takes advantage of the fact that the ADLS endpoint is optimized for big data analytics workloads. These workloads are most sensitive to maximizing throughput via large IO operations, as distinct from other general-purpose cloud stores that must optimize for a much larger range of IO operations. This level of optimization leads to significant IO performance improvements that directly benefit the performance and cost of running big data analytics workloads on Azure. The ABFS driver is contributed as part of Apache Hadoop® and is available in HDInsight and Azure Databricks, as well as other commercial Hadoop distributions.

Scalable

Scalability for big data analytics is also critically important. There’s no point having a solution that works great for a few TBs of data but collapses as the data size inevitably grows. The growth rate of big data analytics projects tends to be non-linear as a consequence of more diverse and accessible sources of data. Most projects benefit from the principle that the more data you have, the better the insights. However, this leads to design challenges: the system must scale at the same rate as the growth of the data. One of the great design pivots of big data analytics frameworks, such as Hadoop and Spark, is that they scale horizontally. As the data and/or processing grows, you can simply add more nodes to your cluster and the processing continues unabated. This, however, relies on the storage layer scaling linearly as well.

This is where the value of building ADLS on top of the existing Azure Blob service shines. The exabyte scale of that service now applies to ADLS, ensuring that no limits exist on the amount of data to be stored or accessed. In practical terms, customers can store hundreds of petabytes of data, which can be accessed with throughput that satisfies the most demanding workloads.

Secure

For customers wanting to build a data lake to serve the entire enterprise, security is no lightweight consideration. There are multiple aspects to providing end to end security for your data lake:

Authentication – Azure Active Directory OAuth bearer tokens provide an industry-standard authentication mechanism, backed by the same identity service used throughout Azure and Office 365.
Access control – A combination of Azure Role-Based Access Control (RBAC) and POSIX-compliant Access Control Lists (ACLs) provides flexible and scalable access control. Significantly, the POSIX ACLs are the same mechanism used within Hadoop; see the sketch after this list.
Encryption at rest and in transit – Data stored in ADLS is encrypted using either a system-supplied or customer-managed key. Additionally, data is encrypted using TLS 1.2 whilst in transit.
Network transport security – Given that ADLS exposes endpoints on the public Internet, transport-level protections are provided via Storage Firewalls that securely restrict where the data may be accessed from, enforced at the packet level.
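Because the POSIX ACLs are the Hadoop mechanism, familiar Hadoop tooling applies to the data lake. As a minimal sketch (the principal, account, and path are invented), you could grant a user read and execute access to a directory from any cluster with the ABFS driver configured:

    # Hypothetical principal and path; requires an HNS-enabled ADLS Gen2 account.
    hdfs dfs -setfacl -m user:alice:r-x \
        abfss://data@myaccount.dfs.core.windows.net/curated

    # Inspect the resulting ACL.
    hdfs dfs -getfacl abfss://data@myaccount.dfs.core.windows.net/curated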

Tight integration with analytics frameworks results in an end to end secure pipeline. The HDInsight Enterprise Security Package makes end-user authentication flow through the cluster and to the data in the data lake.

Get started today!

We’re excited for you to try Azure Data Lake Storage! Get started today and let us know your feedback.

Get started with Azure Data Lake Storage.
Watch the video, “Create your first ADLS Gen2 Data Lake.”
Read the general availability announcement.
Learn how ADLS improves the Azure analytics platform in the blog post, “Individually great, collectively unmatched: Announcing updates to 3 great Azure Data Services.”
Refer to the Azure Data Lake Storage documentation.
Learn how to deploy an HDInsight cluster with ADLS.
Deploy an Azure Databricks workspace with ADLS.
Ingest data into ADLS using Azure Data Factory.

Source: Azure

Introducing scheduled snapshots for Compute Engine persistent disk

From web hosting to databases, workloads running on Compute Engine need a reliable, convenient, and automatic way to create periodic snapshots of the disks attached to VM instances. We are excited to announce that starting today, scheduled snapshots are available in beta. This feature lets you create automated snapshots, as well as manage snapshot retention. It is designed to reduce errors and save time, so you can focus on initiatives that create value for your business.

You can use this feature by first defining a snapshot schedule, which supports frequencies in hours, days, and weeks. For example, you can create a schedule that says “Create a snapshot every six hours,” or “Create a snapshot every Monday, Wednesday, and Friday of each week.” Scheduled snapshots also mean you no longer need to manage snapshot cleanup yourself: you define the retention policy within the same schedule, and the system automatically deletes snapshots based on that policy.

A snapshot schedule can be applied to a single disk or to multiple disks within the same region, so you can create scheduled snapshots at scale. You can also use the latest storage location feature for snapshots when defining snapshot schedules.

Using the scheduled snapshots feature

You can create scheduled snapshots via the API, the CLI (gcloud), and the GCP Developer Console. In gcloud, for example, you can create a snapshot schedule in the europe-west1 region that generates snapshots every six hours and deletes them after 15 days, attach that “hourly-schedule” to the existing disk d1 in the europe-west1-b zone, or specify a snapshot schedule while creating a new disk; a sketch of these commands appears at the end of this section.

You can also create and manage your automated snapshots in the Developer Console. Go to the “Snapshots” tab in Compute Engine, where you can create and manage schedules in the “Snapshot schedules” tab via the “Create snapshot schedule” button at the top. Then navigate to the “Disks” tab to attach snapshot schedules to one or more disks: you can attach schedules to existing disks in the disk details view, or apply a schedule while creating a new disk.

To learn more about this feature and other best practices for managing your VMs, check out our talk from Next ’18. With the scheduled snapshots feature, you can focus on building creative applications without inventing your own tools for disk snapshots. Try it today.
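As a sketch of the gcloud workflow described above (the flag spelling follows current gcloud releases and may differ from the beta CLI at the time of writing):

    # Create a schedule in europe-west1 that snapshots every 6 hours
    # and retains each snapshot for 15 days.
    gcloud compute resource-policies create snapshot-schedule hourly-schedule \
        --region=europe-west1 \
        --hourly-schedule=6 \
        --start-time=04:00 \
        --max-retention-days=15

    # Attach the schedule to the existing disk d1 in europe-west1-b.
    gcloud compute disks add-resource-policies d1 \
        --zone=europe-west1-b \
        --resource-policies=hourly-schedule

    # Or specify the schedule while creating a new disk.
    gcloud compute disks create d2 \
        --zone=europe-west1-b \
        --resource-policies=hourly-schedule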
Source: Google Cloud Platform

Monitor at scale in Azure Monitor with multi-resource metric alerts

Our customers rely on Azure to run large-scale applications and services critical to their business. To run services at scale, you need to set up alerts to proactively detect, notify, and remediate issues before they affect your customers. However, configuring alerts can be hard when you have a complex, dynamic environment with lots of moving parts.

Today, we are excited to release multi-resource support for metric alerts in Azure Monitor to help you set up critical alerts at scale. Metric alerts in Azure Monitor work on a host of multi-dimensional platform and custom metrics, and notify you when the metric breaches a threshold that was either defined by you or detected automatically.

With this new feature, you will be able to set up a single metric alert rule that monitors:

A list of virtual machines in one Azure region
All virtual machines in one or more resource groups in one Azure region
All virtual machines in a subscription in one Azure region

Benefits of using multi-resource metric alerts

Get alerting coverage faster: With a small number of rules, you can monitor all the virtual machines in your subscription. Multi-resource rules set at subscription or resource group level can automatically monitor new virtual machines deployed to the same resource group/subscription (in the same Azure region). Once you have such a rule created, you can deploy hundreds of virtual machines all monitored from day one without any additional effort.
Much smaller number of rules to manage: You no longer need to have a metric alert for every resource that you want to monitor.
You still get resource level notifications: You still get granular notifications per impacted resource, so you always have the information you need to diagnose issues.
An even simpler at-scale experience: Using Dynamic Thresholds along with multi-resource metric alerts, you can monitor each virtual machine without the need to manually identify and set thresholds that fit all the selected resources. The dynamic condition type applies tailored thresholds based on advanced machine learning (ML) capabilities that learn each metric's historical behavior and identify patterns and anomalies.

Setting up a multi-resource metric alert rule

When you set up a new metric alert rule in the alert rule creation experience, use the checkboxes to select all the virtual machines you want the rule to be applied to. Please note that all the resources must be in the same Azure region.

You can select one or more resource groups, or select a whole subscription to apply the rule to all virtual machines in the subscription.

If you select all virtual machines in your subscription, or one or more resource groups, you get the option to auto-grow your selection. Selecting this option means the alert rule will automatically monitor any new virtual machines that are deployed to this subscription or resource group. With this option selected, you don’t need to create a new rule or edit an existing rule whenever a new virtual machine is deployed.

You can also use Azure Resource Manager templates to deploy multi-resource metric alerts. Learn more in our documentation, “Understand how metric alerts work in Azure Monitor.”
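As an illustrative sketch (the resource names are invented, and the flags assume a recent Azure CLI rather than the portal experience described above), a single rule scoped to a resource group can cover every virtual machine in it:

    # Hypothetical names; --target-resource-type and --target-resource-region are
    # required when the scope is a resource group or subscription, not a single resource.
    az monitor metrics alert create \
        --name high-cpu-all-vms \
        --resource-group my-rg \
        --scopes /subscriptions/<subscription-id>/resourceGroups/my-rg \
        --target-resource-type Microsoft.Compute/virtualMachines \
        --target-resource-region westeurope \
        --condition "avg Percentage CPU > 90" \
        --window-size 5m \
        --evaluation-frequency 1m \
        --description "CPU above 90 percent on any VM in my-rg"

In newer CLI versions the static condition can be swapped for a Dynamic Thresholds one (for example, "avg Percentage CPU > dynamic Medium 2 of 4"), though the exact condition grammar depends on the CLI version.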

Pricing

The pricing for metric alert rules is based on the number of metric time series monitored by an alert rule. The same pricing applies to multi-resource metric alert rules.

Wrapping up

We are excited about this new capability, which makes configuring and managing metric alert rules at scale easier. This functionality is currently supported only for virtual machines, with support for other resource types coming soon. We would love to hear what you think about it and what improvements we should make. Contact us at azurealertsfeedback@microsoft.com.
Source: Azure