Running Cognitive Services on Azure IoT Edge

This blog post is co-authored by Emmanuel Bertrand, Senior Program Manager, Azure IoT.

We recently announced Azure Cognitive Services in containers for Computer Vision, Face, Text Analytics, and Language Understanding. You can read more about Azure Cognitive Services containers in the blog post “Bringing AI to the edge.”

Today, we are happy to announce support for running the Azure Cognitive Services containers for Text Analytics and Language Understanding on edge devices with Azure IoT Edge. This means that all your workloads can run locally, where your data is being generated, while keeping the simplicity of the cloud to manage them remotely, securely, and at scale.

Whether you lack a reliable internet connection, want to save on bandwidth costs, have very low latency requirements, or deal with sensitive data that needs to be analyzed on-site, Azure IoT Edge with the Cognitive Services containers gives you consistency with the cloud: you can run your analysis on-site and use a single pane of glass to operate all your sites.

These container images are directly available to try as IoT Edge modules on the Azure Marketplace:

Key Phrase Extraction extracts key talking points and highlights from English, German, Spanish, or Japanese text.
Language Detection detects the natural language of text with a total of 120 languages supported.
Sentiment Analysis detects the level of positive or negative sentiment for input text using a confidence score across a variety of languages.
Language Understanding applies custom machine learning intelligence to a user’s conversational and natural language text to predict overall meaning and pull out relevant and detailed information.

Please note, the Face and Recognize Text containers are still gated behind a preview and thus are not yet available via the marketplace. However, you can deploy them manually by first signing up for the preview to get access.

In this blog, we describe how to provision the Language Detection container on your edge device and how to manage it through Azure IoT Hub.

Set up an IoT Edge device and its IoT Hub

Follow the first steps in this quick-start for setting up your IoT Edge device and your IoT Hub.

It first walks you through creating an IoT Hub and then registering an IoT Edge device to your IoT Hub. Here is a screenshot of a newly created edge device called “LanguageDetection” under the IoT Hub called “CSContainers”. Select the device, copy its primary connection string, and save it for later.

Next, it guides you through setting up the IoT Edge device. If you don’t have a physical edge device, it is recommended to deploy the Ubuntu Server 16.04 LTS and Azure IoT Edge runtime virtual machine (VM), which is available on the Azure Marketplace. It is an Azure virtual machine that comes with IoT Edge pre-installed.

The last step is to connect your IoT Edge device to your IoT Hub by giving it the connection string you copied above. To do that, edit the device configuration file /etc/iotedge/config.yaml and update the connection string. After the connection string is updated, restart the IoT Edge runtime with sudo systemctl restart iotedge.
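As a sketch, the relevant provisioning section of /etc/iotedge/config.yaml looks like the following; the connection string below is a placeholder in the format the portal provides:

```yaml
# /etc/iotedge/config.yaml (excerpt): manual provisioning with a device connection string.
# Replace the placeholder with the primary connection string copied from the portal.
provisioning:
  source: "manual"
  device_connection_string: "HostName=<your-hub>.azure-devices.net;DeviceId=<your-device>;SharedAccessKey=<your-key>"
```

After saving the file, restart the runtime (sudo systemctl restart iotedge) so the new connection string takes effect.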

Provisioning a Cognitive Service (Language Detection IoT Edge module)

The images are directly available as IoT Edge modules from the Azure Marketplace.

Here we’re using the Language Detection image as an example; other images work the same way. To download the image, search for it and select Get it now, which takes you to the Azure portal “Target Devices for IoT Edge Module” page. Select the subscription containing your IoT Hub, select Find Device and pick your IoT Edge device, then click the Select and Create buttons.

Configuring your Cognitive Service

Now you’re almost ready to deploy the Cognitive Service to your IoT Edge device. But in order to run the container, you need a valid API key and billing endpoint, which you pass as environment variables in the module details.

Go to the Azure portal and open the Cognitive Services blade. If you don’t have a Cognitive Service that matches the container, in this case a Text Analytics service, select Add and create one. Once you have a Cognitive Service, get its endpoint and API key; you’ll need these to fire up the container.

The endpoint is strictly used for billing only, no customer data ever flows that way. Copy your billing endpoint value to the “billing” environment variable and copy your API key value to the “apikey” environment variable.
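For reference, in the module’s container create options this corresponds to an Env block like the following sketch. The exact variable names are defined by the marketplace module (check the module details page), and the Cognitive Services containers also require explicit EULA acceptance; both endpoint and key values below are placeholders:

```json
{
  "Env": [
    "Eula=accept",
    "Billing=<your-billing-endpoint>",
    "ApiKey=<your-api-key>"
  ]
}
```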

Deploy the container

All required info is now filled in, and you only need to complete the IoT Edge deployment. Select Next and then Submit. Verify that the deployment is happening properly by refreshing the IoT Edge device details section.

Trying it out

To try things out, we’ll make an HTTP call to the IoT Edge device that has the Cognitive Service container running.

For that, we’ll first need to make sure that port 5000 of the edge device is open. If you’re using the pre-built Ubuntu with IoT Edge Azure VM as an edge device, go to the VM details, then Settings, Networking, and add an inbound security rule to open port 5000. Also copy the public IP address of your device.

Now you should be able to query the Cognitive Service running on your IoT Edge device from any machine with a browser. Open your favorite browser and go to http://your-iot-edge-device-ip-address:5000.

Now, select Service API Description or jump directly to http://your-iot-edge-device-ip-address:5000/swagger. This will give you a detailed description of the API.

Select Try it out, change the input value as you like, and then select Execute.

The result will show up further down on the page as a JSON document containing the detected language for each input.

Next steps

You are now up and running! You are running the Cognitive Services on your own IoT Edge device, remotely managed via your central IoT Hub. You can use this setup to manage millions of devices in a secure way.

You can play around with the various Cognitive Services already available in the Azure Marketplace and try out various scenarios. Have fun!
Source: Azure

Announcing Azure Integration Service Environment for Logic Apps

A new way to integrate with resources in your virtual network

With every service, we strive to significantly improve the development experience. We’re always looking for common pain points that everybody building software in the cloud deals with. And once we find those pain points, we build best-in-class software to address the need.

In critical business scenarios, you need to have the confidence that your data is flowing between all the moving parts. The core Logic Apps offering is a great, multi-faceted service for integrating between data sources and services, but sometimes it is necessary to have a dedicated service to ensure that your integration processes are as performant as possible. That’s why we developed the Integration Service Environment (ISE), a fully isolated integration environment.

What is an Integration Service Environment?

An Integration Service Environment is a fully isolated and dedicated environment for all enterprise-scale integration needs. When you create a new Integration Service Environment, it is injected into your Azure virtual network, which allows you to deploy Logic Apps as a service on your VNET.

Direct, secure access to your virtual network resources. Enables Logic Apps to have secure, direct access to private resources such as virtual machines, servers, and other services in your virtual network, including Azure services with service endpoints and on-premises resources via ExpressRoute or a site-to-site VPN.
Consistent, highly reliable performance. Eliminates the noisy-neighbor issue and removes the fear of intermittent slowdowns that can impact business-critical processes, with a dedicated runtime in which only your Logic Apps execute.
Isolated, private storage. Sensitive data subject to regulation is kept private and secure, opening new integration opportunities.
Predictable pricing. Provides a fixed monthly cost for Logic Apps. Each Integration Service Environment includes the free usage of one Standard Integration Account and one Enterprise connector. If your Logic Apps execute more than 50 million actions per month, the Integration Service Environment could provide better value.

Integration Service Environments are available in every region where Logic Apps is currently available, with the exception of the following locations:

West Central US
Brazil South
Canada East

Logic Apps is great for customers who require a highly reliable, private integration service for all their data and services. You can try the public preview by signing up for an Azure account. If you’re an existing customer, you can find out how to get started by visiting our documentation, “Connect to Azure virtual networks from Azure Logic Apps by using an integration service environment.”
Source: Azure

Instantly restore your Azure Virtual Machines using Azure Backup

Today, we are delighted to share the release of Azure Backup Instant Restore capability for Azure Virtual Machines (VMs). Instant Restore helps Azure Backup customers quickly recover VMs from the snapshots stored along with the disks. In addition, users get complete flexibility in configuring the retention range of snapshots at the backup policy level depending on the requirements and criticality of the virtual machines associated, giving users more granular control over their resources.

Key benefits

Instant recovery point: Snapshots taken as a part of the backup job are stored along with the disk and are available for recovery instantly. This eliminates the wait time for snapshots to copy to the vault before a restore can be triggered.
In-place restore capability: With Instant Restore, users also get the ability to perform an in-place restore, overwriting the data in the original disk rather than creating a copy of the disk at an alternate location. This is particularly useful in scenarios where there is a need to roll back a patch: once the snapshot phase is done, users can use the local snapshot to restore if the patch goes bad.
Flexibility to choose the snapshot retention range at the backup policy level: Depending on the operational recovery requirements of VMs, the user has the flexibility to configure the snapshot retention range at the VM backup policy level. The snapshot retention range applies to all VMs associated with the policy and can be between one and five days, with two days being the default.

In addition, users get Azure Backup support for Standard SSD disks and disks up to 4 TB in size.

How to change the snapshot retention period?

We are enabling this experience starting today and rolling it out region by region. You can check the availability in your region today.

Portal:

Users can change the snapshot retention to any value between one and five days from the default value of two days.

Next steps

Learn more about Instant restore capability.
Learn more about Azure Backup.
Want more details? Check out Azure Backup documentation.
Need help? Reach out to the Azure Backup forum for support.
Tell us how we can improve Azure Backup by contributing new ideas and voting up existing ones.
Follow us on Twitter @AzureBackup for the latest news and updates.

Source: Azure

Take control of your Kubernetes clusters with CSP Config Management

Kubernetes administrators know that with each new cluster comes new configurations, and the management overhead associated with them. It’s a headache, and one that only gets worse as you scramble to keep your growing fleet in line with ever-changing corporate policies.

Last week, we announced the Cloud Services Platform (CSP) in beta, letting you modernize your applications on Google Cloud Platform (GCP) or with on-premises infrastructure. As part of CSP, we’re also making it easier for you to consistently implement policies across all your Kubernetes clusters, with CSP Config Management, also in beta. Now you can strengthen security and maintain compliance across all your clusters, while still helping developers move fast.

CSP Config Management allows you to create a common configuration for all your administrative policies and apply it to all your clusters at the same time. The clusters can be running in Google Kubernetes Engine (GKE) in the cloud, in your data center with GKE On-Prem, or a combination of both. By integrating with the popular Git version control system, CSP Config Management evaluates each commit to the repository and rolls it out to clusters all over the globe, so that your clusters are always in the desired state. For example, you can have a set of Kubernetes Namespaces with policies like NetworkPolicies, ConfigMaps, or RBAC RoleBindings, and automatically create them across all your clusters.

CSP Config Management uses the native Kubernetes configuration format (in YAML or JSON) to store multi-cluster policies, so migrating your existing definitions is a snap. You can configure different policies for groups of clusters or namespaces (for example, applying different quota levels to staging vs. production), making it easy to manage complex environments.
And you don’t need to worry about pushing bad configurations: CSP Config Management includes a validator that looks at every line of code before pushing it to your repository. Then, once the desired state is achieved, CSP Config Management actively monitors the clusters to keep them that way.

In short, CSP Config Management:

Enables new teams to get up and running quickly by creating a multi-cluster namespace with common RBAC policies and other access control rules.
Enforces states needed for compliance by preventing configuration drift through continuous monitoring of the cluster state.
Centrally manages the configuration of your Istio service mesh, pod security policies, quota policies, and other sensitive guardrails to ensure comprehensive and consistent coverage for your fleet.
Brings the power of source control to your clusters: stage configuration changes in separate branches, collaborate in code reviews, or easily revert clusters to their last healthy state.

CSP Config Management is available today with the beta release of CSP; use it to take control of cluster sprawl and increase the security of your Kubernetes clusters at scale. Sign up for the CSP Config Management beta.
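As an illustrative sketch of the native Kubernetes formats mentioned above, a policy file checked into such a config repo might look like this (all names below are hypothetical); committing it would create the namespace and role binding on every enrolled cluster:

```yaml
# namespaces/team-a/policies.yaml: hypothetical multi-cluster policy in the config repo.
# Creates a namespace for the team on every cluster...
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
# ...and grants the team read-only access within it via a standard RBAC RoleBinding.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-viewers
  namespace: team-a
subjects:
- kind: Group
  name: team-a@example.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
```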
Source: Google Cloud Platform

Kickstart your cryptography with new Cloud KMS client libraries and samples

Cloud Key Management Service (KMS) is a fast, scalable, and automated cryptographic key management service that provides symmetric and asymmetric support for encryption and signing. It also provides fully automated and at-will key rotation, rich auditing and logging functionality, and deep integrations with Cloud Identity and Access Management (IAM), all backed by global high availability.

Today we are pleased to announce our new client libraries and code samples for Cloud KMS. These new client libraries are available today and support full Cloud KMS API coverage in seven programming languages: C#, Go, Java, Node, PHP, Python, and Ruby.

In addition to the new client libraries, we are also releasing a revamped collection of code samples for interacting with Cloud KMS. These code samples showcase common Cloud KMS functionality using the official client library and the idiomatic patterns of the language, making it easy to start integrating Cloud KMS into applications and services.

C# Cloud KMS code samples
Go Cloud KMS code samples
Java Cloud KMS code samples
Node.js Cloud KMS code samples
PHP Cloud KMS code samples
Python Cloud KMS code samples
Ruby Cloud KMS code samples

What’s new?

The new Cloud KMS client libraries offer new features and functionality, including:

gRPC for communication: gRPC is an open source Cloud Native Computing Foundation (CNCF) project that largely follows traditional HTTP semantics, but allows for full-duplex streaming and is used at companies like Square, Netflix, Docker, and Google. By switching to gRPC over HTTP/2, the new client libraries provide lower latency and higher scalability.
Language-idiomatic design: Partnering with Google’s internal language experts and external community members, we designed the new libraries to follow the idiomatic patterns of their respective languages. The new libraries will feel more welcoming and natural to users.
API parity: By leveraging code generation, the new client libraries offer more API parity for available Cloud KMS functions, fields, and parameters. As we add new fields or methods to the Cloud KMS API, these new client libraries are automatically regenerated with support for that functionality. This means you will be able to programmatically adopt new features and functionality faster.

Getting started

To get started, install an official client library using your language’s preferred dependency management software; in Go, for example, you can fetch the library with go get cloud.google.com/go/kms/apiv1. Then import the client library and call the functions as needed, for example to encrypt a plaintext string such as “my secret” with a Cloud KMS key. For more information about installation, usage, samples, or authentication, please see the Cloud KMS client libraries documentation.

Choosing between new and existing Cloud KMS client libraries

We encourage you to adopt the new libraries as they are faster, more consistent, and more performant than their predecessors. At the same time, there are use cases where the new libraries are not a viable replacement, such as regulated environments that don’t permit HTTP/2. This is one of many reasons why we are not deprecating the old client libraries, and will continue to support them. We want you to be successful when using our Cloud KMS client libraries, regardless of which one you choose.

We realize the decision to have two client libraries providing similar functionality may be confusing, but we feel this approach is less disruptive than removing an existing client library from an ecosystem. To aid in the transition, we have already updated the documentation and samples on cloud.google.com to reference the new libraries, and we will mark the old libraries as “not recommended” and discourage their use in new projects.

Toward a great, secure developer experience

The Cloud KMS client libraries enable organizations to focus on building better and more secure applications by offloading key management to Cloud KMS while retaining full transparency and access over keys. These new libraries provide complete coverage of the Cloud KMS APIs and consistency across languages for polyglot organizations. We are excited to see how these new client libraries enable organizations to build great integrations on GCP. Be sure to follow us on Twitter to leave feedback and ask any questions.
Source: Google Cloud Platform