Expanded Jobs functionality in Azure IoT Central

Since announcing the release of our Jobs feature during the Azure IoT Central general availability launch, we are excited to share how we are working to improve your device management workflow through additional Jobs functionality. You can now copy an existing job you’ve created, save a job to continue working on later, stop or resume a running job, and download a job details report once your job has finished running. These additions make managing your devices at scale much easier.

To copy a job you’ve created, choose a job from your main jobs list and select “Copy”. This opens a copy of the job, where you can optionally update any part of the job configuration. If any changes have been made to your device set since the original job was created, your copied job will reflect those changes for you to edit.

While you are editing your job, you now have the option to save the job to continue working on later by selecting “Save”. This saved job will appear on your main jobs list with a status of “Saved” and you can open it again at any time to continue editing.

Once you have chosen to run your job, you can select the “Stop” button to stop the job from executing any further. You can open a stopped job from your list and select “Run” again at any time you’d like.

Whether your job has been stopped or has completed running, you can select “Download Device Report” near your device list to download a .csv file that lists the device ID, the time the job was completed or stopped, the status of the device, and an error message (if applicable). This report can be used to troubleshoot devices or as a sorting tool.
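As a quick illustration, the downloaded report can be filtered with a few lines of Python to surface the devices that need attention. The column headers and sample rows below are assumptions based on the fields described above, not the exact schema of the exported file:

```python
import csv
import io

# Hypothetical contents of a downloaded job device report; the column
# names are illustrative assumptions based on the fields listed above.
report_csv = """deviceId,timestamp,status,errorMessage
thermostat-001,2019-03-20T14:02:11Z,Completed,
thermostat-002,2019-03-20T14:02:15Z,Failed,Device offline
thermostat-003,2019-03-20T14:03:40Z,Completed,
"""

def failed_devices(report_text):
    """Return (deviceId, errorMessage) pairs for devices that did not complete."""
    reader = csv.DictReader(io.StringIO(report_text))
    return [(row["deviceId"], row["errorMessage"])
            for row in reader if row["status"] != "Completed"]

# Devices needing troubleshooting, with their error messages.
print(failed_devices(report_csv))
```

Sorting or grouping on the status column in the same way makes it easy to triage a large fleet after a job run.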

We are continually working on improving your device management experience to make managing devices at scale easier than ever. If you have any suggestions for the device management or Jobs functionalities you would find useful in your workflow, please leave us feedback.

Learn more about how to run a job in Azure IoT Central.
Source: Azure

A Self-Hosted Global Load Balancer for OpenShift

Introduction This is the fifth installment on a series of blog posts related to deploying OpenShift in multi-cluster configurations. In the first two posts (part 1 and part 2), we explored how to create a network tunnel between multiple clusters. In the third post, it was demonstrated how to deploy Istio multicluster across multiple clusters […]
The post A Self-Hosted Global Load Balancer for OpenShift appeared first on Red Hat OpenShift Blog.
Source: OpenShift

TensorFlow 2.0 and Cloud AI make it easy to train, deploy, and maintain scalable machine learning models

Since it was open-sourced in 2015, TensorFlow has matured into an entire end-to-end ML ecosystem that includes a variety of tools, libraries, and deployment options to help users go from research to production easily. This month at the 2019 TensorFlow Dev Summit we announced TensorFlow 2.0 to make machine learning models easier to use and deploy.

TensorFlow started out as a machine learning framework and has grown into a comprehensive platform that gives researchers and developers access to both intuitive higher-level APIs and low-level operations. In TensorFlow 2.0, eager execution is enabled by default, with tight Keras integration. You can easily ingest datasets via tf.data pipelines, and you can monitor your training in TensorBoard directly from Colab and Jupyter Notebooks. The TensorFlow team will continue to work on improving the TensorFlow 2.0 alpha, with a general release candidate coming later in Q2 2019.

Making ML easier to use

The TensorFlow team’s decision to focus on developer productivity and ease of use doesn’t stop at iPython notebooks and Colab, but extends to making API components integrate far more intuitively with tf.keras (now the standard high-level API), and to TensorFlow Datasets, which lets users import common preprocessed datasets with only one line of code. Data ingestion pipelines can be orchestrated with tf.data, pushed into production with TensorFlow Extended (TFX), and scaled to multiple nodes and hardware architectures with minimal code changes using distribution strategies.

The TensorFlow engineering team has created an upgrade tool and several migration guides to support users who wish to migrate their models from TensorFlow 1.x to 2.0. TensorFlow is also hosting a weekly community testing stand-up for users to ask questions about TensorFlow 2.0 and migration support. If you’re interested, you can find more information on the TensorFlow website.

Image: Upgrading a model with the tf_upgrade_v2 tool.

Experiment and iterate

Both researchers and enterprise data science teams must continuously iterate on model architectures, with a focus on rapid prototyping and speed to a first solution. With eager execution a focus in TensorFlow 2.0, researchers have the ability to use intuitive Python control flows, optimize their eager code with tf.function, and save time with improved error messaging. Creating and experimenting with models using TensorFlow has never been so easy.

Faster training is essential for model deployments, retraining, and experimentation. In the past year, the TensorFlow team has worked diligently to improve training performance times on a variety of platforms, including the second-generation Cloud TPU (by a factor of 1.6x) and the NVIDIA V100 GPU (by a factor of more than 2x). For inference, we saw speedups of over 3x with Intel’s MKL library, which supports CPU-based Compute Engine instances.

Through add-on extensions, TensorFlow expands to help you build advanced models. For example, TensorFlow Federated lets you train models both in the cloud and on remote (IoT or embedded) devices in a collaborative fashion. Oftentimes, your remote devices have data to train on that your centralized training system may not. We also recently announced the TensorFlow Privacy extension, which helps you strip personally identifiable information (PII) from your training data. Finally, TensorFlow Probability extends TensorFlow’s abilities to more traditional statistical use cases, which you can use in conjunction with other functionality like estimators.

Deploy your ML model in a variety of environments and languages

A core strength of TensorFlow has always been the ability to deploy models into production. In TensorFlow 2.0, the TensorFlow team is making it even easier. TFX Pipelines give you the ability to coordinate how you serve your trained models for inference at runtime, whether on a single instance or across an entire cluster. Meanwhile, for more resource-constrained systems, like mobile or IoT devices and embedded hardware, you can easily quantize your models to run with TensorFlow Lite. Airbnb, Shazam, and the BBC are all using TensorFlow Lite to enhance their mobile experiences, and to validate as well as classify user-uploaded content.

Image: Exploring and analyzing data with TensorFlow Data Validation.

JavaScript is one of the world’s most popular programming languages, and TensorFlow.js helps make ML available to millions of JavaScript developers. The TensorFlow team announced TensorFlow.js version 1.0. This means you can not only train and run models in the browser, but also run TensorFlow as a part of server-side hosted JavaScript apps, including on App Engine. TensorFlow.js now has better performance than ever, and its community has grown substantially: in the year since its initial launch, community members have downloaded TensorFlow.js over 300,000 times, and its repository now incorporates code from over 100 contributors.

How to get started

If you’re eager to get started with TensorFlow 2.0 alpha on Google Cloud, start up a Deep Learning VM and try out some of the tutorials. TensorFlow 2.0 is available through Colab via pip install if you’re just looking to run a notebook anywhere, but perhaps more importantly, you can also run a Jupyter instance on Google Cloud using a Cloud Dataproc cluster, or launch notebooks directly from Cloud ML Engine, all from within your GCP project.

Image: Using TensorFlow 2.0 with a Deep Learning VM and GCP Notebook Instances.

Along with announcing the alpha release of TensorFlow 2.0, we also announced new community and education partnerships. In collaboration with O’Reilly Media, we’re hosting TensorFlow World, a week-long conference dedicated to fostering and bringing together the open source community and all things TensorFlow. The call for proposals is open for attendees to submit papers and projects to be highlighted at the event. Finally, we announced two new courses to help beginners and learners new to ML and TensorFlow. The first course is deeplearning.ai’s Course 1 – Introduction to TensorFlow for AI, ML and DL, part of the TensorFlow: from Basic to Mastery series. The second course is Udacity’s Intro to TensorFlow for Deep Learning.

If you’re using TensorFlow 2.0 on Google Cloud, we want to hear about it! Make sure to join our Testing special interest group, submit your project abstracts to TensorFlow World, and share your projects in our #PoweredByTF Challenge on DevPost. To quickly get up to speed on TensorFlow, be sure to check out our free courses on Udacity and deeplearning.ai.
Source: Google Cloud Platform

It's raining APIs: How AccuWeather shares data with developers using Apigee

Editor’s note: We’re hearing today from AccuWeather, the popular weather data provider. The company has evolved into a digital business through the years, and its APIs are essential to what it offers. Here’s how AccuWeather uses Google’s Apigee API management platform to make it all work smoothly.

Since AccuWeather was founded in 1962, our company has become the world’s leading provider of weather forecasts and warnings. We maintain a huge, accurate, and comprehensive collection of weather warning data.

Back then, we brought data to local forecasts, newspapers, radio stations, and small businesses. While we started by putting pen to paper and providing solutions to business customers, AccuWeather has really evolved into a digital platform over the past decade. This entire transformation was powered by APIs. We are extremely proud of how broadly our enterprise APIs are used. They provide life-saving weather information and warnings to major companies worldwide, including nine out of the 10 major smartphone OEMs, IoT producers, and others in some of the world’s biggest industries, including more than half of Fortune 500 companies and thousands more globally.

You can see more about AccuWeather’s APIs in this short documentary:

Bringing weather data to new audiences

We faced an interesting challenge when we moved to expand our reach and engage new audiences, especially small- to mid-sized businesses, entrepreneurs, individual developers, and students. We knew a long onboarding process wouldn’t work for these developers, and we knew we had to make it easy for them to access our APIs quickly without a lot of overhead. Increasingly, these prospective customers needed an easy, frictionless, and automated sign-up process to evaluate and integrate our APIs as quickly as possible into the applications they are developing.

To facilitate that innovation and development, we needed to give developers fast, simple, and cost-effective access to AccuWeather’s unique weather data. We source our global data in real time from multiple sources, both public and private, and blend it in our Global Forecast System with custom software algorithms, artificial intelligence, and machine learning. That’s then combined with the experience of more than 100 operational meteorologists to generate detailed, accurate, and localized forecasts. That data has been proven the most accurate in the weather industry for the past three years in an independent study.

Building a developer portal

To give these smaller, specialized audiences access to all this weather data, we began partnering with Google’s Apigee to use its API management solutions to expand our reach. We built the AccuWeather API Developer Portal, which provides turnkey package options so developers can access detailed global weather forecasts and warnings on the Apigee platform.

Apigee’s monetization module was a key selling point for AccuWeather. It allowed us to package our APIs into set products, which enables developers to purchase our APIs (or test them for free) and tailor their API consumption to their specific needs. Since AccuWeather offers so many types of data, and many variations of specific data, these API packages let developers and small businesses pick and choose data content as they need it. Data points include extended forecasts or specific forecast periods like hourly or daily.

The analytics capabilities offered by Apigee have helped us customize our API products to the needs of developers by revealing traffic patterns and making sure users get weather data when and how they want it to best achieve their desired outcomes. Using these traffic patterns, we can see which developers are most active, which APIs are most heavily used, what time of day people look at the weather, which clients are growing fast, and which ones may need more support. This lets us be proactive and continue building useful products.

What’s next for AccuWeather and Apigee

We have been thrilled with the results. Since partnering with Apigee and launching the AccuWeather API Developer Portal in May 2017, we have watched the number of developers who have signed up to use our APIs grow to more than 60,000. We’re now reaching important new developer audiences who are exploring ways to incorporate our troves of weather data into their own applications. We’re excited to make our APIs available to more developers, any of whom might be working on the next big thing. Innovation has a better chance to bubble up with the right tools, and the AccuWeather API Developer Portal, powered by Apigee, provides the right recipe to inspire developers to produce something powerful and innovative.
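As a hedged sketch of what consuming one of these API packages might look like, the snippet below builds a request URL for a multi-day forecast. The endpoint path, location key, and parameter names are illustrative assumptions; the actual contract is documented in the AccuWeather API Developer Portal:

```python
from urllib.parse import urlencode

# Illustrative base URL; confirm against the AccuWeather API Developer Portal.
BASE_URL = "https://dataservice.accuweather.com"

def daily_forecast_url(location_key, api_key, days=5, metric=True):
    """Build a request URL for an N-day daily forecast (no network call made)."""
    query = urlencode({"apikey": api_key, "metric": str(metric).lower()})
    return f"{BASE_URL}/forecasts/v1/daily/{days}day/{location_key}?{query}"

# Location key and API key here are placeholders.
url = daily_forecast_url("335315", "YOUR_API_KEY")
print(url)
```

Because each API package exposes a different set of endpoints and forecast periods (hourly, daily, extended), a real client would typically wrap each purchased endpoint in a small helper like this one.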
Source: Google Cloud Platform

Data integration with ADLS Gen2 and Azure Data Explorer using Data Factory

Microsoft announced the general availability of Azure Data Lake Storage (ADLS) Gen2 and Azure Data Explorer in early February, which arms Azure with unmatched price-performance and security as one of the best clouds for analytics. Azure Data Factory (ADF) is a fully managed data integration service that empowers you to copy data from over 80 data sources with a simple drag-and-drop experience, and to operationalize and manage ETL/ELT flows with flexible control flow, rich monitoring, and continuous integration and continuous delivery (CI/CD) capabilities. In this blog post, we’re excited to update you on the latest integration of Azure Data Factory with ADLS Gen2 and Azure Data Explorer. You can now meet the advanced needs of your analytics workloads by leveraging these services.

Ingest and transform data with ADLS Gen2

Azure Data Lake Storage is a no-compromises data lake platform that combines the rich feature set of advanced data lake solutions with the economics, global scale, and enterprise grade security of Azure Blob Storage. Our recent post provides you with a comprehensive insider view on this powerful service.

Azure Data Factory has supported ADLS Gen2 as a preview connector since the ADLS Gen2 limited public preview. Now the connector has reached general availability along with ADLS Gen2. With ADF, you can now:

Ingest data from over 80 data sources located on-premises and in the cloud into ADLS Gen2 with great performance.
Orchestrate data transformation using Databricks Notebook, Apache Spark in Python, and Spark JAR against data stored in ADLS Gen2.
Orchestrate data transformation using HDInsight with ADLS Gen2 as the primary store and script store, on either a bring-your-own or an on-demand cluster.
Egress data from ADLS Gen2 to a data warehouse for reporting.
Leverage Azure role-based access control (RBAC) and Portable Operating System Interface (POSIX)-compliant access control lists (ACLs) that restrict access to only authorized accounts.
Invoke control flow operations like Lookup and GetMetadata against ADLS Gen2.
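To make the ingestion scenario above concrete, here is a minimal sketch of what the JSON definition of an ADF Copy activity landing data in ADLS Gen2 might look like, expressed as a Python dictionary. The dataset names are placeholders and the property shapes are simplified assumptions; the ADF authoring UI generates the full definition for you:

```python
import json

# Simplified sketch of an ADF Copy activity definition. Dataset names are
# placeholders; property shapes are assumptions based on the ADF JSON model.
copy_activity = {
    "name": "CopySqlToAdlsGen2",
    "type": "Copy",
    "inputs": [{"referenceName": "SourceSqlTable", "type": "DatasetReference"}],
    "outputs": [{"referenceName": "SinkAdlsGen2Folder", "type": "DatasetReference"}],
    "typeProperties": {
        "source": {"type": "SqlSource"},
        # AzureBlobFS is the connector type used for ADLS Gen2 sinks
        "sink": {"type": "AzureBlobFSSink"},
    },
}

print(json.dumps(copy_activity, indent=2))
```

The same activity shape applies whether the pipeline is authored through the drag-and-drop UI, ARM templates, or the ADF SDKs; the UI simply serializes to this JSON for you.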

Get started today

Tutorial on ingesting data into ADLS Gen2
ADLS Gen2 connector
Databricks Notebook activity to transform data in ADLS Gen2
HDInsights activity to transform data in ADLS Gen2

Populate Azure Data Explorer for real-time analysis

Azure Data Explorer is a fast and highly scalable data exploration service for log and telemetry data. It helps you handle the many data streams emitted by modern software and is designed for analyzing large volumes of diverse data.

Bringing data into Azure Data Explorer is the first challenge customers often face when adopting the service. Complementary to Azure Data Explorer’s native support for continuous data ingestion from event streams, Azure Data Factory enables you to ingest data in batches from a broad set of data stores in a codeless manner. With simple drag-and-drop features in ADF, you can now:

Ingest data from over 80 data sources (on-premises and cloud-based; structured, semi-structured, and unstructured) into Azure Data Explorer for real-time analysis.
Egress data from Azure Data Explorer based on a Kusto Query Language (KQL) query.
Use Lookup activities against Azure Data Explorer for control flow operations.
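As a small illustration of the egress scenario, the sketch below shows what the source side of a Copy activity reading from Azure Data Explorer with a KQL query might look like. The table name, query, and property shapes are illustrative assumptions, not a verified contract:

```python
import json

# Simplified sketch of the source half of an ADF Copy activity that egresses
# data from Azure Data Explorer. Table, query, and properties are assumptions.
adx_copy_source = {
    "type": "AzureDataExplorerSource",
    # KQL query selecting the last day of events, aggregated by state
    "query": "StormEvents | where StartTime > ago(1d) | summarize count() by State",
    "queryTimeout": "00:10:00",
}

print(json.dumps(adx_copy_source, indent=2))
```

Pairing a source like this with a sink dataset lets a single pipeline move query results out of Azure Data Explorer into a data warehouse or file store for reporting.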

Get started

Azure Data Explorer connector


We will keep adding new features in ADF to tighten the integration with ADLS Gen2 and Azure Data Explorer. Stay tuned and let us know your feedback!
Source: Azure

GPU Technology Conference: Nvidia takes its time

Everything is excellent, says Nvidia CEO Jensen Huang of his company’s hardware and software. Instead of genuine innovations, the company’s in-house conference delivered incremental improvements, for which Einstein played an important role. An analysis by Marc Sauter (Nvidia, IBM)
Source: Golem