Docker’s 6th Birthday: How do you #Docker?

Docker is turning 6 years old! Over the years, Docker Community members have found some amazing and innovative ways of using Docker technology, and we’ve been blown away by all the use cases we’ve seen from the community at DockerCon. From Docker for Space, where NASA used Docker technology to build software to deflect asteroids, to using “gloo” to glue together traditional apps, microservices, and serverless, you all continue to amaze us year after year.
So this year, we want to celebrate you! From March 18th to the 31st, Docker User Groups all over the world will be hosting local birthday show-and-tell celebrations. Participants will each have 10-15 minutes of stage time to present how they’ve been using Docker. Think of these as lightning talks – your show-and-tell doesn’t need to be polished and it can absolutely be a fun hack and/or personal project. Everyone who presents their work will get a Docker Birthday #6 t-shirt and have the opportunity to submit their Docker Birthday Show-and-tell to present at DockerCon.
Are you new to Docker? Not sure you’d like to present? No worries! Join in the fun and come along to listen, learn, add to your sticker collection and eat cake. Everyone is welcome!

Find a Birthday meetup near you!
There are already Docker Birthday #6 celebrations scheduled around the world with more on the way! Check back as more events are announced.

Don’t see an event in your city?

Contact your local Community Leaders via their user group page and see if you can help them organize a celebration!

Want to sponsor a birthday event?

Contact the local Community Leaders via their user group page


Can’t attend but still want to be involved and learn more about Docker?

Follow the fun on social media via #Dockerbday
Register for DockerCon SF.
Subscribe to the Docker Weekly newsletter
Join your local user group to be notified of future events

Source: https://blog.docker.com/feed/

Train fast on TPU, serve flexibly on GPU: switch your ML infrastructure to suit your needs

When developing machine learning models, fast iteration and short training times are of utmost importance. In order for you or your data science team to reach higher levels of accuracy, you may need to run tens or hundreds of training iterations to explore different options.

A growing number of organizations use Tensor Processing Units (Cloud TPUs) to train complex models because TPUs can reduce training time from days to hours (roughly a 10X reduction) and training costs from thousands of dollars to tens of dollars (roughly a 100X reduction). You can then deploy your trained models to CPUs, GPUs, or TPUs to make predictions at serving time. In some applications for which response latency is critical—e.g., robotics or self-driving cars—you might need to make additional optimizations. For example, many data scientists frequently use NVIDIA’s TensorRT to improve inference speed on GPUs. In this post, we walk through training and serving an object detection model and demonstrate how TensorFlow’s comprehensive and flexible feature set can be used to perform each step, regardless of which hardware platform you choose.

A TensorFlow model consists of many operations (ops) that are responsible for training and making predictions, for example, telling us whether a person is crossing the street. Most TensorFlow ops are platform-agnostic and can run on CPU, GPU, or TPU. In fact, if you implement your model using TPUEstimator, you can run it on a Cloud TPU by simply setting the use_tpu flag to True, and run it on a CPU or GPU by setting the flag to False.

NVIDIA has developed TensorRT (an inference optimization library) for high-performance inference on GPUs. TensorFlow (TF) now includes a TensorRT integration (TF-TRT) module that can convert TensorFlow ops in your model to TensorRT ops. With this integration, you can train your model on TPUs and then use TF-TRT to convert the trained model to a GPU-optimized one for serving. In the following example we will train a state-of-the-art object detection model, RetinaNet, on a Cloud TPU, convert it to a TensorRT-optimized version, and run predictions on a GPU.

Train and save a model

You can use the following instructions for any TPU model, but in this guide we choose as our example the TensorFlow TPU RetinaNet model. Accordingly, you can start by following this tutorial to train a RetinaNet model on Cloud TPU. Feel free to skip the section titled “Evaluate the model while you train (optional)”.

For the RetinaNet model that you just trained, if you look inside the model directory (${MODEL_DIR} in the tutorial) in Cloud Storage, you’ll see multiple model checkpoints. Note that checkpoints may be dependent on the architecture used to train a model and are not suitable for porting the model to a different architecture.

TensorFlow offers another model format, SavedModel, that you can use to save and restore your model independent of the code that generated it. A SavedModel is language-neutral and contains everything you need (graph, variables, and metadata) to port your model from TPU to GPU or CPU.

Inside the model directory, you should find a timestamped subdirectory (in Unix epoch time format, for example, 1546300800 for 2019-01-01 00:00:00 GMT) that contains the exported SavedModel. Specifically, this subdirectory contains the following files:

saved_model.pb
variables/variables.data-00000-of-00001
variables/variables.index

The training script stores your model graph as saved_model.pb in a protocol buffer (protobuf) format, and stores the variables in the aptly named variables subdirectory.

Generating a SavedModel involves two steps: first, define a serving_input_receiver_fn, and then export the SavedModel. At serving time, the serving input receiver function ingests inference requests and prepares them for the model, just as at training time the input function input_fn ingests the training data and prepares it for the model. In the case of RetinaNet, the serving input receiver function returns a tf.estimator.export.ServingInputReceiver object that takes the inference requests as arguments in the form of receiver_tensors and the features used by the model as features (a hedged sketch of such a function appears below). When the script returns a ServingInputReceiver, it’s telling TensorFlow everything it needs to know in order to construct a server. The features argument describes the features that will be fed to our model; in this case, features is simply the set of images to run our detector on. receiver_tensors specifies the inputs to our server. Since we want our server to take JPEG-encoded images, there will be a tf.placeholder for an array of strings. We decode each string into an image, crop it to the correct size, and return the resulting image tensor.

To export a SavedModel, call the export_saved_model method on your estimator, as in the first sketch below. Running export_saved_model generates a SavedModel directory inside your FLAGS.model_dir directory. The SavedModel exported from TPUEstimator contains information on how to serve your model on CPU, GPU, and TPU architectures.

Inference

You can take the SavedModel that you trained on a TPU and load it on a machine with CPU(s)—and optionally GPU(s)—to run predictions. The second sketch below restores the model and runs inference. There, model_dir is the model directory where the SavedModel is stored, loader.load returns a MetaGraphDef protocol buffer loaded in the provided session, model_outputs is the list of model outputs you’d like to predict, model_input is the name of the placeholder that receives the input data, and input_image_batch is the input data.
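Below are two minimal, hedged sketches of these steps rather than the exact RetinaNet implementation: the 640x640 image size, the image_bytes input name, the output tensor names, and the Cloud Storage paths are placeholder assumptions, and estimator refers to the trained estimator from the tutorial.

```python
# Sketch of a serving input receiver function that accepts a batch of
# JPEG-encoded image strings, plus the SavedModel export call.
# `estimator` is assumed to be the trained (TPU)Estimator from the tutorial.
import tensorflow as tf  # TensorFlow 1.x API

def serving_input_receiver_fn():
    # The server receives a batch of JPEG-encoded images as strings.
    image_bytes = tf.placeholder(dtype=tf.string, shape=[None], name='image_bytes')

    def decode_and_crop(single_image_bytes):
        image = tf.image.decode_jpeg(single_image_bytes, channels=3)
        image = tf.image.convert_image_dtype(image, tf.float32)
        # Crop or pad each decoded image to the size the model expects (assumed here).
        return tf.image.resize_image_with_crop_or_pad(image, 640, 640)

    images = tf.map_fn(decode_and_crop, image_bytes, dtype=tf.float32)
    return tf.estimator.export.ServingInputReceiver(
        features=images,                                # what the model consumes
        receiver_tensors={'image_bytes': image_bytes})  # what the server accepts

estimator.export_saved_model(
    export_dir_base='gs://my-bucket/retinanet/export',  # placeholder path
    serving_input_receiver_fn=serving_input_receiver_fn)
```

The second sketch restores the exported SavedModel on a CPU or GPU machine and runs a prediction; inspect the SavedModel signature (for example, with saved_model_cli show) to find the real tensor names for your model.

```python
# Sketch of loading the exported SavedModel and running inference on CPU/GPU.
# The input and output tensor names below are placeholders, not RetinaNet's.
import tensorflow as tf  # TensorFlow 1.x API

model_dir = 'gs://my-bucket/retinanet/export/1546300800'  # timestamped SavedModel
model_input = 'image_bytes:0'      # placeholder name of the input tensor
model_outputs = ['detections:0']   # placeholder list of output tensor names

with tf.gfile.GFile('image1.jpg', 'rb') as f:
    input_image_batch = [f.read()]  # a batch containing one JPEG-encoded image

with tf.Session(graph=tf.Graph()) as sess:
    loaded = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], model_dir)  # MetaGraphDef
    predictions = sess.run(model_outputs,
                           feed_dict={model_input: input_image_batch})
```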
With TensorFlow, you can very easily train and save a model on one platform (like TPU) and load and serve it on another platform (like GPU or CPU). You can choose from different Google Cloud Platform services such as Cloud Machine Learning Engine, Kubernetes Engine, or Compute Engine to serve your models. In the remainder of this post you’ll learn how to optimize the SavedModel using TF-TRT, which is a common process if you plan to serve your model on one or more GPUs.

TensorRT optimization

While you can use the SavedModel exported earlier to serve predictions on GPUs directly, NVIDIA’s TensorRT allows you to get improved performance from your model by using some advanced GPU features. To use TensorRT, you’ll need a virtual machine (VM) with a GPU and NVIDIA drivers. Google Cloud’s Deep Learning VMs are ideal for this case, because they have everything you need pre-installed.

Follow these instructions to create a Deep Learning VM instance with one or more GPUs on Compute Engine. Select the checkbox “Install NVIDIA GPU driver automatically on first startup?” and choose a “Framework” (for example, “Intel optimized TensorFlow 1.12” at the time of writing this post) that comes with recent versions of CUDA and TensorRT satisfying the dependencies of TensorFlow with GPU support and the TF-TRT module. After your VM is initialized and booted, you can remotely log into it by clicking the SSH button next to its name on the Compute Engine page in the Cloud Console, or by using the gcloud compute ssh command. Install the dependencies (recent versions of TensorFlow include TF-TRT by default) and clone the TensorFlow TPU GitHub repository.

Now run tpu/models/official/retinanet/retinanet_tensorrt.py, providing the location of the SavedModel as an argument; SAVED_MODEL_DIR is the path where the SavedModel is stored (on Cloud Storage or local disk). This step converts the original SavedModel to a new GPU-optimized SavedModel and prints out the prediction latency for the two models. If you look inside the model directory, you can see that retinanet_tensorrt.py has converted the original SavedModel to a TensorRT-optimized SavedModel and stored it in a new folder ending in _trt. This conversion is done through TF-TRT (a hedged sketch of the conversion follows).
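Roughly, the conversion that retinanet_tensorrt.py performs looks like the sketch below. This is an illustration of the TF-TRT 1.x contrib API rather than the script’s actual code; the batch size, precision mode, workspace size, and paths are assumptions.

```python
# Hedged sketch of converting a SavedModel into a TensorRT-optimized SavedModel
# with TF-TRT (TensorFlow 1.x contrib API). Requires a GPU machine with TensorRT.
import tensorflow.contrib.tensorrt as trt

SAVED_MODEL_DIR = '/models/retinanet/export/1546300800'  # original SavedModel (placeholder path)
TRT_SAVED_MODEL_DIR = SAVED_MODEL_DIR + '_trt'           # converted output folder

trt.create_inference_graph(
    input_graph_def=None,                       # None: read the graph from the SavedModel
    outputs=None,
    input_saved_model_dir=SAVED_MODEL_DIR,
    output_saved_model_dir=TRT_SAVED_MODEL_DIR,
    max_batch_size=8,                           # assumption: tune for your workload
    precision_mode='FP16',                      # or 'FP32' / 'INT8'
    max_workspace_size_bytes=1 << 30)           # ~1 GB of TensorRT scratch space
```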
In the new SavedModel, the TensorFlow ops have been replaced by their GPU-optimized TensorRT implementations. During conversion, the script converts all variables to constants and writes them out to saved_model.pb, so the variables folder is empty. The TF-TRT module has implementations for the majority of TensorFlow ops. For some ops, such as the control flow ops Enter, Exit, Merge, and Switch, there is no TensorRT implementation, so they stay unchanged in the new SavedModel, but their effect on prediction latency is negligible.

Another method of converting the SavedModel to its TensorRT inference graph is the saved_model_cli tool. In that command, MY_DIR is the shared filesystem directory and SAVED_MODEL_DIR is the directory inside the shared filesystem directory where the SavedModel is stored.

retinanet_tensorrt.py also loads and runs the two models, before and after conversion, and prints the prediction latency. As we expect, the converted model has lower latency. Note that for inference, the first prediction often takes longer than subsequent predictions. This is due to startup overhead and, for TPUs, the time taken to compile the TPU program via XLA. In our example, we skip the time taken by the first inference step and average the remaining steps from the second iteration onwards (see the timing sketch below).

You can apply these steps to other models to easily port them to a different architecture and optimize their performance. The TensorFlow and TPU GitHub repositories contain a diverse collection of models that you can try out for your application, including another state-of-the-art object detection model, Mask R-CNN. If you’re interested in trying out TPUs to see what they can offer you in terms of training and serving times, try this Colab and quickstart.
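The latency measurement described above can be reproduced with a small helper like the hedged sketch below: run several predictions, discard the first warm-up iteration, and average the rest. The callable you pass in is whatever performs one inference (for example, the sess.run call from the earlier sketch).

```python
# Hedged sketch of averaging prediction latency while skipping the first,
# slower, warm-up iteration (startup overhead, or XLA compilation on TPUs).
import time

def average_latency(run_prediction, num_iterations=20):
    """run_prediction: a zero-argument callable that performs one inference."""
    latencies = []
    for i in range(num_iterations):
        start = time.time()
        run_prediction()
        elapsed = time.time() - start
        if i > 0:                 # skip the first iteration
            latencies.append(elapsed)
    return sum(latencies) / len(latencies)

# Example usage with the session and tensors from the inference sketch above:
# avg = average_latency(lambda: sess.run(model_outputs,
#                                        feed_dict={model_input: input_image_batch}))
```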
Source: Google Cloud Platform

Announcing Azure Monitor AIOps Alerts with Dynamic Thresholds

We are happy to announce that Metric Alerts with Dynamic Thresholds are now available in public preview. Dynamic Thresholds are a significant enhancement to Azure Monitor Metric Alerts. With Dynamic Thresholds, you no longer need to manually identify and set thresholds for alerts. Instead, the alert rule leverages advanced machine learning (ML) capabilities to learn a metric’s historical behavior while identifying patterns and anomalies that indicate possible service issues.

Metric Alerts with Dynamic Thresholds are supported through a simple Azure portal experience, and they also support Azure workload operations at scale by allowing users to configure alert rules through the Azure Resource Manager (ARM) API in a fully automated manner.

Why and when should I apply Dynamic Thresholds to my metrics alerts?

Smart metric pattern recognition – A big pain point with setting static thresholds is that you need to identify patterns on your own and create an alert rule for each pattern. With Dynamic Thresholds, we use a unique ML technology to identify the patterns and come up with a single alert rule that has the right thresholds and accounts for seasonality patterns such as hourly, daily, or weekly. Take the example of HTTP request rate: such a metric typically shows clear seasonality, with different behavior on weekdays and weekends. Instead of setting two or more different alert rules for weekdays and weekends, you can now let Azure Monitor analyze your data and come up with a single alert rule with Dynamic Thresholds that adapts between weekdays and weekends.

Scalable alerting – Wouldn’t it be great if you could automatically apply an alert rule on CPU usage to any virtual machine (VM) or application that you create? With Dynamic Thresholds, you can create a single alert rule that is then applied automatically to any resource you create. You don’t need to provide thresholds; the alert rule identifies the baseline for each resource and defines the thresholds automatically for you. With Dynamic Thresholds, you now have a scalable approach that saves a significant amount of time on the management and creation of alert rules.

Domain knowledge – Setting a threshold often requires a lot of domain knowledge. Dynamic Thresholds eliminates that need with its ML algorithms. Further, we have optimized the algorithms for common use cases such as CPU usage for a VM or request duration for an application, so you can have full confidence that the alert will capture any anomalies while still reducing noise for you.

Intuitive configuration – Dynamic Thresholds allow you to set up metric alert rules using high-level concepts, alleviating the need for extensive domain knowledge about the metric. You only need to select the sensitivity to deviations (low, medium, or high) and the boundaries (lower, higher, or both thresholds), based on the business impact of the alert, in the UI or the ARM API.

Dynamic Thresholds also allow you to configure the minimum number of deviations required within a certain time window for the system to raise an alert; the default is four deviations in a 20-minute window. You can change the failing periods and time window to control what you are alerted on.

Metric Alerts with Dynamic Thresholds are currently available for free during the public preview. To see the pricing that will be effective at general availability, visit our pricing page. To get started, please refer to the documentation, “Metric Alerts with Dynamic Thresholds in Azure Monitor (Public Preview).” We would love to hear your feedback! If you have any questions or suggestions, please reach out to us at azurealertsfeedback@microsoft.com.

Please note, Dynamic Threshold-based alerts are available for all Azure Monitor-based metric sources listed in the documentation, “Supported resources for metric alerts in Azure Monitor.”
Source: Azure

Improving the TypeScript support in Azure Functions

TypeScript is becoming increasingly popular in the JavaScript community. Since Azure Functions runs Node.js, and TypeScript compiles to JavaScript, motivated users could already get TypeScript code up and running in Azure Functions. However, the experience wasn’t seamless, and things like our default folder structure made getting started a bit tricky. Today we’re pleased to announce a set of tooling improvements that address this. Azure Functions users can now easily develop with TypeScript when building their event-driven applications!

For those unfamiliar, TypeScript is a superset of JavaScript which provides optional static typing, classes, and interfaces. These features allow you to catch bugs earlier in the development process, leading to more robust software engineering. TypeScript also indirectly enables you to leverage modern JavaScript syntax, since TypeScript is compatible with ECMAScript 2015.

With this set of changes to the Azure Functions Core Tools and the Azure Functions Extension for Visual Studio Code, Azure Functions now supports TypeScript out of the box! Included with these changes are a set of templates for TypeScript, type definitions, and npm scripts. Read on to learn more details about the new experience.

Templates for TypeScript

In the latest version of the Azure Functions Core Tools and the Azure Functions Extension for VS Code, you’re given the option to use TypeScript when creating functions. To be more precise, on creation of a new function app, you will now see the option to specify TypeScript during language stack selection. This action will opt you into default package.json and tsconfig.json files, setting up your app to be TypeScript compatible. After this, when creating a function, you will be able to select from a number of TypeScript-specific function templates. Each template represents one possible trigger, and there is a TypeScript equivalent for each template supported in JavaScript.

The best part of this new flow is that to transpile and run TypeScript functions, you don’t have to take any actions that are unique to Functions. For example, when you hit F5 to start debugging in Visual Studio Code, VS Code will automatically run the required installation tasks, transpile the TypeScript code, and start the Azure Functions host. This local development experience is best in class, and is exactly how you would start debugging any other app in VS Code.

Learn more about how to get your TypeScript functions up and running in our documentation.

Type definitions for Azure Functions

The @azure/functions package on npm contains type definitions for Azure Functions. Have you ever wondered what an Azure Function object is shaped like? Or the context object that is passed into every JavaScript function? This package helps! To get the most out of TypeScript, it should be imported in every .ts function. JavaScript purists can benefit too – including this package in your code gives you a richer IntelliSense experience. Check out the @azure/functions package on npm to learn more!

Npm scripts

Included by default in the TypeScript function apps is a package.json file including a few simple npm scripts. These scripts allow Azure Functions to fit directly into your typical development flow by calling specific Azure Functions Core Tools commands. For instance, ‘npm start’ will automatically run ‘func start’, meaning that after creating a function app you don’t have to treat it differently than any other Node.js project.

To see these in action, check out our example repo!

Try it yourself!

With either the Azure Functions Core Tools or the Azure Functions Extension for VS Code, you can try out the improved experience for TypeScript in Azure Functions on your local machine, even if you don’t have an Azure account.

Next steps

Get started with Azure Functions in VS Code.
Get started with Azure Functions in your CLI with the Azure Functions Core Tools.
Check out a sample TypeScript Function App.
Take a look at the Azure Functions JavaScript Developer Guide for additional details.
Sign up for an Azure free account if you don’t have one, and deploy your serverless apps to the cloud.

As always, feel free to reach out to the team with any feedback on our GitHub or Twitter. Happy coding!
Source: Azure

OpenShift Commons Briefing: State of the Operators with Daniel Messer (Red Hat)

OpenShift Commons Briefing Summary: In this briefing, Red Hat’s Daniel Messer gives an in-depth look at the state of Kubernetes Operators. He also delves into the Operator Framework, the SDK, the Lifecycle Manager, and the Operator Hub. Access the slides from this briefing: State of the Operators – Commons Briefing 02-19-2019. Join the Community at the Upcoming OpenShift […]
Source: OpenShift

OpenShift Commons Briefing: OpenShift 4.0 Release Update with Ali Mobrem

OpenShift Commons Briefing Summary: In this briefing, Red Hat’s Ali Mobrem gives an in-depth look at the release plans for OpenShift 4.0, as well as a general overview of what will be changing in this platform update release. He also discusses the use of Operators to deliver cluster management and automation to OpenShift 4.0 and the […]
Source: OpenShift

Announcing the general availability of Java support in Azure Functions

Azure Functions provides a productive programming model based on triggers and bindings for accelerated development and serverless hosting of event-driven applications. It enables developers to build apps using the programming languages and tools of their choice, with an end-to-end developer experience that spans from building and debugging locally, to deploying and monitoring in the cloud. Today, we’re pleased to announce the general availability of Java support in Azure Functions 2.0!

Ever since we first released the preview of Java in Functions, an increasing number of users and organizations have leveraged the capability to build and host their serverless applications in Azure. With the help of input from a great community of preview users, we’ve steadily improved the feature by adding support for easier authoring experiences and a more robust hosting platform.

What’s in the release?

With this release, Functions is now ready to support Java workloads in production, backed by our 99.95 percent SLA for both the Consumption Plan and the App Service Plan. You can build your functions based on Java SE 8 LTS and the Functions 2.0 runtime, while being able to use the platform (Windows, Mac, or Linux) and tools of your choice. This enables a wide range of options for you to build and run your Java apps in the 50+ regions offered by Azure around the world.

Powerful programming model

Using the unique programming model of Functions, you can easily connect your functions to cloud-scale data sources such as Azure Storage and Cosmos DB, and to messaging services such as Service Bus, Event Hubs, and Event Grid. Triggers and bindings enable you to invoke your function based on an HTTP request, a schedule, or an event in one of the aforementioned source systems. You can also retrieve information from or write back to these sources as part of the function logic, without having to worry about the underlying Java SDK.

Easier development and monitoring

Using the Azure Functions Maven plugin, you can create, build, and deploy your functions from any Maven-enabled project. The open source Functions 2.0 runtime enables you to run and debug your functions locally on any platform. For a complete DevOps experience, you can leverage the integration with Azure Pipelines or set up a Jenkins pipeline to build your Java project and deploy it to Azure.

What is even more exciting is that popular IDEs and editors such as Eclipse, IntelliJ, and Visual Studio Code can be used to develop and debug your Java Functions.

One of the added benefits of building your serverless applications with Functions is that you automatically get access to rich monitoring experiences thanks to the Azure Application Insights integration for telemetry, querying, and distributed tracing.

Enterprise-grade serverless

Azure Functions also makes it easy to build apps that meet your enterprise requirements. Leverage features like App Service Authentication / Authorization to restrict access to your app, and protect secrets using managed identities and Azure Key Vault. Azure boasts a wide range of compliance certifications, making it a fantastic host for your serverless Java functions.

Next steps

To get started, take a closer look at what the experience of building event-driven Java apps with Azure Functions looks like by following the links below:

Build your first serverless Java Function using the instructions in our tutorial.
Find the complete Azure Functions Java developer reference.
Follow upcoming features and design discussion on our GitHub repository.
Learn about all the great things you can do with Java on Azure.

With so much being released now and coming soon, we’d sincerely love to hear your feedback. You can reach the team on Twitter and on GitHub. We also actively monitor Stack Overflow and UserVoice, so feel free to ask questions or leave your suggestions. We look forward to hearing from you!
Source: Azure