Committed use discounts at a glance: New report shows your Compute Engine usage and commitments

Cloud adoption is fueled by the promise of increased flexibility, lower costs, and simplified pricing. At Google Cloud, we deliver on this promise with innovations like committed use discounts, which offer deep discounts of up to 57% off on-demand prices on VM usage in exchange for a one- or three-year commitment. Today, to help you analyze your Compute Engine resource footprint alongside your commitments, we are pleased to announce the Committed Use Discount Analysis report in beta. With this report, you can visualize your commitments directly within the Cloud Console to answer questions such as:

How much are my committed use discounts saving me on my bill?
Am I fully utilizing my existing commitments?
How much of my eligible usage is covered by commitments?
Is there an opportunity to save more by increasing my commitments?

Over the last two years, we’ve seen rapid adoption of committed use discounts, and recently expanded support to include local SSDs, GPUs, and Cloud TPU Pods based on customer feedback. Now, with the Committed Use Discount Analysis report, you get even greater transparency into your usage and cost savings so that you can maximize your discounts and minimize the time spent managing your commitments.

Early adopters of the Committed Use Discount Analysis report are already seeing the benefits:

“With this tool, we better understand our historical usage of eligible compute resources and how that compares to our commitment levels. Our commitment utilization and coverage is automatically calculated, enabling us to know when to purchase more commitments so that we can maximize our discounts. This gives us a higher level of confidence in purchasing commitments, allowing our teams to invest more in the innovations that drive Etsy’s vision.” – Dany Daya, Senior Program Manager, Etsy

Google Cloud is dedicated to providing you with cost management tools that make it easier to manage and optimize your Google Cloud Platform (GCP) costs. With this new feature and Cloud Billing reports, you can gain greater visibility into your costs and the impact of your discounts at a glance. You can start using the new Committed Use Discount Analysis report in the Cloud Console today.

Next steps

To learn more about committed use discounts and Google Cloud cost management tools, check out the following:

Documentation: Committed use discounts
Documentation: Analyze the effectiveness of your committed use discounts
Webpage: Google Cloud cost management tools
Videos: Billing & cost management
Feedback: Contact us!
Source: Google Cloud Platform

3 cool Cloud Run features that developers love—and that you will too

Earlier this year, we announced Cloud Run, a managed serverless compute platform for stateless containers. Cloud Run abstracts away infrastructure management and makes it easy to modernize apps. It allows you to easily run your containers either in your Google Kubernetes Engine (GKE) cluster with Cloud Run on GKE, or fully managed with Cloud Run.

Cloud Run has lots of great features, and you can read the full list on the webpage. But in conversations with customers, three key features of the fully managed version of Cloud Run stand out:

Pay for what you use pricing

Once above the always free usage limits, Cloud Run charges you only for the exact resources that you use. For a given container instance, you are only charged when:

The container instance is starting, and
At least one request or event is being processed by the container instance

During that time, Cloud Run bills you only for the allocated CPU and memory, rounded up to the nearest 100 milliseconds. Cloud Run also charges for network egress and number of requests.

As shared by Sebastien Morand, Team Lead Solution Architect at Veolia and Cloud Run developer, this allows you to run any stateless container with a very granular pricing model:

“Cloud Run removes the barriers of managed platforms by giving us the freedom to run our custom workloads at lower cost on a fast, scalable and fully managed infrastructure.”

Read more about Cloud Run pricing here.

Concurrency > 1

Cloud Run automatically scales the number of container instances you need to handle all incoming requests or events. However, contrary to other Functions-as-a-Service (FaaS) solutions like Cloud Functions, these instances can receive more than one request or event at the same time.

The maximum number of requests that can be processed simultaneously by a given container instance is called concurrency. By default, Cloud Run services have a maximum concurrency of 80.

Using a concurrency higher than 1 has a few benefits:

Better performance, by reducing the number of cold starts (requests or events that are waiting for a new container instance to be started)
Optimized resource consumption, and thus lower costs: if your code often waits for network operations to return (like calling a third-party API), the allocated CPU and memory can be used to process other requests in the meantime

Read more about the concept of concurrency here.

Secure event processing with Cloud Pub/Sub and Cloud IAM

Your Cloud Run services can receive web requests, but also other kinds of events, like Cloud Pub/Sub messages. We’ve seen customers leverage Cloud Pub/Sub and Cloud Run to achieve the following:

Transform data after receiving an event upon a file upload to a Cloud Storage bucket
Process their Stackdriver logs with Cloud Run by exporting them to Cloud Pub/Sub
Publish and process their own custom events from their Cloud Run services

The messages are pushed to your Cloud Run container instances via the HTTP protocol. By leveraging service accounts and Cloud IAM permissions, you can securely and privately push messages from Cloud Pub/Sub to Cloud Run without having to expose your Cloud Run service publicly.
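To make this push model concrete, here is a minimal sketch, not taken from the original post, of a Python/Flask service that could run on Cloud Run and accept Pub/Sub push requests. The envelope fields follow the standard Pub/Sub push format; the route, logging, and payload handling are illustrative assumptions.

    import base64
    import os

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/", methods=["POST"])
    def receive_message():
        # Pub/Sub push wraps each message in a JSON envelope:
        # {"message": {"data": "<base64>", "attributes": {...}}, "subscription": "..."}
        envelope = request.get_json()
        if not envelope or "message" not in envelope:
            return "Bad Request: invalid Pub/Sub envelope", 400

        payload = base64.b64decode(envelope["message"].get("data", "")).decode("utf-8")
        print(f"Received Pub/Sub message: {payload}")

        # A 2xx response acknowledges the message; anything else makes Pub/Sub retry.
        return "", 204

    if __name__ == "__main__":
        # Cloud Run provides the port to listen on via the PORT environment variable.
        app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))

When the service is deployed to require authentication, Cloud Run rejects unauthenticated callers before they ever reach this code.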
Only the Cloud Pub/Sub subscription that you have set up is able to invoke your service. You can achieve this with the following steps:

Deploy a Cloud Run service to receive the messages (it listens for incoming HTTP requests and returns a success response code when the message is processed)
Create a Cloud Pub/Sub topic
Create a new service account and grant it the “Cloud Run Invoker” role on your Cloud Run service
Create a push subscription and give it the identity of the service account

Read more about using Cloud Pub/Sub push with Cloud Run here, and follow this tutorial for a step-by-step example.

Run, don’t walk, to serverless containers

These are just a few of the neat things that developers appreciate about Cloud Run. To learn about all the other things to love about Cloud Run, check out these Cloud Run Quickstarts.
Source: Google Cloud Platform

Getting started with time-series trend predictions using GCP

Today’s financial world is complex, and the old technology used for constructing financial data pipelines isn’t keeping up. With multiple financial exchanges operating around the world and global user demand, these data pipelines have to be fast, reliable and scalable.

Currently, using an econometric approach—applying models to financial data to forecast future trends—doesn’t work for real-time financial predictions. And data that’s old, inaccurate or from a single source doesn’t translate into dependable data for financial institutions to use. But building pipelines with Google Cloud Platform (GCP) can help solve some of these key challenges. In this post, we’ll describe how to build a pipeline to predict financial trends in microseconds. We’ll walk through how to set up and configure a pipeline for ingesting real-time, time-series data from various financial exchanges, and how to design a suitable data model, which facilitates querying and graphing at scale.

You’ll find a tutorial below on setting up and deploying the proposed architecture using GCP, particularly these products:

Cloud Dataflow for a scalable data ingestion system that can handle late data
Cloud Bigtable, our scalable, low-latency time-series database that’s reached 40 million transactions per second on 3,500 nodes
Bonus: a scalable ML pipeline using TensorFlow eXtended, while not part of this tutorial, is a logical next step

The tutorial will explain how to establish a connection to multiple exchanges, subscribe to their trade feeds, and extract and transform these trades into a flexible format to be stored in Cloud Bigtable and be available to be graphed and analyzed. This will also set the foundation for ML online learning predictions at scale. You’ll see how to graph the trades, volume, and time delta from trade execution until it reaches our system (an indicator of how close to real time we can get the data). You can find more details on GitHub too.

Before you get started, note that this tutorial uses billable components of GCP, including Cloud Dataflow, Compute Engine, Cloud Storage and Cloud Bigtable. Use the Pricing Calculator to generate a cost estimate based on your projected usage. However, you can try the tutorial for one hour at no charge in this Qwiklab tutorial environment.

Getting started building a financial data pipeline

For this tutorial, we’ll use cryptocurrency real-time trade streams, since they are free and available 24/7 with minimum latency. We’ll use this framework that has all the data exchange stream definitions in one place, since every exchange has a different API to access data streams.

Here’s a look at the real-time, multi-exchange observer that this tutorial will produce:

First, we need to capture as much real-time trading data as possible for analysis. However, the large amount of currency and exchange data available requires a scalable system that can ingest and store such volume while keeping latency low. If the system can’t keep up, it won’t stay in sync with the exchange data stream. Here’s what the overall architecture looks like:

The usual requirement for trading systems is low-latency data ingestion.
To this, we add the need for near real-time data storage and querying at scale.

How the architecture works

For this tutorial, the source code is written in Java 8, Python 2.7, and JavaScript, and we use Maven and PIP for dependency/build management.

There are five main framework units for this code:

We’ll use the XChange-stream framework to ingest real-time trading data with low latency from globally scattered data sources and exchanges, with the possibility to adapt the data ingest worker pipeline location and easily add more trading pairs and exchanges. This Java library provides a simple and consistent streaming API for interacting with cryptocurrency exchanges via the WebSocket protocol. You can subscribe for live updates via reactive streams of the RxJava library. This helps connect and configure some exchanges, including BitFinex, Poloniex, BitStamp, OKCoin, Gemini, HitBTC and Binance.

For parallel processing, we’ll use Apache Beam for an unbounded streaming source that works with multiple runners and can manage basic watermarking, checkpointing and record IDs for data ingestion. Apache Beam is an open-source unified programming model to define and execute data processing pipelines, including ETL and batch and stream (continuous) processing. It supports Apache Apex, Apache Flink, Apache Gearpump, Apache Samza, Apache Spark, and Cloud Dataflow.

To achieve strong consistency, linear scalability, and super low latency for querying the trading data, we’ll use Cloud Bigtable with Beam, using the HBase API as the connector and writer to Cloud Bigtable. See how to create a row key and a mutation function prior to writing to Cloud Bigtable.

For a real-time API endpoint, we’ll use a Flask web server at port 5000 plus a Cloud Bigtable client to query Cloud Bigtable and serve as an API endpoint. We’ll also use a JavaScript visualization with a Vis.JS Flask template to query the real-time API endpoint every 500ms. The Flask web server will run in the GCP VM instance.

For easy and automated setup with a project template for orchestration, we’ll use Terraform. Here’s an example of dynamic variable insertion from the Terraform template into the GCP compute instance.

Define the pipeline

For every exchange and trading pair, create a different pipeline instance. This consists of three steps:

UnboundedStreamingSource that contains ‘UnboundedStreamingSourceReader’
Cloud Bigtable pre-writing mutation and key definition
Cloud Bigtable write step

Make the Cloud Bigtable row key design decisions

In this tutorial, our data transport object looks like this:

We formulated the row key structure like this: TradingCurrency#Exchange#SystemTimestampEpoch#SystemNanosTime.

So a row key might look like this: BTC/USD#Bitfinex#1546547940918#63187358085, with these definitions:

BTC/USD: trading pair
Bitfinex: exchange
1546547940918: epoch timestamp
63187358085: system nanotime

We added nanotime at the end of our key to help avoid multiple versions per row for different trades. Two DoFn mutations might execute in the same epoch millisecond if there is a streaming sequence of TradeLoad DTOs, so adding nanotime at the end splits each millisecond into an additional one million possible values. We also recommend hashing the volume-to-price ratio and attaching the hash at the end of the row key. Row cells will contain an exact schema replica of the exchange TradeLoad DTO (see the table above).
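As a small illustration of this key scheme, here is a hedged Python sketch, not part of the original tutorial (which implements this step in Java inside the Beam pipeline). The function and parameter names are assumptions; only the key format itself comes from the text above.

    import time

    def build_row_key(currency_pair, exchange, epoch_millis=None, nanos=None):
        # Key format: TradingCurrency#Exchange#SystemTimestampEpoch#SystemNanosTime,
        # e.g. BTC/USD#Bitfinex#1546547940918#63187358085
        if epoch_millis is None:
            epoch_millis = int(time.time() * 1000)   # system epoch time in milliseconds
        if nanos is None:
            nanos = time.monotonic_ns()              # nanotime term that separates same-millisecond trades
        return f"{currency_pair}#{exchange}#{epoch_millis}#{nanos}"

    # Example: key for a BTC/USD trade observed on Bitfinex right now.
    print(build_row_key("BTC/USD", "Bitfinex"))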
This row key choice helps move from the specific (trading pair to exchange) to the general (timestamp to nanotime), avoiding hotspots when you query the data.

Set up the environment

If you are familiar with Terraform, it can save you a lot of time setting up the environment using the Terraform instructions. Otherwise, keep reading.

First, you should have a Google Cloud project associated with a billing account (if not, check out the getting started section). Log into the console, and activate a cloud console session.

Next, create a VM with the following command:

Note that we used the Compute Engine service account with Cloud API scope to make it easier to build up the environment. Wait for the VM to come up and SSH into it.

Install the necessary tools like Java, Git, Maven, PIP, Python 2.7 and the Cloud Bigtable command-line tool using the following command:

Next, enable some APIs and create a Cloud Bigtable instance and bucket:

In this scenario, we use a single column family called “market” to simplify the Cloud Bigtable schema design (more on that here):

Once that’s ready, clone the repository:

Then build the pipeline:

If everything worked, you should see this at the end and can start the pipeline:

Ignore any illegal thread pool exceptions. After a few minutes, you’ll see the incoming trades in the Cloud Bigtable table:

To observe the Cloud Dataflow pipeline, navigate to the Cloud Dataflow console page. Click on the pipeline and you’ll see the job status is “running”:

Add a visualization to your data

To run the Flask front-end server visualization to further explore the data, navigate to the front-end directory inside your VM and build the Python package.

Open firewall port 5000 for visualization:

Link the VM with the firewall rule:

Then, navigate to the front-end directory:

Find your external IP in the Google Cloud console and open it in your browser with port 5000 at the end, like this: http://external-ip:5000/stream

You should be able to see the visualization of the aggregated BTC/USD pair on several exchanges (without the predictor part). Use your newfound skills to ingest and analyze financial data quickly!

Clean up the tutorial environment

We recommend cleaning up the project after finishing this tutorial to return to the original state and avoid unnecessary costs.

You can clean up the pipeline by running the following command:

Then empty and delete the bucket:

Delete the Cloud Bigtable instance:

Exit the VM and delete it from the console.

Learn more about Cloud Bigtable schema design for time-series data, correlating thousands of financial time series streams in real time, and check out other Google Cloud tips.

Special thanks to contributions from: Daniel De Leo, Morgante Pell, Yonni Chen and Stefan Nastic.

Google does not endorse trading or other activity from this post and does not represent or warrant to the accuracy of the data.
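To give a flavor of how the Flask API endpoint can read these rows back, here is a hedged sketch using the Python client for Cloud Bigtable. The project ID, instance name, and table name are placeholders (the actual names come from the setup commands and repository above); the single “market” column family matches the schema described in the tutorial.

    from google.cloud import bigtable
    from google.cloud.bigtable.row_set import RowSet

    # Placeholder identifiers; substitute your own project and the instance/table
    # you created during setup.
    client = bigtable.Client(project="my-gcp-project")
    table = client.instance("my-bigtable-instance").table("my-trades-table")

    # Scan all rows for BTC/USD trades on Bitfinex via the TradingCurrency#Exchange# prefix.
    prefix = b"BTC/USD#Bitfinex#"
    row_set = RowSet()
    row_set.add_row_range_from_keys(start_key=prefix, end_key=prefix + b"\xff")

    for row in table.read_rows(row_set=row_set):
        # Cells live in the single "market" column family defined during setup.
        for column, cells in row.cells.get("market", {}).items():
            print(row.row_key.decode(), column.decode(), cells[0].value.decode())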
Source: Google Cloud Platform

Azure Stack IaaS – part ten

This blog is co-authored by Andrew Westgarth, Senior Program Manager, Azure Stack 

Journey to PaaS

One of the best things about running your VMs in Azure or Azure Stack is that you can begin to modernize around your virtual machines (VMs) by taking advantage of the services provided by the cloud. Platform as a Service (PaaS) is the term often applied to the capabilities that are available to your application to use without the burden of building and maintaining these capabilities yourself. Actually, cloud-IaaS itself is a PaaS, since you do not have to build or maintain the underlying hypervisors, software-defined network and storage, or even the self-service API and portal. Furthermore, Azure and Azure Stack give you PaaS services which you can use to modernize your application.

When you need to modernize your application, you have the option to leverage cloud-native technologies and pre-built services delivered to you by the cloud platform. Many teams choose to utilize these capabilities when they add new capabilities or features to their existing app. This way they can keep the mature parts of their application in VMs while tapping into the convenience of the cloud for new code.

In this article we will explore how you can modernize your application with web apps, serverless functions, blob storage, and Kubernetes as part of your Journey to PaaS.

Web apps with Azure App Services

Back in the virtualization days we had VM sprawl. So many VMs were taking up resources just to perform a single purpose. The most common was an entire VM just to host a website. Using an entire VM is wasteful not only from a resources point of view, but also in its ongoing management. Azure App Services gives you a new option: instead of creating a VM and installing a web server, you can just create a website directly on the platform. It is super easy to create a web app in the Azure Stack portal. Really, the main thing you must provide is the website name, which becomes part of the overall DNS name just like in Azure, but on the network where you have installed your Azure Stack.

Once your web app is deployed, you can push your website to it using FTP, Web Deploy, or a repository. You can set this up from the deployment options of the web app.

You can even create a custom domain for your web app so that your users can access your website with a more recognizable name. So, a URL like timecard.appservice.midwest.azurestack.corp.contoso.com can become timecard.contoso.com or some other domain entirely.

You can learn more about Azure App Services on Azure Stack:

App Services on Azure Stack (for the operator)
App Service Custom Domains
App Service Documentation
Quickstart guides for App Services

Serverless functions with Azure Functions

If you have something even smaller than a website, why create an entire VM? With Azure Functions you can simply host your code with the platform, no VM required. Example candidates for Azure Functions are scripts that need to be run on a schedule or a script that is triggered by a web request. I’ve seen people using functions to take periodic measurements of resource usage, check if something is responding properly, or notify another system that a condition has been met. Azure Stack supports functions written in CSharp, JavaScript, FSharp, PowerShell*, Python*, TypeScript*, PHP*, and Batch* (* indicates experimental languages).

Functions are easy to create directly in the portal. You simply need to provide a name for the function then pick a scenario template to get started:

You can even test and run your function directly in the Azure Stack portal:
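For a sense of how small these functions can be, here is a minimal HTTP-triggered function written in Python, one of the experimental languages listed above. This is only a sketch: the greeting logic and the "name" query parameter are illustrative, and it assumes the usual function.json binding that routes HTTP requests to the function.

    # __init__.py of an HTTP-triggered function (paired with a function.json
    # declaring an httpTrigger input binding and an http output binding).
    import azure.functions as func

    def main(req: func.HttpRequest) -> func.HttpResponse:
        # Read an optional "name" query parameter from the incoming request.
        name = req.params.get("name", "world")

        # Whatever is returned here is sent back as the HTTP response.
        return func.HttpResponse(f"Hello, {name}! This ran without a VM.", status_code=200)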

Learn more:

Overview of Azure Functions
Create your first function in the Azure Portal

Storage as a service with Azure Storage

Another common but wasteful use of VMs in the virtualization days was storage. Teams set up their own file servers for their web app’s images or for sharing of documents. When you use Azure Stack you can take advantage of the built-in platform storage features. Azure Storage in Azure Stack gives you four storage options:

Blobs for unstructured data like web images, files, and logs that can be accessed by a code-friendly URL instead of a file-system object.
Queues for durable message queuing accessible via a web friendly REST API.
Tables for semi-structured data like JSON, with OData-based queries.
Disks for persistent storage to support your Azure Virtual Machines.

To get started with Azure Storage, you create a storage account in your Azure Stack subscription. Once you create the storage account, you can create blobs, tables, or queues. Here I have created a queue of items for my team to bring to a picnic, which they can de-queue when they sign up to bring an item:
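In code, the same kind of queue can be created and drained with the Azure Storage SDK for Python. This is a hedged sketch: the account URL uses an illustrative Azure Stack endpoint suffix, and the queue name and credential are placeholders.

    from azure.storage.queue import QueueClient

    # Placeholder endpoint and credentials; the queue DNS suffix on Azure Stack
    # depends on your deployment (e.g. local.azurestack.external).
    queue = QueueClient(
        account_url="https://mystorageaccount.queue.local.azurestack.external",
        queue_name="picnic-items",
        credential="<storage-account-key>",
    )

    queue.create_queue()
    for item in ["plates", "napkins", "lemonade"]:
        queue.send_message(item)

    # A team member signs up for an item by de-queuing it.
    for msg in queue.receive_messages(messages_per_page=1):
        print("I'll bring:", msg.content)
        queue.delete_message(msg)
        break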

Using Azure Table Storage in Azure Stack allows you to store and query semi-structured data for apps that require a flexible data schema. You can access this data through the Azure Storage SDK, or use tools like Azure Storage Explorer or Microsoft Excel to view and edit the data. Here is a baseball roster stored in Azure Table Storage, viewed from Azure Storage Explorer:

Here is the same data accessed from Microsoft Excel:

Sharing unstructured data is easy as well. Here are all the logs my IoT devices are creating. I can download or even upload logs right in the portal or in code via the blob URL:
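The Azure Storage SDK for Python works the same way for blobs. A hedged sketch follows, with placeholder account URL, container, and blob names; depending on your Azure Stack build you may need to pin an SDK or storage API version that the stamp supports.

    from azure.storage.blob import BlobServiceClient

    service = BlobServiceClient(
        account_url="https://mystorageaccount.blob.local.azurestack.external",
        credential="<storage-account-key>",
    )
    container = service.get_container_client("iot-logs")   # assumed container name

    # Upload a device log...
    with open("device42.log", "rb") as data:
        container.upload_blob(name="device42.log", data=data, overwrite=True)

    # ...and read it back later through the same code-friendly blob path.
    log_text = container.download_blob("device42.log").readall().decode("utf-8")
    print(log_text[:200])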

Learn more:

Azure Blob Storage
Azure Disk Storage
Azure Table Storage
Azure Queue Storage
Connect Azure Storage Explorer to Azure Stack
Connect Microsoft Excel to Azure Table Storage

Secrets with KeyVault

How did you keep passwords and certificates secret in the virtualization days? Admit it, not always best practices. KeyVault is a built-in Azure platform service that provides a secure place for you to keep your secrets. You can create a VM on Azure Stack with a secret kept in KeyVault. This way you don't need to put passwords or other secrets in clear text in the template you use to deploy the VM.  This is just another example of how you can modernize your VMs by taking advantage of the Azure platform services.
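The scenario above references the secret from the VM deployment template rather than from code, but as a small, hedged illustration of reading a secret programmatically, the Azure SDK for Python looks roughly like this; the vault URL and secret name are placeholders, and the KeyVault DNS suffix on Azure Stack is specific to your deployment.

    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    client = SecretClient(
        vault_url="https://my-vault.vault.local.azurestack.external",
        credential=DefaultAzureCredential(),
    )

    # Retrieve the VM admin password at deployment time instead of hard-coding it.
    secret = client.get_secret("vm-admin-password")   # assumed secret name
    print("Retrieved secret of length:", len(secret.value))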

Learn more:

Azure KeyVault Overview
Create a VM in Azure Stack using a secure password in KeyVault

Containers with Kubernetes

Another great way to modernize your application is to take advantage of containers. Azure Stack gives you the option to host your containers in a Kubernetes cluster. This Kubernetes cluster is created using the Azure Kubernetes Service (AKS) engine, so that you can easily move your applications between AKS in Azure and your Kubernetes cluster in Azure Stack. Creating a Kubernetes cluster is easy in Azure Stack: simply deploy it from the portal’s marketplace.
Azure Stack Kubernetes Cluster is currently in public preview.

Learn more:

Deploy Kubernetes to use containers with Azure Stack
Add Kubernetes to the Azure Stack Marketplace (for operators)
Azure AKS Engine

The platform for apps not just VMs

The primary focus of virtualization platforms is helping improve the life of the infrastructure team. But with cloud platforms the focus is on improving the life for business units and developers. Virtualization VMs will only take you so far. But when you move your VMs to cloud-IaaS in Azure or Azure Stack, you can modernize your app to stay current with your users who now expect cloud cadence. Tapping into native PaaS services gets you out of the business of infrastructure and into the business of your app.

In this blog series

We hope you come back to read future posts in this blog series. Here are some of our past topics:

Azure Stack at its core is an Infrastructure-as-a-Service (IaaS) platform
Start with what you already have
Fundamentals of IaaS
Protect your stuff
Do it yourself
Pay for what you use
It takes a team
If you do it often, automate it
Build on the success of others

Source: Azure

Azure.Source – Volume 87

News and updates

Microsoft FHIR Server for Azure extends to SQL

Since the launch of the open source FHIR Server for Azure on GitHub last November, we have been humbled by the tremendously positive response and surge in the use of FHIR in the healthcare community. There has been great interest in Microsoft expanding capabilities in the FHIR service, and today we are pleased to announce that the open source FHIR Server for Azure now supports both Azure Cosmos DB and SQL backed persistence providers. With the SQL persistence provider, developers will be able to perform complex search queries that join information across multiple FHIR resource types and leverage transactions.

Now available

Azure Shared Image Gallery now generally available

At Microsoft Build 2019, we announced the general availability of Azure Shared Image Gallery, making it easier to manage, share, and globally distribute custom virtual machine (VM) images in Azure. Shared Image Gallery provides a simple way to share your applications with others in your organization, within or across Azure Active Directory (AD) tenants and regions. This enables you to expedite regional expansion or DevOps processes and simplify your cross-region HA/DR setup. This blog explains the key benefits of this feature.

Technical content

Monitoring on Azure HDInsight Part 3: Performance and resource utilization

This is the third blog post in a four-part series on Monitoring on Azure HDInsight. Part 1 is an overview that discusses the three main monitoring categories: cluster health and availability, resource utilization and performance, and job status and logs. Part 2 centered on the first topic, monitoring cluster health and availability. This blog covers the second of those topics, performance and resource utilization, in more depth.

Simplify B2B communications and free your IT staff

Today’s business data ecosystem is a network of customers and partners communicating continuously with each other. The traditional way to do this is by establishing a business-to-business (B2B) relationship. The B2B communication requires a formal agreement between the entities. Then the two sides must agree on the formatting of messages. The Azure platform offers a wealth of services for partners to enhance, extend, and build industry solutions. Here we describe how one Microsoft partner uses Azure to solve a unique problem.

Accelerating smart building solutions with cloud, AI, and IoT

Throughout our Internet of Things (IoT) journey we’ve seen solutions evolve from device-centric models, to spatially-aware solutions that provide real-world context. Last year at Realcomm | IBcon, we announced Azure IoT’s vision for spatial intelligence, diving into scenarios that uniquely join IoT, artificial intelligence (AI), and productivity tools. This year we’ve returned to Realcomm | IBcon, joined by over 30 partners who have delivered innovative solutions using our spatial intelligence and device security services to provide safety to construction sites, operate buildings more efficiently, utilize space more effectively, and boost occupant productivity and satisfaction. Here we’ll tell you more about a selection of these smart building partners who are accelerating digital transformation in their industries.

Taking advantage of the new Azure Application Gateway V2

We recently released Azure Application Gateway V2 and Web Application Firewall (WAF) V2. These SKUs are named Standard_v2 and WAF_v2 respectively and are fully supported with a 99.95 percent SLA. The new SKUs offer significant improvements and additional capabilities to customers. Read the blog for an explanation of the key features.

Virtual machine memory allocation and placement on Azure Stack

Customers have been using Azure Stack in a number of different ways. We continue to see Azure Stack used in connected and disconnected scenarios, as a platform for building applications to deploy both on-premises and in Azure. Many customers want to just migrate existing applications over to Azure Stack as a starting point for their hybrid or edge journey. The purpose of this post is to detail how virtual machine (VM) placement works in Azure Stack, with a focus on the different components that come into play when deciding the available memory for capacity planning.

Three things to know about Azure Machine Learning Notebook VM

Data scientists have a dynamic role. They need environments that are fast and flexible while upholding their organization’s security and compliance policies.

Data scientists working on machine learning projects need a flexible environment to run experiments, train models, iterate models, and innovate in. They want to focus on building, training, and deploying models without getting bogged down in prepping virtual machines (VMs), vigorously entering parameters, and constantly going back to IT to make changes to their environments. Moreover, they need to remain within compliance and security policies outlined by their organizations. This blog looks at three ways the Azure Machine Learning Notebook VM makes life easier for data scientists.

Creating custom VM images in Azure using Packer | Azure Tips and Tricks

Azure provides rich support for open source tools to automate infrastructure deployments. Some of these tools include HashiCorp's Packer and Terraform. In a few simple steps, we'll learn how to create custom Linux VM images in Azure using Packer.

Additional technical content

Is it cost-efficient to run Spring Boot on Azure Functions?
Fast Focus: Serverless Computing – Azure Functions and Xamarin in 20 Minutes
Schedule Recurring Builds in App Center
Home Grown IoT – Solution Design

Azure shows

Troubleshoot resource property changes using Change History in Azure Policy | Azure Friday

Jenny Hunter joins Donovan to showcase a new integration inside Azure Policy that enables you to see recent changes to the properties for non-compliant Azure resources. Public preview of the Resource Change History API is also now available.

Willow is the digital twin for the built world | Internet of Things Show

Willow and Microsoft are partnering together to empower every person and organization to connect with the built world in a whole new way. This digital disruption is happening today, with Digital Twin technology.

Over-the-air software updates for Azure IoT Hub with Mender.io | Internet of Things Show

Introduction to a secure and robust over-the-air (OTA) software update process for Azure IoT Hub with Mender.io, an open source update manager for connected devices. We will cover key considerations for being successful with software updates to connected devices and show a live demo deploying software to a physical device.

Five things you can do with serverless | Five Things

Serverless is like CrossFit, the first rule is to never stop talking about it. In this episode, Eduardo Laureano from the Azure Functions team brings you five things you can do with Serverless that you might not realize are even possible. Also, Burke wears a sweater vest and Eduardo insinuates that there is a better candy than Goo Goo Clusters. The nerve.

Migrating from Entity Framework 6 to Core | On .NET

Entity Framework (EF) Core is a lightweight and cross-platform version of the popular Entity Framework data access technology. In this episode, Diego Vega joins Christos to show us how we can port our Entity Framework 6 code to Entity Framework Core.

How to get started with Azure Machine Learning Service | Azure Tips and Tricks 

In this edition of Azure Tips and Tricks, learn how to get started with the Azure Machine Learning Service and how you can use it from Visual Studio Code.

Xamarin.Forms 4 – Who could ask for anything more? | The Xamarin Podcast


Episode 283: .NET and Azure | The Azure Podcast

Sujit and Cynthia talk with VS and .NET Director, Scott Hunter, on how Microsoft is shifting paradigms in the Linux world and .NET development experience with Azure.


Industries and partners

Join Microsoft at ISC2019 in Frankfurt

The world of computing goes deep and wide when it comes to working on issues related to our environment, economy, energy, and public health systems. These needs require modern, advanced solutions that can be hard to scale, take a long time to deliver, and were traditionally limited to a few organizations. Join us at the world's second-largest supercomputing show, ISC High Performance 2019. Learn how Azure customers combine the flexibility and elasticity of the cloud with both our specialized compute virtual machines (VMs) and bare-metal offerings from Cray.
Source: Azure

What’s new in automation software deployment?

Digital business automation software can help companies scale operations, improve customer experiences and control costs. Most business and IT leaders have moved on from understanding the business value of automation to how best to implement it. Part of implementation success is choosing the right deployment environment. The following three options fit different business needs:

On premises. Some firms and government agencies keep all of their data on site, either because their mission requires it or because they have yet to migrate to cloud-based data.
Hybrid cloud. By using containers, firms can run software in their own cloud or contract with a vendor for cloud hosting across several private or public clouds.
SaaS. Data centers provide cloud hosting of data in a secure environment, with all infrastructure managed by the vendor.

Whether you choose one or any combination of the three, look for flexibility and common automation software, tools and capabilities that work across all hosting environments.
What’s new: Deploying on the cloud of your choice
IBM recently introduced a new hybrid cloud deployment option for its digital business automation platform called IBM Cloud Pak for Automation. This deployment of the digital business automation platform is designed to overcome the challenges that hybrid cloud environments present by delivering several key benefits:

Run on the cloud of your choice. Businesses have the freedom to manage digital business automation solutions consistently within the Kubernetes environment of their choice.
Ensure consistency in virtualization. Enterprises can manage containers and virtual machines consistently in a single operating environment.
Use a common deployment interface. Companies can deploy one or more automation capabilities simultaneously using a single tool.
Gain operational insights. Wherever the platform automates work to increase efficiency, it also aggregates large amounts of data. There’s great business growth potential in that data, but how can businesses make sense of it? The IBM Cloud Pak includes a capability that provides an analytical window into company data by capturing data across the platform. For example, a large bank in South America needed better visibility and insights into its data to further improve operations. By using the analytics capability within the platform, the bank can gain visibility into their operations across various teams. The bank’s data scientists are now in the process of applying machine learning to the operational data in combination with other business data to gather additional insights into business operations.

Watch this intro video to learn more.
The advantages of a strong automation platform
Leading-edge automation uses a platform so users can automate most types of work at scale and get the greatest deployment and purchasing flexibility.
With the IBM automation platform – on which the IBM Cloud Pak for Automation is the latest deployment option – businesses can do the following:

Build business automation services to digitize and scale work. For example, companies can use the tool to build a workflow service that automates an end-to-end process, including a mix of straight-through processing and human interaction.
Create business apps for users that interact with enterprise data, digital agents and business automation services. For example, a financial services company could develop an app for a loan officer to open, manage and progress loans as part of a workflow.
Build and deploy intelligent digital agents to automate human tasks. For example, the IBM Cloud Pak for Automation can help a company create a robotic process automation (RPA) bot that calls a decision service to automate customer onboarding.
Run it everywhere – deploy all or part of the IBM automation platform on-premises, on hybrid cloud, or managed on the IBM cloud (SaaS).
Start with just one market-leading capability, like task or decision automation, and evolve from there.

Learn more about IBM Cloud Pak for Automation.
The post What’s new in automation software deployment? appeared first on Cloud computing news.
Source: Thoughts on Cloud

Enhanced OpenShift Red Hat AMQ Broker container image for monitoring

Previously, I blogged about how to enhance your JBoss AMQ 6 container image for production: I explained how to externalise configuration and add Prometheus monitoring. While I already covered the topic well, I had to deal with this topic for version 7.2 of Red Hat AMQ Broker recently, and as things have slightly changed for […]
The post Enhanced OpenShift Red Hat AMQ Broker container image for monitoring appeared first on Red Hat OpenShift Blog.
Source: OpenShift