Enabling Diagnostic Logging in Azure API for FHIR®

Access to diagnostic logs is essential for any healthcare service that must comply with regulatory requirements such as HIPAA. The feature in Azure API for FHIR that makes this possible is Diagnostic settings in the Azure portal UI. For details on how Azure diagnostic logs work, please refer to the Azure Diagnostic Logs documentation.

At this time, the service emits the following fields in the audit log:

Field Name | Type | Notes
TimeGenerated | DateTime | The date and time of the event.
OperationName | String |
CorrelationId | String |
RequestUri | String | The request URI.
FhirResourceType | String | The resource type the operation was executed for.
StatusCode | Int | The HTTP status code (e.g., 200).
ResultType | String | The available values are currently 'Started', 'Succeeded', or 'Failed'.
OperationDurationMs | Int | The milliseconds it took to complete the request.
LogCategory | String | The log category. The value currently emitted is 'AuditLogs'.
CallerIPAddress | String | The caller's IP address.
CallerIdentityIssuer | String | The issuer of the caller identity.
CallerIdentityObjectId | String | The object ID of the caller identity.
CallerIdentity | Dynamic | A generic property bag containing identity information.
Location | String | The location of the server that processed the request (e.g., South Central US).
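To illustrate how these fields fit together, the sketch below filters audit-log records for failed requests and averages request duration per operation. The sample records are hypothetical, shaped like the fields above; real entries come from the service's diagnostic export.

```python
from collections import defaultdict

# Hypothetical audit-log records using the field names from the table above.
records = [
    {"TimeGenerated": "2019-10-01T12:00:00Z", "OperationName": "search",
     "FhirResourceType": "Patient", "StatusCode": 200,
     "ResultType": "Succeeded", "OperationDurationMs": 42},
    {"TimeGenerated": "2019-10-01T12:00:05Z", "OperationName": "create",
     "FhirResourceType": "Observation", "StatusCode": 500,
     "ResultType": "Failed", "OperationDurationMs": 130},
]

# Failed requests are candidates for a compliance or reliability report.
failed = [r for r in records if r["ResultType"] == "Failed"]

# Average duration (in ms) per operation name.
durations = defaultdict(list)
for r in records:
    durations[r["OperationName"]].append(r["OperationDurationMs"])
avg_ms = {op: sum(v) / len(v) for op, v in durations.items()}
```

The same kind of aggregation can be expressed as a query once the logs are streamed to a Log Analytics workspace.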

How do I get to my Audit Logs?

To enable diagnostic logging in Azure API for FHIR, navigate to Diagnostic settings in the Azure portal. Here you will see the standard UI that all Azure services use for emitting diagnostic logs.

There are three ways to route the diagnostic logs:

Archive to a storage account for auditing or manual inspection.
Stream to an event hub for ingestion by a third-party service or custom analytics solution, such as Power BI.
Stream to a Log Analytics workspace in Azure Monitor.

Please note that it may take up to 15 minutes for the first logs to show up in Log Analytics.

For more information on how to work with Diagnostic Logs, please refer to Diagnostic Logs documentation.

Conclusion

Having access to diagnostic logs is essential for monitoring the service and producing compliance reports. Azure API for FHIR allows you to do this through diagnostic logs.

FHIR® is the registered trademark of HL7 and is used with the permission of HL7.
Source: Azure

Amazon CloudFront expands to 200 locations, with new edge locations in Colombia, Chile, and Argentina and prices reduced by up to 56% in South America

Details: Amazon CloudFront announces its first edge locations in Colombia, Chile, and Argentina. With these edge locations, viewers in these countries can see an average 60 percent improvement in latency when accessing content through CloudFront. In addition, as of November 1, 2019, CloudFront will reduce prices for on-demand data transfer in South America by up to 56 percent. The new prices for South America can be found on the CloudFront pricing page. CloudFront now has 200 points of presence across 77 cities in 37 countries. A blog post by Jeff Barr on this launch is available here.
Source: aws.amazon.com


We are pleased to announce support today for Amazon Linux 2 and new instance types. This update makes the following easier and more cost-effective:
Source: aws.amazon.com

AWS Batch introduces new allocation strategies

Starting today, allocation strategies can be set in AWS Batch, giving customers two additional methods for AWS Batch to allocate compute resources. With these strategies, customers can take both throughput and price into account when specifying how AWS Batch should scale instances on their behalf.
Source: aws.amazon.com

Amazon FreeRTOS is now available in the AWS Middle East (Bahrain) and Asia Pacific (Hong Kong) Regions

Amazon FreeRTOS is an IoT operating system for microcontrollers that extends the FreeRTOS kernel with software libraries for security, connectivity, and updatability, making small, low-power edge devices easier to program, deploy, secure, connect, and manage. Amazon FreeRTOS is open source, free to download and use, and provides everything needed to easily program connected microcontroller-based devices. It collects data from these devices for IoT applications and helps you scale those applications to millions of devices.
Source: aws.amazon.com

Linux kernel: machine learning alone does not find bugs

Using machine learning, Linux kernel developer Sasha Levin searches for patches for the stable branches that improve code. But can he also use the system to find patches that contain bugs? For Levin this is a hard problem to solve, but he has a few ideas about how it might be done. (Linux kernel, Linux)
Source: Golem

TensorFlow 2.0 on Azure: Fine-tuning BERT for question tagging

This post is co-authored by Abe Omorogbe, Program Manager, Azure Machine Learning, and John Wu, Program Manager, Azure Machine Learning

Congratulations to the TensorFlow community on the release of TensorFlow 2.0! In this blog, we aim to highlight some of the ways that Azure can streamline the building, training, and deployment of your TensorFlow model. In addition to reading this blog, check out the demo discussed in more detail below, showing how you can use TensorFlow 2.0 in Azure to fine-tune a BERT (Bidirectional Encoder Representations from Transformers) model for automatically tagging questions.

TensorFlow 1.x is a powerful framework that enables practitioners to build and run deep learning models at massive scale. TensorFlow 2.0 builds on the capabilities of TensorFlow 1.x by integrating more tightly with Keras (a library for building neural networks), enabling eager mode by default, and implementing a streamlined API surface.

TensorFlow 2.0 on Azure

We've integrated TensorFlow 2.0 with the Azure Machine Learning service to make bringing your TensorFlow workloads into Azure as seamless as possible. The Azure Machine Learning service provides an SDK that lets you write machine learning models in your preferred framework and run them on the compute target of your choice, including a single virtual machine (VM) in Azure, a GPU (graphics processing unit) cluster in Azure, or your local machine. The Azure Machine Learning SDK for Python has a dedicated TensorFlow estimator that makes it easy to run TensorFlow training scripts on any compute target you choose.
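As a configuration sketch of what submitting such a run could look like (the directory, script, and cluster names below are placeholders, and the exact estimator arguments may vary between SDK versions; a real run also requires an Azure ML workspace config file):

```python
from azureml.core import Workspace, Experiment
from azureml.train.dnn import TensorFlow

# Load workspace details from a local config.json (placeholder setup).
ws = Workspace.from_config()

# Configure a TensorFlow 2.0 training run on a hypothetical GPU cluster.
estimator = TensorFlow(source_directory="./training",
                       entry_script="train.py",
                       compute_target="gpu-cluster",
                       framework_version="2.0",
                       use_gpu=True)

# Submit the run and stream its progress.
run = Experiment(ws, "bert-question-tagging").submit(estimator)
run.wait_for_completion(show_output=True)
```

The returned run object is the same handle the training notebook uses to monitor progress.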

In addition, the Azure Machine Learning service Notebook VM comes with TensorFlow 2.0 pre-installed, making it easy to run Jupyter notebooks that use TensorFlow 2.0.

TensorFlow 2.0 on Azure demo: Automated labeling of questions with TF 2.0, Azure, and BERT

As we’ve mentioned, TensorFlow 2.0 makes it easy to get started building deep learning models. Using TensorFlow 2.0 on Azure makes it easy to get the performance benefits of Microsoft’s global, enterprise-grade cloud for whatever your application may be.

To highlight the end-to-end use of TensorFlow 2.0 in Azure, we prepared a workshop that will be delivered at TensorFlow World, on using TensorFlow 2.0 to train a BERT model to suggest tags for questions that are asked online. Check out the full GitHub repository, or go through the higher-level overview below.

Demo Goal

In keeping with Microsoft’s emphasis on customer obsession, Azure engineering teams try to help answer user questions on online forums. Azure teams can only answer questions if we know that they exist, and one of the ways we are alerted to new questions is by watching for user-applied tags. Users might not always know the best tag to apply to a given question, so it would be helpful to have an AI agent to automatically suggest good tags for new questions.

We aim to train an AI agent to automatically tag new Azure-related questions.
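As a toy illustration of the tagging task only (this keyword lookup is a stand-in, not the BERT model the workshop trains, and the tag names are hypothetical):

```python
# Hypothetical keyword-to-tag map; the workshop replaces this naive
# lookup with a fine-tuned BERT classifier.
TAG_KEYWORDS = {
    "azure-functions": ["function app", "trigger", "serverless"],
    "azure-storage": ["blob", "storage account", "container"],
    "azure-vm": ["virtual machine", "vm size"],
}

def suggest_tags(question):
    """Return tags whose keywords appear in the question text."""
    text = question.lower()
    return sorted(tag for tag, keywords in TAG_KEYWORDS.items()
                  if any(kw in text for kw in keywords))
```

A learned model generalizes far beyond exact keyword matches, which is the point of fine-tuning BERT for this task.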

Training

First, check out the training notebook. After preparing our data in Azure Databricks, we train a Keras model on an Azure GPU cluster using the Azure Machine Learning service TensorFlow Estimator class. Notice how easy it is to integrate Keras, TensorFlow, and Azure’s compute infrastructure. We can easily monitor the progress of training with the run object.

Inferencing

Next, open up the inferencing notebook. Azure makes it simple to deploy your trained TensorFlow 2.0 model as a REST endpoint in order to get tags associated with new questions.
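To sketch what calling such an endpoint could look like (the endpoint URL is a placeholder and the JSON schema here is an assumption, not the repo's actual contract):

```python
import json
from urllib import request

def build_scoring_request(question, url="https://<your-endpoint>/score"):
    """Build an HTTP POST that sends a question to a scoring endpoint."""
    body = json.dumps({"data": [question]}).encode("utf-8")
    return request.Request(url, data=body,
                           headers={"Content-Type": "application/json"})

req = build_scoring_request("How do I resize a VM?")
# Against a real deployment: urllib.request.urlopen(req) returns the tags.
```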

Machine Learning Operations

Next, open up the Machine Learning Operations instructions. If we intend to use the model in a production setting, we can bring additional robustness to the pipeline with MLOps, a Microsoft offering that brings a DevOps mindset to machine learning. It enables multiple data scientists to work on the same model while ensuring that only models that meet certain criteria are put into production.

Next steps

TensorFlow 2.0 opens up exciting new horizons for practitioners of deep learning, both old and new. If you would like to get started, check out the following resources:

TensorFlow 2.0 announcement
TensorFlow estimator on Azure

Source: Azure