Accelerate supercomputing in the cloud with Cray ClusterStor
We’re excited to announce Cray ClusterStor in Azure, a dedicated solution to accelerate data processing of the most complex HPC jobs running in Azure.
Source: Azure
The Jupyter Notebook on HDInsight Spark clusters is useful when you need to quickly explore data sets, perform trend analysis, or try different machine learning models. Not being able to track the status of Spark jobs and intermediate data can make it difficult for data scientists to monitor and optimize what they are doing inside the Jupyter Notebook.
Source: Azure
NVIDIA’s T4 GPU, now available in regions around the world, accelerates a variety of cloud workloads, including high performance computing (HPC), machine learning training and inference, data analytics, and graphics. In January of this year, we announced the availability of the NVIDIA T4 GPU in beta, to help customers run inference workloads faster and at lower cost. Earlier this month at Google Next ‘19, we announced the general availability of the NVIDIA T4 in eight regions, making Google Cloud the first major provider to offer it globally.
A focus on speed and cost-efficiency
Each T4 GPU has 16 GB of GPU memory onboard, offers a range of precision (or data type) support (FP32, FP16, INT8, and INT4), and includes NVIDIA Tensor Cores for faster training and RTX hardware acceleration for faster ray tracing. Customers can create custom VM configurations that best meet their needs with up to four T4 GPUs, 96 vCPUs, 624 GB of host memory, and optionally up to 3 TB of in-server local SSD.
At time of publication, prices for T4 instances are as low as $0.29 per hour per GPU on preemptible VM instances. On-demand instances start at $0.95 per hour per GPU, with up to a 30% discount with sustained use discounts.
Tensor Cores for both training and inference
NVIDIA’s Turing architecture brings the second generation of Tensor Cores to the T4 GPU. Debuting in the NVIDIA V100 (also available on Google Cloud Platform), Tensor Cores support mixed precision to accelerate the matrix multiplication operations that are so prevalent in ML workloads. If your training workload doesn’t fully utilize the more powerful V100, the T4 offers the acceleration benefits of Tensor Cores at a lower price. This is great for large training workloads, especially as you scale up more resources to train faster, or to train larger models.
Tensor Cores also accelerate inference, or predictions generated by ML models, for low latency or high throughput. When Tensor Cores are enabled with mixed precision, T4 GPUs on GCP can accelerate inference on ResNet-50 over 10X with TensorRT when compared to running only in FP32. Considering its global availability and Google’s high-speed network, the NVIDIA T4 on GCP can effectively serve global services that require fast execution at an efficient price point. For example, Snap Inc. uses the NVIDIA T4 to create more effective algorithms for its global user base, while keeping costs low.
“Snap’s monetization algorithms have the single biggest impact to our advertisers and shareholders. NVIDIA T4-powered GPUs for inference on GCP will enable us to increase advertising efficacy while at the same time lower costs when compared to a CPU-only implementation.” —Nima Khajehnouri, Sr. Director, Monetization, Snap Inc.
The GCP ML infrastructure combines the best of Google and NVIDIA across the globe
You can get up and running quickly, training ML models and serving inference workloads on NVIDIA T4 GPUs, by using our Deep Learning VM images. These include all the software you’ll need: drivers, CUDA-X AI libraries, and popular AI frameworks like TensorFlow and PyTorch. We handle software updates, compatibility, and performance optimizations, so you don’t have to. Just create a new Compute Engine instance, select your image, click Start, and a few minutes later you can access your T4-enabled instance. You can also start with our AI Platform, an end-to-end development environment that helps ML developers and data scientists build, share, and run machine learning applications anywhere.
Once you’re ready, you can use Automatic Mixed Precision to speed up your workload via Tensor Cores with only a few lines of code.
Performance at scale
NVIDIA T4 GPUs offer value for batch compute HPC and rendering workloads, delivering dramatic performance and efficiency that maximizes the utility of at-scale deployments. A Princeton University neuroscience researcher had this to say about the T4’s unique price and performance:
“We are excited to partner with Google Cloud on a landmark achievement for neuroscience: reconstructing the connectome of a cubic millimeter of neocortex. It’s thrilling to wield thousands of T4 GPUs powered by Kubernetes Engine. These computational resources are allowing us to trace 5 km of neuronal wiring, and identify a billion synapses inside the tiny volume.” —Sebastian Seung, Princeton University
Quadro Virtual Workstations on GCP
T4 GPUs are also a great option for running virtual workstations for engineers and creative professionals. With NVIDIA Quadro Virtual Workstations from the GCP Marketplace, users can run applications built on the NVIDIA RTX platform to experience the next generation of computer graphics, including real-time ray tracing and AI-enhanced graphics, video, and image processing, from anywhere.
“Access to NVIDIA Quadro Virtual Workstation on the Google Cloud Platform will empower many of our customers to deploy and start using Autodesk software quickly, from anywhere. For certain workflows, customers leveraging NVIDIA T4 and RTX technology will see a big difference when it comes to rendering scenes and creating realistic 3D models and simulations. We’re excited to continue to collaborate with NVIDIA and Google to bring increased efficiency and speed to artist workflows.” —Eric Bourque, Senior Software Development Manager, Autodesk
Get started today
Check out our GPU page to learn more about how the wide selection of GPUs available on GCP can meet your needs. You can learn about customer use cases and the latest updates to GPUs on GCP in our Google Cloud Next 19 talk, GPU Infrastructure on GCP for ML and HPC Workloads. Once you’re ready to dive in, try running a few TensorFlow inference workloads by reading our blog or our documentation and tutorials.
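To make the Automatic Mixed Precision step mentioned above concrete, here is a minimal, hedged sketch assuming TensorFlow 1.14+ on a T4-enabled Deep Learning VM; the toy model exists purely for illustration, and setting the environment variable TF_ENABLE_AUTO_MIXED_PRECISION=1 achieves the same graph rewrite without code changes.
import tensorflow as tf  # TensorFlow 1.14+, as shipped on the Deep Learning VM images
# Toy regression model, purely for illustration
x = tf.random.normal([32, 10])
y = tf.random.normal([32, 1])
pred = tf.layers.dense(x, 1)
loss = tf.losses.mean_squared_error(y, pred)
opt = tf.train.AdamOptimizer(1e-3)
# One call enables Automatic Mixed Precision: eligible ops are rewritten to FP16
# so they run on Tensor Cores, with automatic loss scaling to preserve accuracy.
opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt)
train_op = opt.minimize(loss)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_op)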
Source: Google Cloud Platform
Our Cloud Storage Transfer Service lets you securely transfer data from Amazon S3 into Google Cloud Storage. Customers use the transfer service to move petabytes of data between S3 and Cloud Storage in order to access GCP services, and we’ve heard that you want to harden this transfer. Using VPC Service Controls, our method of defining security perimeters around sensitive data in Google Cloud Platform (GCP) services, lets you harden the security of this transfer by adding one or more additional layers to the process.
Let’s walk through how to use VPC Service Controls to securely move your data into Cloud Storage. This example uses a simple VPC Service Controls rule based on a service account, but these rules can become much more granular. The VPC Service Controls documentation walks through those advanced rules if you’d like to explore other examples. See some of those implementations here.
Along with moving data from S3, the Cloud Storage Transfer Service can move data between Cloud Storage buckets and from HTTP/HTTPS servers.
This tutorial assumes that you’ve set up a GCP account or the GCP free trial. Access the Cloud Console, then select or create a project and make sure billing is enabled.
Let’s move that data
Follow this process to move your S3 data into Cloud Storage.
Step 0: Create an AWS IAM user that can perform transfer operations, and make sure that the AWS user can access the S3 bucket for the files to transfer.
GCP needs to have access to the data source in Amazon S3. The AWS IAM user you create should have permissions to:
List the Amazon S3 bucket.
Get the location of the bucket.
Read the objects in the bucket.
You will also need to create at least one access/secret key pair for the transfer job. You can also choose to create a separate access/secret key pair for each transfer operation, depending on your business needs.
Step 1: Create your VPC Service Controls perimeter.
From within the GCP console, create your VPC Service Controls perimeter and enable all of the APIs that you want enabled within this perimeter.
Note that the VPC Service Controls page in the Cloud Console is not available by default, and the organization admin role does not have these permissions enabled by default. The organization admin will need to grant the Access Context Manager Admin role via the IAM page to whichever user(s) will be configuring your policies and service controls.
Step 2: Get the name of the service account that will be running the transfer operations.
This service account should be in the GCP project that will be initiating the transfers. This GCP project will not be in your controlled perimeter by design. The name of the service account looks like this: project-[ProjectID]@storage-transfer-service.iam.gserviceaccount.com
You can confirm the name of your service account using the API described here.
Step 3: Create an access policy in Access Context Manager.
Note: An organization node can only have one access policy.
If you create an access level via the console, it will create an access policy for you automatically. Or create a policy via the command line, like this:
gcloud access-context-manager policies create --organization ORGANIZATION_ID --title POLICY_TITLE
When the command is complete, you should see something like this:
Create request issued
Waiting for operation [accessPolicies/POLICY_NAME/create/1521580097614100] to complete…done.
Created.
Step 4: Create an access level based on the access policy that limits access to a user or service account.
This is where we create a simple example of an access level based on an access policy. This limits access into the VPC through the service account. Much more complex access level rules can be applied to the VPC. Here, we’ll walk through a simple example that can serve as the “Hello, world” of VPC Service Controls.
Step 4.1: Create a .yaml file that contains a condition listing the members that you want to grant access.
- members:
  - user:sysadmin@example.com
  - serviceAccount:service@project.iam.gserviceaccount.com
Step 4.2: Save the file and create the access level.
In this example, the file is named CONDITIONS.yaml. Next, create the access level:
gcloud access-context-manager levels create NAME --title TITLE --basic-level-spec CONDITIONS.yaml --combine-function=OR --policy=POLICY_NAME
You should then see output similar to this:
Create request issued for: NAME
Waiting for operation [accessPolicies/POLICY_NAME/accessLevels/NAME/create/1521594488380943] to complete…done.
Created level NAME.
Step 5: Bind the access level you created to the VPC Service Controls perimeter.
This step makes sure that the access level you just created is applied to the VPC that you are building the hardened perimeter around.
Step 6: Initiate the transfer operation.
Initiate the transfer from a project that is outside of the controlled perimeter into a Cloud Storage bucket that is in a project within the perimeter. This will only work when you use the service account with the access level you created in the previous steps.
That’s it! Your S3 data is now in Google Cloud Storage for you to manage, modify, or move further. Learn more about data transfer into GCP with these resources:
Creating an IAM User in your AWS Account
GCS Transfer Service Documentation
VPC Service Controls Documentation
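Step 6 can also be scripted. The sketch below is a hypothetical example using the Storage Transfer Service API through the Google API Python client; the project ID, bucket names, and key values are placeholders, and the call must be authenticated as the service account that was added to the access level (for example via GOOGLE_APPLICATION_CREDENTIALS).
import googleapiclient.discovery
# Build the Storage Transfer Service client. Run this authenticated as the
# service account that was granted the access level in the previous steps.
client = googleapiclient.discovery.build('storagetransfer', 'v1')
transfer_job = {
    'description': 'S3 to Cloud Storage transfer into a VPC Service Controls perimeter',
    'status': 'ENABLED',
    'projectId': 'my-outside-perimeter-project',          # project initiating the transfer
    'schedule': {
        'scheduleStartDate': {'year': 2019, 'month': 5, 'day': 6},
        'scheduleEndDate': {'year': 2019, 'month': 5, 'day': 6},   # same day = run once
    },
    'transferSpec': {
        'awsS3DataSource': {
            'bucketName': 'my-s3-bucket',
            'awsAccessKey': {
                'accessKeyId': 'AWS_ACCESS_KEY_ID',
                'secretAccessKey': 'AWS_SECRET_ACCESS_KEY',
            },
        },
        'gcsDataSink': {'bucketName': 'my-protected-bucket'},      # bucket inside the perimeter
    },
}
result = client.transferJobs().create(body=transfer_job).execute()
print('Created transfer job:', result['name'])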
Source: Google Cloud Platform
This blog post was co-authored by Liz Yu (Marketing), Bryden Oliver (Architect), Iain Shepard (Senior Software Engineer) at Spotlight Cloud, and Deborah Chen (Program Manager), Sri Chintala (Program Manager) at Azure Cosmos DB.
Spotlight Cloud is the first database performance monitoring solution built on Azure that focuses on SQL Server customers. Leveraging the scalability, performance, global distribution, high availability, and built-in security of Microsoft Azure Cosmos DB, Spotlight Cloud combines the best of the cloud with Quest Software’s engineering insights from years of building database performance management tools.
As a tool that delivers database insights that lead customers to higher availability, scalability, and faster resolution of their SQL solutions, Spotlight Cloud needed a database service that provided those exact requirements on the backend as well.
Using Azure Cosmos DB and Azure Functions, Quest was able to build a proof of concept within two months and deploy to production in less than eight months.
“Azure Cosmos DB will allow us to scale as our application scales. As we onboard more customers, we value the predictability in terms of performance, latency, and the availability we get from Azure Cosmos DB.”
– Patrick O’Keeffe, VP of Software Engineering, Quest Software
Spotlight Cloud requirements
The amount of data needed to support a business continually grows. As data scales, so does Spotlight Cloud, as it needs to analyze all that data. Quest’s developers knew they needed a highly available database service with the following requirements and at affordable cost:
Collect and store many different types of data and send it to an Azure-based storage service. The data comes from SQL Server DMVs, OS performance counter statistics, SQL plans, and other useful information. The data collected varies greatly in size (100 bytes to multiple megabytes) and shape.
Accept 1,200 operations/second on the data with the ability to continue to scale as more customers use Spotlight Cloud.
Query and return data to aid in the diagnosis and analysis of SQL Server performance problems quickly.
After a thorough evaluation of many products, Quest chose Azure Functions and Azure Cosmos DB as the backbone of their solution. Spotlight Cloud was able to leverage both Azure Function apps and Azure Cosmos DB to reduce cost, improve performance, and deliver a better service to their customers.
Solution
Part of the core data flow in Spotlight Cloud. Other technologies used, not shown, include Event Hub, Application Insights, Key Vault, Storage, DNS.
The core data processing flow within Spotlight Cloud is built on Azure Functions and Azure Cosmos DB. This technology stack provides Quest with the high scale and performance they need.
Scale
Ingest apps handle >1,000 sets of customer monitoring data per second. To support this, the Azure Functions consumption plan automatically scales out to hundreds of VMs.
Azure Cosmos DB provides guaranteed throughput for database and containers, measured in Request Units / second (RU/s), and backed by SLAs. By estimating the required throughput of the workload and translating it to RU/s, Quest was able to achieve predictable throughput of reads and writes against Azure Cosmos DB at any scale.
Performance
Azure Cosmos DB handles the write and read operations for Spotlight’s data in less than 60 milliseconds. This enables customers’ SQL Server data to be quickly ingested and made available for analysis in near real time.
High availability
Azure Cosmos DB provides 99.999% high availability SLA for reads and writes, when using 2+ regions. Availability is crucial for Spotlight Cloud’s customers, as many are in the healthcare, retail, and financial services industries and cannot afford to experience any database downtime or performance degradation. In the event a failover is needed, Azure Cosmos DB does automatic failover with no manual intervention, enabling business continuity.
With turnkey global distribution, Azure Cosmos DB handles automatic and asynchronous replication of data between regions. To take full advantage of their provisioned throughput, Quest designated one region to handle writes (data ingest) and another for reads. As a result, users’ read response times are never impacted by the write volume.
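As a hedged illustration of that read/write split (assuming the azure-cosmos 4.x Python SDK; the account URI, key, and region names are placeholders), the read path can pin its preferred region like this:
from azure.cosmos import CosmosClient
# Read-path client used by the Egress app: prefer the region designated for reads,
# so read latency is unaffected by ingest traffic hitting the write region.
read_client = CosmosClient(
    "https://spotlight-example.documents.azure.com:443/",  # placeholder account
    credential="<account-key>",
    preferred_locations=["East US 2"],                     # assumed read region
)
# Writes from the Ingest app always land in the account's designated write region;
# Azure Cosmos DB then replicates them asynchronously to the read region.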
Flexible schema
Azure Cosmos DB accepts JSON data of varying size and schema. This enabled Quest to store a variety of data from diverse sources, such as SQL Server DMVs, OS performance counter statistics, etc., and removed the need to worry about fixed schemas or schema management.
Developer productivity
Azure Functions tooling made the development and coding process very smooth, which enabled developers to be productive immediately. Developers also found Azure Cosmos DB’s SQL query language to be easy to use, reducing the ramp-up time.
Cost
The Azure Functions consumption pricing model charges only for the compute and memory each function invocation uses. Particularly for lower-volume microservices, this lets users operate at low cost. In addition, using Azure Functions on a consumption plan gives Quest the ability to have failover instances on standby at all times, and only incur cost if failover instances are actually used.
From a Total Cost of Ownership (TCO) perspective, Azure Cosmos DB and Azure Functions are both managed solutions, which reduced the amount of time spent on management and operations. This enabled the team to focus on building services that deliver direct value to their customers.
Support
Microsoft engineers are directly available to help with issues, provide guidance, and share best practices.
With Spotlight Cloud, Quest’s customers have the advantage of storing data in Azure instead of an on-premises SQL Server database. Customers also have access to all the analysis features that Quest provides in the cloud. For example, a customer can investigate the SQL workload and performance on their SQL Server in great detail to optimize the data and queries for their users – all powered by Spotlight Cloud running on top of Azure Cosmos DB.
"We were looking to upgrade our storage solution to better meet our business needs. Azure Cosmos DB gave us built-in high availability and low latency, which allowed us to improve our uptime and performance. I believe Azure Cosmos DB plays an important role in our Spotlight Cloud to enable customers to access real-time data fast."
– Efim Dimenstein, Chief Cloud Architect, Quest Software
Deployment Diagram of Spotlight Cloud’s Ingest and Egress app
In the diagram above, data is routed to an available Ingest app by Traffic Manager. The Ingest app writes data into the Azure Cosmos DB write region. Data consumers are routed via Traffic Manager to the Egress app, which then reads data from the Azure Cosmos DB read region.
Learnings and best practices
In building Spotlight Cloud, Quest gained a deep understanding into how to use Azure Cosmos DB in the most effective way:
Understand Azure Cosmos DB’s provisioned throughput model (RU/s)
Quest measured the cost of each operation, the number of operations/second, and provisioned the total amount of throughput required in Azure Cosmos DB.
Since Azure Cosmos DB cost is based on storage and provisioned throughput, choosing the right amount of RUs was key to using Azure Cosmos DB in a cost effective manner.
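As a rough illustration of that sizing exercise, a back-of-the-envelope calculation might look like the sketch below; the per-operation RU charges are illustrative placeholders, not Quest's measured values.
# Estimate total RU/s to provision from measured per-operation charges and rates.
write_ru_per_op = 10      # e.g. RU charge observed for a typical document write
read_ru_per_op = 3        # e.g. RU charge observed for a point read
writes_per_sec = 1200     # ingest rate cited earlier in this post
reads_per_sec = 200       # assumed query rate
required_rus = writes_per_sec * write_ru_per_op + reads_per_sec * read_ru_per_op
provisioned_rus = int(required_rus * 1.2)   # ~20% headroom for spikes
print(f"Provision roughly {provisioned_rus} RU/s")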
Choose a good partition strategy
Quest chose a partition key for their data that resulted in a balanced distribution of request volume and storage. This is critical because Azure Cosmos DB shards data horizontally and distributes total provisioned RUs evenly among the partitions of data.
During the development stage, Quest experimented with several choices of partition key and measured the impact on the performance. If a partition key strategy was unbalanced, a workload would require more RUs than with a balanced partition strategy.
Quest chose a synthetic partition key that incorporated Server Id and type of data being stored. This gave a high number of distinct values (high cardinality), leading to an even distribution of data – crucial for a write heavy workload.
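A minimal sketch of such a synthetic key is shown below; the document shape and property names are hypothetical, not Spotlight Cloud's actual schema.
# Combine server ID and data type into a single synthetic partition key value.
def make_partition_key(server_id: str, data_type: str) -> str:
    return f"{server_id}|{data_type}"
document = {
    "id": "2019-05-06T12:00:00Z-cpu",
    "partitionKey": make_partition_key("server-42", "os-perf-counters"),
    "payload": {"counter": "cpu_percent", "value": 37.5},
}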
Tune indexing policy
For Quest’s write-heavy workload, tuning index policy and RU cost on writes was key to achieving good performance. To do this, Quest modified the Azure Cosmos DB indexing policy to explicitly index commonly queried properties in a document and exclude the rest. In addition, Quest included only a few commonly used properties in the body of the document and encoded the rest of the data into a single property.
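Below is a hedged example of what such an indexing policy can look like; the property paths are hypothetical. Only the explicitly included paths are indexed, and everything else, including the single encoded payload property, is excluded.
# Example Azure Cosmos DB indexing policy expressed as a Python dict
# (it can be supplied when creating a container through the SDK or portal).
indexing_policy = {
    "indexingMode": "consistent",
    "includedPaths": [
        {"path": "/partitionKey/?"},
        {"path": "/serverId/?"},
        {"path": "/timestamp/?"},
    ],
    "excludedPaths": [
        {"path": "/*"},   # everything else, including the encoded payload blob
    ],
}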
Scale up and down RUs based on data access pattern
In Spotlight Cloud, customers tend to access recent data more frequently than the older data. At the same time, new data continues to be written in a steady stream, making it a write-heavy workload.
To tune the overall provisioned RUs of the workload, Quest split the data into multiple containers. A new container is created regularly (e.g. every week to a few months) with high RUs, ready to receive writes.
Once the next new container is ready, the previous container’s RUs are reduced to only what is required to serve the expected read operations. Writes are then directed to the new container with its high number of RUs.
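A minimal sketch of this container-rotation pattern, assuming the azure-cosmos 4.x Python SDK; the account, container names, and RU values are placeholders.
from azure.cosmos import CosmosClient, PartitionKey
client = CosmosClient("https://spotlight-example.documents.azure.com:443/", credential="<account-key>")
db = client.get_database_client("spotlight")
# New "hot" container created with high throughput, ready to receive writes.
hot = db.create_container(
    id="metrics-2019-w19",
    partition_key=PartitionKey(path="/partitionKey"),
    offer_throughput=50000,
)
# The previous container now serves only reads, so scale its throughput down.
cold = db.get_container_client("metrics-2019-w18")
cold.replace_throughput(4000)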
Tour of Spotlight Cloud’s user interface
About Quest
Quest has provided software solutions for the fast-paced world of enterprise IT since 1987. They are a global provider to 130,000 companies across 100 countries, including 95 percent of the Fortune 500 and 90 percent of the Global 1000.
Find out more about Spotlight Cloud on Twitter, Facebook, and LinkedIn.
Source: Azure
Everyone is after just one thing: an abstract block in the 3.6 GHz range. That has been clear since round 109 of the Bundesnetzagentur's 5G spectrum auction. We are now at round 261. (Bundesnetzagentur, Telekom)
Source: Golem
Spark + AI Summit | Preview | GA | News & updates | Technical content | Azure shows | Events | Customers, partners, and industries
Spark + AI Summit 2019
Spark + AI Summit – Developing for the intelligent cloud and intelligent edge
Last week at Spark + AI Summit 2019, Microsoft announced joining the open source MLflow project as an active contributor. Developers can use the standard MLflow tracking API to track runs and deploy models directly into Azure Machine Learning service. We also announced that managed MLflow is generally available on Azure Databricks and will use Azure Machine Learning to track the full ML lifecycle. The combination of Azure Databricks and Azure Machine Learning makes Azure the best cloud for machine learning. Databricks also open sourced Databricks Delta, through which Azure Databricks customers get greater reliability, improved performance, and the ability to simplify their data pipelines. Lastly, .NET for Apache Spark is available in preview; it is a free, open-source, .NET Standard-compliant, cross-platform big data analytics framework.
Dear Spark developers: Welcome to Azure Cognitive Services
With only a few lines of code you can start integrating the power of Azure Cognitive Services into your big data workflows on Apache Spark™. The Spark bindings offer high throughput and run anywhere you run Spark. The Cognitive Services on Spark fully integrate with containers for high-performance, on-premises, or low-connectivity scenarios. Finally, we have provided a general framework for working with any web service on Spark. You can start leveraging the Cognitive Services for your project with our open source initiative MMLSpark on Azure Databricks.
Now in preview
Securing Azure SQL Databases with managed identities just got easier
Announcing the second preview release of the Azure Services App Authentication library, version 1.2.0. This release enables simple and seamless authentication to Azure SQL Database for existing .NET applications with no code changes, only configuration changes. Try out the new functionality in existing SQL-backed solutions and gain the security benefits that the App Authentication library and managed identities afford.
Now generally available
Announcing Azure Backup support to move Recovery Services vaults
Announcing the general availability of the move functionality for Recovery Services vaults, an Azure Resource Manager resource used to manage your backup and disaster recovery needs natively in the cloud. Migrate a vault between subscriptions and resource groups in a few steps, with minimal downtime and without any loss of old backups. Move a Recovery Services vault and retain recovery points of protected virtual machines (VMs) to restore to any point in time later.
Azure SQL Data Warehouse reserved capacity and software plans now generally available
Announcing the general availability of Azure SQL Data Warehouse reserved capacity and software plans for Red Hat Enterprise Linux and SUSE. Purchase reserved capacity for Azure SQL Data Warehouse and get up to a 65 percent discount over pay-as-you-go rates. Select from 1-year or 3-year pre-commit options. Purchase plans for Red Hat Enterprise Linux and save up to 18 percent. Plans are only available for Red Hat Enterprise Linux virtual machines, and the discount does not apply to Red Hat Enterprise Linux SAP HANA VMs or Red Hat Enterprise Linux SAP Business Apps VMs. Save up to 64 percent on your SUSE software costs. SUSE plans get the auto-fit benefit, so you can scale your SUSE VM sizes up or down and the reservations will continue to apply. In addition, there is a new experience to purchase reservations and software plans, including REST APIs to purchase Azure reservations and software plans.
Azure Cost Management now generally available for Pay-As-You-Go customers
Announcing the general availability of Azure Cost Management features for all Pay-As-You-Go and Azure Government customers, which will greatly enhance your ability to analyze and proactively manage your cloud costs. These features enable you to analyze your cost data, configure budgets to drive accountability for cloud costs, and export pre-configured reports on a schedule to support deeper data analysis within your own systems. This release for Pay-As-You-Go customers also provides invoice reconciliation support in the Azure portal via a usage CSV download of all charges applicable to your invoices.
News and updates
Microsoft container registry unaffected by the recent Docker Hub data exposure
Docker recently announced Docker Hub had a brief security exposure that enabled unauthorized access to a Docker Hub database, exposing 190k Hub accounts and their associated GitHub tokens for automated builds. While initial information led people to believe the hashes of the accounts could lead to image:tags being updated with vulnerabilities, including official and microsoft/ org images, this was not the case. Microsoft has confirmed that the official Microsoft images hosted in Docker Hub have not been compromised. Regardless of which cloud you use, or if you are working on-premises, importing production images to a private registry is a best practice that puts you in control of the authentication, availability, reliability and performance of image pulls.
AI for Good: Developer challenge
Do you have an idea that could improve and empower the lives of everyone in a more accessible way? Or perhaps you have an idea that would help create a sustainable balance between modern society and the environment? Even if it’s just the kernel of an idea, it’s a concept worth exploring with the AI for Good Idea Challenge. If you’re a developer, a data scientist, a student of AI, or even just passionate about AI and machine learning, we encourage you to take part in the AI for Good: Developer challenge and improve the world by sharing your ideas.
Azure Notification Hubs and Google’s Firebase Cloud Messaging Migration
When Google announced its migration from Google Cloud Messaging (GCM) to Firebase Cloud Messaging (FCM), push services like Azure Notification Hubs had to adjust how notifications are sent to Android devices to accommodate the change. If your app uses the GCM library, follow Google’s instructions to upgrade to the FCM library in your app. Our SDK is compatible with either. As long as you’re up to date with our SDK version, you won’t have to update anything in your app on our side.
Governance setting for cache refreshes from Azure Analysis Services
Data visualization and consumption tools over Azure Analysis Services (Azure AS) sometimes store data caches to enhance report interactivity for users. The Power BI service, for example, caches dashboard tile data and report data for initial load for Live Connect reports. This post introduces the new governance setting called ClientCacheRefreshPolicy to disable automatic cache refreshes.
Azure Updates
Learn about important Azure product updates, roadmap, and announcements. Subscribe to notifications to stay informed.
Technical content
Best practices in migrating SAP applications to Azure – part 1
This post touches upon the principles outlined in Pillars of a great Azure architecture as they pertain to building your SAP on Azure architecture in readiness for your migration.
Best practices in migrating SAP applications to Azure – part 2
Part 2 covers a common scenario in which SAP customers can experience the speed and agility of the Azure platform: migrating from SAP Business Suite running on-premises to SAP S/4HANA in the cloud.
Use Artificial Intelligence to Suggest 1-5 Star Ratings
When customers are impressed or dissatisfied with a product, they come back to where it was purchased looking for a way to leave feedback. See how to use artificial intelligence (Cognitive Services) to suggest star ratings based on sentiment, detected as customers write positive or negative words in their product reviews. Learn about CogS, sentiment analysis, and Azure Functions through a full tutorial, as well as where to go to learn more and set up a database to store and manage submissions.
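As a hedged sketch of the idea (not the tutorial's actual code), the snippet below calls the Text Analytics v2.1 sentiment endpoint and maps the returned score to a 1-5 star suggestion; the endpoint region and key are placeholders.
import requests
endpoint = "https://<your-region>.api.cognitive.microsoft.com/text/analytics/v2.1/sentiment"
headers = {"Ocp-Apim-Subscription-Key": "<your-key>"}
body = {"documents": [{"id": "1", "language": "en",
                       "text": "Great product, works exactly as described!"}]}
# The service returns a sentiment score between 0.0 (negative) and 1.0 (positive).
score = requests.post(endpoint, headers=headers, json=body).json()["documents"][0]["score"]
stars = min(5, int(score * 5) + 1)   # 0.0-0.19 -> 1 star, ..., 0.8-1.0 -> 5 stars
print(f"Sentiment {score:.2f} -> suggested rating: {stars} stars")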
You should never ever run directly against Node in production. Maybe.
Running directly against Node in production might cause your app to crash. To prevent this, you can run Node with a monitoring tool, or you can monitor your applications themselves.
Configure Azure Site Recovery from Windows Admin Center
Learn how to use Windows Admin Center to configure Azure Site Recovery to be able to replicate virtual machines to Azure, which you can use for protection and Disaster Recovery, or even migration.
Connecting Twitter and Twilio with Logic Apps to solve a parking problem
See how Tim Heuer refactored a solution to a common problem NYC residents face (trying to figure out street-side parking rules) using Azure Logic Apps and the provided connectors for Twitter and Twilio to accomplish the same thing.
Creating an Image Recognition Solution with Azure IoT Edge and Azure Cognitive Services
Dave Glover demonstrates how one can use Azure Custom Vision and Azure IoT Edge to build a self-service checkout experience for visually impaired people, all without needing to be a data scientist. The solution is extended with a Python Azure Function, SignalR, and a static website single-page app.
Get Azure Pipeline Build Status with the Azure CLI
For those who prefer the command line, it's possible to interact with Azure DevOps using the Azure CLI. Neil Peterson takes a quick look at the configuration and basic functionality of the CLI extension as related to Azure Pipelines.
dotnet-azure : A .NET Core global tool to deploy an application to Azure in one command
The options for pushing your .NET Core application to the cloud are not lacking depending on what IDE or editor you have in front of you. But what if you just wanted to deploy your application to Azure with a single command? Shayne Boyer shows you how to do just that with the dotnet-azure global tool.
Detecting threats targeting containers with Azure Security Center
More and more services are moving to the cloud and they bring their security challenges with them. In this blog post, we will focus on the security concerns of container environments. This post goes over several security concerns in containerized environments, from the Docker level to the Kubernetes cluster level, and shows how Azure Security Center can help you detect and mitigate threats in the environment as they’re occurring in real time.
Customize your Azure best practice recommendations in Azure Advisor
Cloud optimization is critical to ensuring you get the most out of your Azure investment, especially in complex environments with many Azure subscriptions and resource groups. Learn how Azure Advisor helps you optimize your Azure resources for high availability, security, performance, and cost by providing free, personalized recommendations based on your Azure usage and configurations.
5 tips to get more out of Azure Stream Analytics Visual Studio Tools
Azure Stream Analytics is an on-demand real-time analytics service to power intelligent action. Azure Stream Analytics tools for Visual Studio make it easier for you to develop, manage, and test Stream Analytics jobs. This post introduces capabilities and features to help you improve productivity that were included in two major updates from earlier this year: test partial scripts locally; share inputs, outputs, and functions across multiple scripts; duplicate a job to other regions; local input schema auto-completion; and testing queries against SQL database as reference data.
Azure Tips and Tricks – Become more productive with Azure
Since its inception in 2017, the Azure Tips & Tricks collection has grown to more than 200 tips, as well as videos, conference talks, and several eBooks spanning the entire breadth of the Azure platform. Featuring a new weekly tip and video, it is designed to help you boost your productivity with Azure, and all tips are based on practical real-world scenarios. This post re-introduces the Azure Tips and Tricks web resource, which helps developers already using Azure learn something new within a couple of minutes.
Optimize performance using Azure Database for PostgreSQL Recommendations
You no longer have to be a database expert to optimize your database. Make your job easier and start taking advantage of recommendations for Microsoft Azure Database for PostgreSQL today. By analyzing the workloads on your server, the recommendations feature gives you daily insights about the Azure Database for PostgreSQL resources that you can optimize for performance. These recommendations are tightly integrated with Azure Advisor to provide you with best practices directly within the Azure portal.
Azure shows
Episode 275 – Azure Foundations | The Azure Podcast
Derek Martin, a Technology Solutions Principal (TSP) at Microsoft, talks about his approach to ensuring that customers get the foundational elements of Azure in place first before deploying anything else. He discusses why Microsoft is getting more opinionated, as a company, when advocating for best practices.
Code-free modern data warehouse using Azure SQL DW and Data Factory | Azure Friday
Gaurav Malhotra joins Scott Hanselman to show how to build a modern data warehouse solution from ingress of structured, unstructured, semi-structured data to code-free data transformation at scale and finally to extracting business insights into your Azure SQL Data Warehouse.
Serverless automation using PowerShell in Azure Functions | Azure Friday
Eamon O'Reilly joins Scott Hanselman to show how PowerShell in Azure Functions makes it possible for you to automate operational tasks and take advantage of the native Azure integration to deliver and maintain services.
Meet our Azure IoT partners: Accenture | Internet of Things Show
Mukund Ghangurde is part of the Industry x.0 practice at Accenture focused on driving digital transformation and digital reinvention with industry customers. Mukund joined us on the IoT Show to discuss the scenarios he is seeing in the industry where IoT is really transforming businesses and how (and why) Accenture is partnering with Azure IoT to accelerate and scale their IoT solutions.
Doing more with Logic Apps | Block Talk
Integration with smart contracts is a common topic with developers. Whether with apps, data, messaging or services there is a desire to connect the functions and events of smart contracts in an end to end scenario. In this episode, we look at the different types of scenarios and look at the most common use case – how to quickly expose your smart contract functions as microservices with the Ethereum Blockchain Connector for Logic Apps or Flow.
How to get started with Azure API Management | Azure Tips and Tricks
Learn how to get started with Azure API Management, a service that helps protect and manage your APIs.
How to create a load balancer | Azure Portal Series
Learn how to configure load balancers and how to add virtual machines to them in the Azure Portal.
Rockford Lhotka on Software Architecture | The Azure DevOps Podcast
This week, Jeffrey Palermo and Rocky Lhotka are discussing software architecture. They discuss what Rocky is seeing transformation-wise on both the client side and server side, explore the spectrum of containers vs. virtual machines vs. PaaS vs. Azure Functions, and take a look at microservice architecture. Rocky also gives his tips and recommendations for companies who identify as .NET shops, and whether you should go with containers or PaaS.
Episode 8 – Partners Help The Azure World Go Round | AzureABILITY Podcast
Microsoft’s vast partner ecosystem is a big part of the Azure value proposition. Listen in as Microsoft Enterprise Channel Manager Christine Schanne and Louis Berman delve into the partner experience with Neudesic, a top Gold Microsoft Partner.
Events
Get ready for Global Azure Bootcamp 2019
Global Azure Bootcamp is a free, one-day, local event that takes place globally. This annual event, which is run by the Azure community, took place this past Saturday, April 27, 2019. Each year, thousands attend these free events to expand their knowledge about Azure using a variety of formats as chosen by each location. Did you attend?
Connecting Global Azure Bootcampers with a cosmic chat app
We added a little “cosmic touch” to the Global Azure Bootcamp this past weekend by enabling attendees to greet each other with a chat app powered by Azure Cosmos DB. For a chat app, this means low latency in the ingestion and delivery of messages. To achieve that, we deployed our web chat over several Azure regions worldwide and let Azure Traffic Manager route users’ requests to the nearest region where our Cosmos database was deployed to bring data close to the compute and the users being served. That was enough to yield near real-time message delivery performance as we let Azure Cosmos DB replicate new messages to each covered region.
Customers, partners, and industries
Connect IIoT data from disparate systems to unlock manufacturing insights
Extracting insights from multiple data sources is a new goal for manufacturers. Industrial IoT (IIoT) data is the starting point for new solutions, with the potential for giving manufacturers a competitive edge. These systems contain vast and vital kinds of information, but they run in silos. This data is rarely correlated and exchanged. To help solve this problem, Altizon created the Datonis Suite, which is a complete industrial IoT solution for manufacturers to leverage their existing data sources.
Migrating SAP applications to Azure: Introduction and our partnership with SAP
Just over 25 years ago, Bill Gates and Hasso Plattner met to form an alliance between Microsoft and SAP that has become one of our industry’s longest lasting alliances. At the time their conversation was focused on how Windows could be the leading operating system for SAP’s SAPGUI desktop client and when released a few years later, how Windows NT could be a server operating system of choice for running SAP R/3. Ninety percent of today’s Fortune 500 customers use Microsoft Azure and an estimated 80 percent of Fortune customers run SAP solutions, so it makes sense why SAP running on Azure is a key joint initiative between Microsoft and SAP. Over the next three weeks leading up to this year’s SAPPHIRENOW conference in Orlando, we’re publishing an SAP on Azure technical blog series (see Parts 1 & 2 in Technical content above).
Azure Marketplace new offers – Volume 35
The Azure Marketplace is the premier destination for all your software needs – certified and optimized to run on Azure. Find, try, purchase, and provision applications & services from hundreds of leading software providers. You can also connect with Gold and Silver Microsoft Cloud Competency partners to help your adoption of Azure. In the first half of March, we published 68 new offers.
Azure Government Secret Regions, Azure Batch updates & Service Fabric Mesh new additions | Azure This Week – A Cloud Guru
This time on Azure This Week, Lars covers new Azure Government Secret regions and the new updates to Azure Batch. He also talks about new additions to Service Fabric Mesh. Check it out!
Source: Azure
For a few days now, an experimental build of No Man's Sky that supports the Vulkan graphics API has been available, at least on Steam. Compared with OpenGL, the frame rate increases by up to two thirds, though only on AMD graphics cards, not on GeForce models. By Marc Sauter (No Man's Sky, Steam)
Source: Golem
An online tool will enable IT operators to experience Mirantis Cloud Platform’s flexible, model-driven approach to cloud lifecycle management
Open Infrastructure Summit, Denver, CO — April 29, 2019 – Today, Mirantis announced a web-based SaaS application that enables users to quickly deploy a compact cloud and experience the flexibility and agility of Infrastructure-as-Code. Available next month, Model Designer for Mirantis Cloud Platform (MCP) helps infrastructure operators build customized, curated, exclusively open source configurations for on-premise cloud environments.
Mirantis Cloud Platform employs a unique approach to deployment and management of on-premise cloud environments, where the entire cloud configuration is expressed as code in a highly granular fashion. That configuration is then provided as input to a deployment tool, called DriveTrain, which validates the configuration data and deploys the cloud accordingly.
“Our customers love the flexibility and granular infrastructure control that MCP offers, but for many, the learning curve associated with building an initial cluster model using YAML files is simply too steep,” said Boris Renski, co-founder and CMO, Mirantis. “Model Designer provides the necessary guardrails, making it easier for anyone to get started with MCP, without compromising on the flexibility they may require down the road as they expand their cloud footprint.”
Model Designer will enable users to specify the degree of configurability they require for their on-premise cloud use case. With the basic configurability level (humorously called “I am too young to die”), Model Designer automatically pre-populates most of the cluster settings with default, pre-tested values. On the other side of the spectrum, Model Designer will offer the “Ultraviolence” configurability setting, where users are able to tweak virtually every aspect of their on-premises cloud. The resulting configuration models generated by the Model Designer are then fed into DriveTrain, which combines them with security-tested OpenStack and Kubernetes software binary packages to deploy or update the end user’s cloud environment.
“A declarative approach to operating hybrid cloud infrastructure is a pattern that the community contributed to significantly with projects like Airship,” said Jonathan Bryce, executive director of the OpenStack Foundation. “Model Designer aims to make declarative infrastructure operations accessible to the masses, and it’s a positive sign to see vendors driving adoption for design concepts pioneered by open source communities.”
Model Designer is expected to be generally available in May 2019. Businesses interested in participating in a private beta can sign up here.
About Mirantis
Mirantis helps enterprises and telcos address key challenges with running Kubernetes on-premises with pure open source software. The company employs a unique build-operate-transfer delivery model to bring its flagship product, Mirantis Cloud Platform (MCP), to customers. MCP features full-stack enterprise support for Kubernetes and OpenStack and helps companies run optimized hybrid environments supporting traditional and distributed microservices-based applications in production at scale.
To date, Mirantis has helped more than 200 enterprises and service providers build and operate some of the largest open clouds in the world. Its customers include iconic brands such as Adobe, Comcast, Reliance Jio, State Farm, STC, Vodafone, Volkswagen, and Wells Fargo. Learn more at www.mirantis.com.
###
Contact information:
Joseph Eckert for Mirantis
jeckertflak@gmail.com
Source: Mirantis
As companies of all sizes move their assets and workloads to the cloud, there’s a clear need to provide more powerful ways to manage, govern, and automate their cloud resources. Such automation scenarios require custom logic best expressed in PowerShell. They are also typically executed either on a schedule or when an event happens like an alert on an application, a new resource getting created, or when an approval happens in an external system.
Azure Functions is a perfect match to address such scenarios as it provides an application development model based on triggers and bindings for accelerated development and serverless hosting of applications. PowerShell support in Functions has been a common request from customers, given its event-based capabilities.
Today, we are pleased to announce that we have brought the benefits of this model to automating operational tasks across Azure and on-premises systems with the preview release of PowerShell support in Azure Functions.
Companies all over the world have been using PowerShell to automate their cloud resources in their organization, as well as on-premises, for years. Most of these scenarios are based on events that happen on the infrastructure or application that must be immediately acted upon in order to meet service level agreements and time to recovery.
With the release of PowerShell support in Azure Functions, it is now possible to automate these operational tasks and take advantage of the native Azure integration to modernize the delivery and maintenance of services.
PowerShell support in Azure Functions is built on the 2.x runtime and uses PowerShell Core 6, so your automation can be developed on Windows, macOS, and Linux. It also integrates natively with Azure Application Insights to give full visibility into each function execution. Previously, Azure Functions had experimental PowerShell support in 1.x, and it is highly recommended that customers move their 1.x PowerShell functions to the latest runtime.
PowerShell in Azure Functions has all the benefits of other languages including:
Native bindings to respond to Azure monitoring alerts, resource changes through Event Grid, HTTP or Timer triggers, and more.
Portal and Visual Studio Code integration for authoring and testing of the scripts.
Integrated security to protect HTTP triggered functions.
Support for hybrid connections and VNet to help manage hybrid environments.
Run in an isolated local environment.
Additionally, functions written with PowerShell have the following capabilities to make it easier to manage Azure resources through automation.
Automatic management of Azure modules
Azure modules are natively available for your scripts, so you can manage services available in Azure without having to include these modules with each function created. Critical and security updates to these Az modules will be applied automatically by the service when new minor versions are released.
You can enable this feature through the host.json file by setting “Enabled” to true for managedDependency and updating Requirements.psd1 to include Az. These are automatically set when you create a new function app using PowerShell.
host.json
{
  "version": "2.0",
  "managedDependency": {
    "Enabled": "true"
  }
}
Requirements.psd1
@{
  Az = '1.*'
}
Authenticating against Azure services
When enabling a managed identity for the function app, the PowerShell host can automatically authenticate using this identity, giving functions permission to take actions on services that the managed identity has been granted access to. The profile.ps1 is processed when a function app is started and enables common commands to be executed. By default, if a managed identity is enabled, the function application will authenticate with Connect-AzAccount -Identity.
Common automation scenarios in Azure
PowerShell is a great language for automating tasks, and with its availability in Azure Functions, customers can now seamlessly author event-based actions across all services and applications running in Azure. Below are some common scenarios:
Integration with Azure Monitor to process alerts generated by Azure services.
React to Azure events captured by Event Grid and apply operational requirements on resources.
Leverage Logic Apps to connect to external systems like IT service management, DevOps, or monitoring systems while processing the payload with a PowerShell function.
Perform scheduled operational tasks on virtual machines, SQL Server, Web Apps, and other Azure resources.
Next steps
PowerShell support in Azure Functions is available in preview today, check out the following resources and start trying it out:
Learn more about using PowerShell in Azure Functions in the documentation, including quick starts and common samples to help get started.
Sign up for an Azure free account if you don’t have one yet, and build your first function using PowerShell.
You can reach the Azure Functions team on Twitter and on GitHub. For specific feedback on the PowerShell language, please review its Azure Functions GitHub repository.
We also actively monitor StackOverflow and UserVoice, so feel free to ask questions or leave your suggestions. We look forward to hearing from you!
Learn more about automation and PowerShell in Functions on Azure Friday and Microsoft Mechanics.
Source: Azure