Microsoft’s Azure Cosmos DB is named a leader in the Forrester Wave: Big Data NoSQL

We’re excited to announce that Forrester has named Microsoft as a Leader in The Forrester Wave™: Big Data NoSQL, Q1 2019 based on their evaluation of Azure Cosmos DB. We believe Forrester’s findings validate the exceptional market momentum of Azure Cosmos DB and how happy our customers are with the product.

NoSQL platforms are on the rise

According to Forrester, “half of global data and analytics technology decision makers have either implemented or are implementing NoSQL platforms, taking advantage of the benefits of a flexible database that serves a broad range of use cases…While many organizations are complementing their relational databases with NoSQL, some have started to replace them to support improved performance, scale, and lower their database costs.”

Azure Cosmos DB has market momentum

Azure Cosmos DB is Microsoft's globally distributed, multi-model database service for mission-critical workloads. It provides turnkey global distribution with unlimited endpoint scalability, elastic scaling of throughput and storage worldwide (at multiple granularities, e.g., database, key-space, tables, and collections), single-digit millisecond latencies at the 99th percentile, five well-defined consistency models, and guaranteed high availability, all backed by industry-leading, comprehensive SLAs. Azure Cosmos DB automatically indexes all data without requiring developers to deal with schema or index management. It is a multi-model service, natively supporting document, key-value, graph, and column-family data models. As a service born natively in the cloud, Azure Cosmos DB is carefully engineered with multitenancy and global distribution from the ground up. As a foundational service in Azure, it is ubiquitous, running in all public regions, DoD, and sovereign clouds, with an industry-leading list of compliance certifications and enterprise-grade security – all at no extra cost.

Azure Cosmos DB’s unique approach of providing wire-protocol-compatible APIs for popular open-source databases ensures that you can continue to use Azure Cosmos DB in a cloud-agnostic manner while still leveraging a robust database platform natively designed for the cloud. You get the flexibility to run your Cassandra, Gremlin, and MongoDB apps fully managed with no vendor lock-in. While Azure Cosmos DB exposes APIs for the popular open-source databases, it does not rely on the implementations of those databases for realizing the semantics of the corresponding APIs.

According to the Forrester report, Azure Cosmos DB is starting to achieve strong traction and “Its simplified database with relaxed consistency levels and low-latency access makes it easier to develop globally distributed apps.” Forrester mentioned specifically that “Customer references like its resilience, low maintenance, cost effectiveness, high scalability, multi-model support, and faster time-to-value.”

Forrester notes Azure Cosmos DB’s global availability across all Azure regions and how customers use it for operational apps, real-time analytics, streaming analytics, and Internet-of-Things (IoT) analytics. Azure Cosmos DB powers many worldwide enterprises and Microsoft services such as Xbox, Skype, Teams, Azure, Office 365, and LinkedIn.

To fulfill their vision, in addition to operational data processing, organizations using Azure Cosmos DB increasingly invest in artificial intelligence (AI) and machine learning (ML) running on top of globally distributed data in Azure Cosmos DB. Azure Cosmos DB enables customers to seamlessly build, deploy, and operate low-latency machine learning solutions on planet-scale data. Deep integration between Spark and Azure Cosmos DB enables the end-to-end ML workflow – managing, training, and inference of machine learning models on top of multi-model, globally distributed data for time-series forecasting, deep learning, predictive analytics, fraud detection, and many other use cases.

Azure Cosmos DB’s commitment

We are committed to making Azure Cosmos DB the best globally distributed database for all businesses and modern applications. With Azure Cosmos DB, we believe that you will be able to write amazingly powerful, intelligent, modern apps and transform the world.

If you are using our service, please feel free to reach out to us at AskCosmosDB@microsoft.com any time. If you are not yet using Azure Cosmos DB, you can try Azure Cosmos DB for free today; no sign-up or credit card is required. If you need any help or have questions or feedback, please reach out to us any time. For the latest Azure Cosmos DB news and features, stay up to date by following us on Twitter #CosmosDB, @AzureCosmosDB. We look forward to seeing what you will build with Azure Cosmos DB!

Download the full Forrester report and learn more about Azure Cosmos DB.
Source: Azure

Azure Stack IaaS – part five

Self-service is core to Infrastructure-as-a-Service (IaaS). Back in the virtualization days, you had to wait for someone to create a VLAN for you, carve out a LUN, and find space on a host. If Microsoft Azure ran that way, we would have needed to hire more and more admins as our cloud business grew.

Do it yourself

A different approach was required, which is why IaaS is important. Azure's IaaS gives the owner of the subscription everything they need to create virtual machines (VMs) and other resources on their own, without involving an administrator. To learn more visit our documentation, “Introduction to Azure Virtual Machines” and “Introduction to Azure Stack virtual machines.”

Let me give you a few examples that show Azure and Azure Stack self-service management of VMs.

Deployment

Creating a VM is as simple as going through a wizard. You can create the VM by specifying everything it needs in the “Create virtual machine” blade: the operating system image or marketplace template, the size (memory, CPUs, number of disks, and NICs), high availability, storage, networking, monitoring, and even in-guest configuration.

Learn more by visiting the following resources:

Deploy Azure Linux VM – five minute quickstart
Deploy Azure Windows VM – five minute quickstart
Azure Stack VM Sizes
Azure Stack Marketplace
Azure Stack Supported Guest OSes
Azure Stack VM Considerations
Azure Stack Networking Considerations

Daily operations

That’s great for deployment, but what about later down the road when you need to quickly change the VM? Azure and Azure Stack have you covered there too. The settings section of the VM lets you make changes to networking, disks, size (CPUs and memory), in-guest configuration extensions, high availability, and more.

One thing that was always a pain in the virtualization days was getting the right firewall ports open. Now you can manage this on your own without waiting on the networking team. In Azure and Azure Stack, firewall rules are called network security groups, and they can all be configured in a self-service manner.
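As an illustrative sketch of that self-service flow, a port rule can be opened with a single Azure CLI command (the resource group, NSG, and rule names below are hypothetical placeholders, not values from this post):

```shell
# Allow inbound HTTPS (TCP 443) on an existing network security group.
# Resource group and NSG names are placeholders for illustration.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNsg \
  --name AllowHttpsInbound \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 443
```

The same rule can of course be created from the portal blade instead; the CLI form is handy once you do it often.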

Learn more about managing Azure VMs firewall ports by visiting our documentation, “How to open ports to a virtual machine with the Azure portal.”

Disks and image self-service is important too. In the virtualization days this was another big pain point: I had to hand my disks and images to an admin to get them into the system for use. Fortunately, storage is self-service in Azure and Azure Stack. Your IaaS subscription includes access to both storage accounts and managed disks, from which you can upload and download your disks and images.

You can learn more by visiting our documentation, “Upload a generalized VHD and use it to create new VMs in Azure” and “Download a Linux VHD from Azure.”
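As a rough sketch of the upload path, a VHD can be pushed to a storage account as a page blob and then turned into an image with the Azure CLI (account, container, and resource names here are hypothetical placeholders):

```shell
# Upload a generalized VHD to a storage account as a page blob.
# All names and the local file path are placeholders for illustration.
az storage blob upload \
  --account-name mystorageaccount \
  --container-name vhds \
  --name myDisk.vhd \
  --file ./myDisk.vhd \
  --type page

# Create a reusable image from the uploaded blob.
az image create \
  --resource-group myResourceGroup \
  --name myImage \
  --os-type Windows \
  --source https://mystorageaccount.blob.core.windows.net/vhds/myDisk.vhd
```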

Managed disks also give you the option to create and export snapshots.
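For example, a snapshot can be created and then exported via a time-limited SAS URL with the Azure CLI (resource names are hypothetical placeholders):

```shell
# Snapshot an existing managed disk (names are placeholders).
az snapshot create \
  --resource-group myResourceGroup \
  --name mySnapshot \
  --source myManagedDisk

# Export: grant temporary read access and receive a SAS URL for download.
az snapshot grant-access \
  --resource-group myResourceGroup \
  --name mySnapshot \
  --duration-in-seconds 3600
```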

Find more information by visiting the following resources:

Azure Managed Disks Overview
Managed Disks Snapshots
Azure Stack Managed Disks Considerations
Attach a managed data disk to an Azure VM

Other resources a VM owner can manage include load balancer configuration, DNS, VPN gateways, subnets, attach/detach disks, scale up/down, scale in/out, and so many other things it is astounding.

Support and troubleshooting

When there is a problem, no one wants to wait for someone else to help. The more tools you have to correct the situation the better. While operating one of the largest public clouds, the Azure IaaS team has learned what the top issues are facing customers and their support needs. To empower VM owners to solve these issues themselves, they have created a number of self-service support and troubleshooting features. Perhaps the most widely used is the Reset Password feature. Why wasn’t this feature around in the virtualization days?

Learn more by visiting our documentation for resetting access on an Azure Windows VM and resetting access on an Azure Linux VM.
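As a sketch of the self-service reset, the Azure CLI exposes the same capability as the portal’s Reset Password blade (VM names, username, and password below are hypothetical placeholders):

```shell
# Reset the admin password on a Windows VM (all values are placeholders).
az vm user update \
  --resource-group myResourceGroup \
  --name myWindowsVM \
  --username azureuser \
  --password 'NewP@ssw0rd!2019'

# On a Linux VM, replace the SSH public key instead.
az vm user update \
  --resource-group myResourceGroup \
  --name myLinuxVM \
  --username azureuser \
  --ssh-key-value "$(cat ~/.ssh/id_rsa.pub)"
```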

I need to mention a setting that has prevented me from creating a support problem because of my absentmindedness. It is the Lock feature. A lock can prevent any change or deletion on a VM or any other resource.

Learn more about locking VMs and other Azure resources by visiting our documentation, “Locking resources to prevent unexpected changes.”
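A delete lock can be placed on a VM with one Azure CLI command; a sketch, with hypothetical resource names:

```shell
# Prevent accidental deletion of a VM (names are placeholders).
# Lock types: CanNotDelete blocks deletes; ReadOnly also blocks changes.
az lock create \
  --name DoNotDeleteVM \
  --lock-type CanNotDelete \
  --resource-group myResourceGroup \
  --resource-name myVM \
  --resource-type Microsoft.Compute/virtualMachines
```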

Other useful troubleshooting and support features include re-deploying your VM to another host if you suspect your VM is having problems on the host it is currently on, checking boot diagnostics to see the state of the VM before it fully boots and is ready for connections, and reviewing performance diagnostics. As we learn and build these features in Azure, they eventually find their way to Azure Stack so that your admins don’t have to work so hard to support you.

Learn more by visiting our documentation, “Troubleshooting Azure Virtual Machines.”

Happy infrastructure admins

When you can take care of yourself, your admins can manage the underlying infrastructure without being interrupted by you. This means they can work on the things important to them and you can focus on what is important to you.

In this blog series

We hope you come back to read future posts in this series. Here are some of our planned upcoming topics:

Azure Stack at its core is an Infrastructure-as-a-Service (IaaS) platform
Start with what you already have
Foundation of Azure Stack IaaS
Protect your stuff
Pay for what you use
It takes a team
If you do it often, automate it
Build on the success of others
Journey to PaaS

Source: Azure

Breaking the wall between data scientists and app developers with Azure DevOps

As data scientists, we are used to developing and training machine learning models in our favorite Python notebook or an integrated development environment (IDE), like Visual Studio Code (VSCode). Then we hand off the resulting model to an app developer, who integrates it into the larger application and deploys it. Oftentimes, bugs and performance issues go undiscovered until the application has already been deployed. The resulting friction between app developers and data scientists to identify and fix the root cause can be a slow, frustrating, and expensive process.

As AI is infused into more business-critical applications, it is increasingly clear that we need to collaborate closely with our app developer colleagues to build and deploy AI-powered applications more efficiently. As data scientists, we are focused on the data science lifecycle, namely data ingestion and preparation, model development, and deployment. We are also interested in periodically retraining and redeploying the model to adjust for freshly labeled data, data drift, user feedback, or changes in model inputs.

The app developer is focused on the application lifecycle – building, maintaining, and continuously updating the larger business application that the model is part of. Both parties are motivated to make the business application and model work well together to meet end-to-end performance, quality, and reliability goals.

What is needed is a way to bridge the data science and application lifecycles more effectively. This is where Azure Machine Learning and Azure DevOps come in. Together, these platform features enable data scientists and app developers to collaborate more efficiently while continuing to use the tools and languages we are already familiar and comfortable with.

The data science lifecycle or “inner loop” for (re)training your model, including data ingestion, preparation, and machine learning experimentation, can be automated with the Azure Machine Learning pipeline. Likewise, the application lifecycle or “outer loop”, including unit and integration testing of the model and the larger business application, can also be automated with the Azure DevOps pipeline. In short, the data science process is now part of the enterprise application’s Continuous Integration (CI) and Continuous Delivery (CD) pipeline. No more finger pointing when there are unexpected delays in deploying apps, or when bugs are discovered after the app has been deployed in production. 

Azure DevOps: Integrating the data science and app development cycles

Let’s walk through the diagram below to understand how this integration between the data science cycle and the app development cycle is achieved.

A starting assumption is that both the data scientists and app developers in your enterprise use Git as their code repository. As a data scientist, any changes you make to training code will trigger the Azure DevOps CI/CD pipeline to orchestrate and execute multiple steps including unit tests, training, integration tests, and a code deployment push. Likewise, any changes the app developer or you make to application or inferencing code will trigger integration tests followed by a code deployment push. You can also set specific triggers on your data lake to execute both model retraining and code deployment steps. Your model is also registered in the model store, which lets you look up the exact experiment run that generated the deployed model.

With this approach, you as the data scientist retain full control over model training. You can continue to write and train models in your favorite Python environment. You get to decide when to execute a new ETL / ELT run to refresh the data to retrain your model. Likewise, you continue to own the Azure Machine Learning pipeline definition including the specifics for each of its data wrangling, feature extraction, and experimentation steps, such as compute target, framework, and algorithm. At the same time, your app developer counterpart can sleep comfortably knowing that any changes you commit will pass through the required unit, integration testing, and human approval steps for the overall application.
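As a minimal sketch of what that orchestration can look like in an Azure DevOps YAML pipeline (file paths and script names here are hypothetical, not from this post), a commit to training code runs unit tests, submits training, then runs integration tests:

```yaml
# Hypothetical azure-pipelines.yml sketch: on every commit to master
# touching training code, run unit tests, submit the Azure ML training
# pipeline, then run integration tests before any deployment push.
trigger:
  branches:
    include:
      - master
  paths:
    include:
      - training/*

steps:
  - script: pip install -r requirements.txt
    displayName: Install dependencies
  - script: pytest tests/unit
    displayName: Unit tests
  - script: python training/run_pipeline.py
    displayName: Submit Azure ML training pipeline
  - script: pytest tests/integration
    displayName: Integration tests
```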

With the soon-to-be-released Data Prep Services (box in bottom left of above diagram), you will also be able to set thresholds for data drift and automate the retraining of your models!

In subsequent blog posts, we will cover in detail more topics related to CI/CD, including the following:

Best practices to manage compute costs with Azure DevOps for Machine Learning
Managing model drift with Azure Machine Learning Data Prep Services
Best practices for controlled rollout and A/B testing of deployed models

Learn more

Azure CI/CD Pipeline documentation
Azure Machine Learning Pipeline documentation
Learn more about the Azure Machine Learning service.
Get started with a trial of Azure Machine Learning service.

Source: Azure

The Value of IoT-Enabled Intelligent Manufacturing

As the manufacturing industry tackles some significant challenges including an aging workforce, compliance issues, and declining revenue, the Internet of Things (IoT) is helping reinvent factories and key processes. At the heart of this transformation journey is the design and use of IoT-enabled machines that help lead to reduced downtime, increased productivity, and optimized equipment performance.

Learn how you can apply insights from real-world use cases of IoT-enabled intelligent manufacturing when you attend the Manufacturing IoT webinar on March 28th. For additional hands-on, actionable insights around intelligent edge and intelligent cloud IoT solutions, join us on April 19th for the Houston Solution Builder Conference.

Using IoT solutions to move from a reactive to predictive model

In the past, factory managers often had no way of knowing when a machine might begin to perform poorly or completely shut down. When something went wrong, getting the equipment back up and running was often time consuming and based on trial-and-error troubleshooting. And for the company, any unplanned downtime meant slowed or halted production, resulting in lower productivity and higher costs.

The development of IoT-enabled machines with sensors allows companies to improve overall efficiency, performance, and profitability. Rockwell Automation found it time consuming and challenging to monitor its equipment in remote locations. Using Microsoft Azure to connect them, Rockwell Automation now sees real-time performance information and can proactively maintain equipment before an incident occurs.

Kontron S&T, a Microsoft partner, also recently developed the SUSiEtec platform, an end-to-end IoT solution that enables companies to build scalable edge computing solutions using Microsoft Azure IoT Edge integration and customization services. With SUSiEtec, companies can dynamically decide where data analysis will take place and manage distributed IoT devices regardless of where they’re located or how many devices are used. Join the Manufacturing IoT webinar to learn more about SUSiEtec and how to develop secure, manageable IoT solutions for manufacturing.

Keeping IoT data secure with Azure Sphere

Using IoT to create the factory of the future also means additional access points into the factory network and systems, so creating a secure network is top priority. Factory managers typically access IoT data using mobile devices, which creates even more access points. For a true connected IoT experience and factory, security is foundational.

Azure Sphere provides a foundation of security and connectivity that starts in the silicon and extends to the cloud. Together, Azure Sphere microcontrollers (MCUs), secured OS, and turnkey cloud security service guard every Azure Sphere device accessing IoT data, IoT sensors, and IoT-enabled machines. By adding useful software to Edge hardware, factories are protected with IT-proven standards as well as new Operational Technology (OT) network security.

Getting ready to develop IoT solutions

Moving to a factory of the future starts with determining what you want to achieve through the IoT-enabled machine. If predictive maintenance is the end goal, start by conducting an inventory of data sources. Identify all potential sources and types of relevant data to determine what is most essential. Then you’ll need to lay the groundwork for a robust predictive model by pulling in data that includes both expected behavior and failure logs.

With the initial logistics determined, the next step is to create a model, then test and iterate to figure out which model is best at forecasting the timing of unit failures. By moving to a live operational setting, you can apply the model to live, streaming data to observe how it works in real-world conditions. After adjusting your maintenance processes, systems, and resources to act on the new insights, the final step is to integrate the model into operations with Azure IoT Central.
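The reactive-to-predictive idea above can be sketched in a few lines of plain Python. This is not Azure IoT Central or a trained ML model; it is a hypothetical stand-in threshold rule that learns an expected envelope from healthy telemetry and flags live readings that fall outside it:

```python
"""Toy sketch of moving from reactive to predictive maintenance:
learn what "normal" telemetry looks like from historical readings,
then flag live readings that drift outside that envelope. A real
solution would use a trained model and Azure IoT services; all data
and names here are illustrative placeholders."""
import statistics


def fit_envelope(normal_readings, sigmas=3.0):
    """Derive an expected range from telemetry recorded during healthy operation."""
    mean = statistics.fmean(normal_readings)
    std = statistics.stdev(normal_readings)
    return (mean - sigmas * std, mean + sigmas * std)


def flag_anomalies(stream, envelope):
    """Return the readings in a live stream that fall outside the envelope."""
    low, high = envelope
    return [r for r in stream if r < low or r > high]


# Historical vibration readings from a healthy machine (hypothetical data).
healthy = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7, 10.0, 10.2]
envelope = fit_envelope(healthy)

# A live stream containing one reading that may signal impending failure.
live = [10.0, 10.1, 14.5, 9.9]
print(flag_anomalies(live, envelope))  # the 14.5 reading is flagged
```

In production, the flagged readings would feed the alerting and work-order workflows described below rather than a print statement.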

Of course, not all companies have the skillset or resources to develop an IoT solution from scratch. To accelerate the design, development, and implementation process, partners can utilize the Microsoft Accelerator program. By using open-source code or leveraging proven architectures, companies can create a fully customizable solution and quickly connect devices to existing systems in minutes. For instance, the Predictive Maintenance solution accelerator combines key Azure IoT services like IoT Hub and Stream Analytics to proactively optimize maintenance and create automatic alerts and actions for remote diagnostics, maintenance requests, and other workflows.

Digitally transforming your own business and building or deploying IoT solutions that are highly scalable and economical to manage takes partnerships. Join Microsoft and Kontron S&T on March 28th for the webinar, Go from Reaction to Prediction – IoT in Manufacturing, and discover new approaches for achieving your business goals.
Source: Azure

Massive Entertainment hosts Tom Clancy’s The Division 2 on Google Cloud Platform

As multiplayer games continue to increase in popularity, game developers need a reliable cloud provider with a flexible global infrastructure to support real-time AAA gaming experiences. At Google Cloud we’ve spent many years building a world-class infrastructure and easy-to-use solutions so that gaming companies and development studios can focus on what they’re most passionate about—building great games.

With the recent release of Tom Clancy’s The Division 2 by Massive Entertainment, a Ubisoft studio, we’re excited to share that Google Cloud was selected as the public cloud provider to host game servers globally for the highly anticipated sequel. Massive and Google Cloud worked together to deliver a smooth online experience and services for all players at launch.

“Google Cloud performed beautifully in our early tests and private beta, and we are thrilled with its ability to scale in the early days of our launch,” said Fredrik Brönjemark, Online & Live Operations Director at Massive. “But more importantly, we were looking for a partner to trust with our game. Google Cloud’s team of engineers and gaming experts get it; they’ve played our games, and were always available to us with deep technical expertise, from when we initially designed the game infrastructure to private beta and now launch.”

Massive Entertainment was looking for reliable and scalable cloud services that could keep pace with global player demand. Google Cloud provides Massive with the ease, flexibility, and scalability to ensure consistently high game performance.

Google Cloud’s secure, global, high-speed fiber network allows for consistently high-performance experiences for players across regions. The scalable infrastructure also supports game data and core services required for gameplay, including matchmaking, high scores, stats, and inventory.

You can learn more about how game developers are using Google Cloud for game server hosting, platform services, and machine learning and analytics here.
And for more information about game development on Google Cloud, visit our website.
Source: Google Cloud Platform