Google Cloud named a leader in the Forrester Wave: Big Data NoSQL

We're pleased to announce that Forrester has named Google Cloud a Leader in The Forrester Wave™: Big Data NoSQL, Q1 2019. We believe the findings reflect Google Cloud's market momentum and what we hear from our satisfied enterprise customers using Cloud Bigtable and Cloud Firestore.

According to Forrester, half of global data and analytics technology decision makers either have implemented or are implementing NoSQL platforms, taking advantage of the benefits of a flexible database that serves a broad range of use cases. The report evaluates the top 15 vendors against 26 rigorous criteria for NoSQL databases to help enterprise IT teams understand their options and make informed choices for their organizations. Google scored 5 out of 5 in Forrester's evaluation criteria of data consistency, self-service and automation, performance, scalability, high availability/disaster recovery, and the ability to address a breadth of customer use cases. Google also scored 5 out of 5 in the ability-to-execute criterion.

How Cloud Firestore and Cloud Bigtable work for users

We're especially pleased that our recognition as a Leader in the Forrester Wave: Big Data NoSQL mirrors what we hear from our customers: databases have an essential role to play in a cloud infrastructure. The best ones can make application development easier, make the user experience better, and allow for massive scalability. Both Cloud Firestore and Cloud Bigtable include recently added features and updates that continue our mission of providing flexible database options.

Cloud Firestore is our fully managed, serverless document database that recently became generally available. It's designed and built for accelerating web, mobile, and IoT apps, since it allows for live synchronization and offline support. Cloud Firestore also brings a strong consistency guarantee and a global set of locations, plus support for automatic sharding, high availability, ACID transactions, and more. We've heard from Cloud Firestore users that they've been able to serve more users and move apps into production faster using the database as a powerful back end.

Cloud Bigtable is our fast, globally distributed, wide-column NoSQL database service that can scale to handle massive workloads. It scales data storage from gigabytes to petabytes while maintaining high-performance throughput and low-latency response times. It is the same database that powers many Google services, such as Search, Analytics, Maps, and Gmail. Customers running apps with Cloud Bigtable can provide users with data updates in multiple global regions thanks to multi-region replication. We hear from Cloud Bigtable users that it lets them provide real-time analytics with availability and durability guarantees to their users and customers. Use cases often include IoT, user analytics, advertising tech, and financial data analysis.

Download the full Forrester report here, and learn more about GCP database services here.
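As a quick illustration of the document model described above, here is a minimal sketch using the google-cloud-firestore Python client. The collection, document, and field names are invented for the example; this is a sketch, not code from the post.

import zlib  # placeholder removed below; only the firestore import is needed

from google.cloud import firestore

# Connect using application-default credentials.
db = firestore.Client()

# Write a document; Firestore creates the collection on first use.
game_ref = db.collection("games").document("championship_2019")
game_ref.set({"home": "Team A", "away": "Team B", "tipoff": "2019-04-08T21:00:00Z"})

# Strongly consistent read of the same document.
snapshot = game_ref.get()
print(snapshot.to_dict())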
Quelle: Google Cloud Platform

Hardware innovation for data growth challenges at cloud-scale

The Open Compute Project (OCP) Global Summit 2019 kicks off today in San Jose where a vibrant and growing community is sharing the latest in innovation to make hardware more efficient, flexible, and scalable.

For Microsoft, our journey with OCP began in 2014 when we joined the foundation and contributed the very same server and datacenter designs that power our global Azure cloud, but it didn't stop there. Each year at the OCP Summit, we contribute innovation that addresses the most pressing challenges for our industry, ranging from a modular and globally compatible server design and universal motherboard with Project Olympus, to hardware security with Project Cerberus, to a next-generation specification for SSD storage with Project Denali.

This year we're turning our attention to the exploding volume of data being created every day. Data is at the heart of digital transformation, and companies are leveraging data to improve customer experiences, open new markets, make employees and processes more productive, and create new sources of competitive advantage as they prepare for the future.

Data – the engine of Digital Transformation

The Global Datasphere*, which quantifies and analyzes the amount of data created, captured, and replicated in any given year across the world, is growing exponentially, and the growth is seemingly never-ending. IDC predicts* that the Global Datasphere will grow from 33 zettabytes (ZB) in 2018 to 175 ZB by 2025. To keep up with the storage demands stemming from all this data creation, IDC forecasts* that over 22 ZB of storage capacity must ship across all media types from 2018 to 2025, with nearly 59 percent of that capacity supplied by the HDD industry.

With this challenge on the horizon, the enterprise is fast becoming the world's data steward once again. In the recent past, consumers were responsible for much of their own data, but their reliance on and trust of today's cloud services, especially from connectivity, performance, and convenience perspectives, continues to increase, while the desire to store and manage data locally continues to decrease.

Moreover, businesses are looking to centralize data management and delivery (e.g., online video streaming, data analytics, data security, and privacy) as well as to leverage data to control their businesses and the user experience (e.g., machine-to-machine communication, IoT, and persistent personalization profiling). The responsibility to maintain and manage all this consumer and business data is driving the growth of cloud provider datacenters. As a result, the enterprise’s role as a data steward continues to grow, and consumers are not just allowing this, but expecting it. Beginning in 2019, more data will be stored in the enterprise core than in all the world's existing endpoints.

The demand for data storage

A few years ago, we started looking at scale challenges in the cloud regarding the growth of data and the future of data storage needs. The amount of data created in the Global Datasphere is the focus of the storage industry. Even with the amount of data that is discarded, overwritten, or sensed and never stored longer than milliseconds, there still exists a growing demand for storage capacity across industries, governments, enterprises, and consumers.

To live in a digitized world where artificial intelligence drives business processes, customer engagements, and autonomous infrastructure or where consumers' lives are hyper-personalized in nearly every aspect of behavior – including what time we'll be awakened based on the previous day's activities, overnight sleep patterns, and the next day's calendar – will require creating and storing more data than ever before.

IDC currently calculates that Data Age 2025* storage capacity shipments across all media types (HDD, SSD, NVM-flash/other, tape, and optical) over the next four years (2018–2021) will need to exceed the 6.9 ZB shipped across all media types over the past 20 years. IDC forecasts* that over 22 ZB of storage capacity must ship across all media types from 2018 to 2025 to keep up with storage demands. Around 59 percent of that capacity will need to come from the HDD industry and 26 percent from flash technology over the same time frame, with optical storage the only medium to show signs of fatigue as consumers continue to abandon DVDs in favor of streaming video and audio.

Introducing Microsoft’s Project Zipline

The ability to store and process data extremely efficiently is core to the cloud's value proposition. Azure continues to grow dramatically, as does the amount of data it stores for many very data-intensive workloads. To address this, we've developed a cutting-edge compression algorithm and optimized the hardware implementation for the types of data we see in our cloud storage workloads. By engineering innovation at the systems level, we've been able to simultaneously achieve higher compression ratios, higher throughput, and lower latency than the other algorithms currently available. This enables compression without compromise, allowing always-on data processing for industry usage models ranging from the cloud to the edge.

Microsoft's Project Zipline compression algorithm yields dramatically better results, with up to 2x higher compression ratios versus the commonly used Zlib-L4 64KB model. Enhancements like this can lead to direct customer benefits, such as potential cost savings; indirectly, access to petabytes or exabytes of capacity in a cost-effective way could enable new scenarios for our customers.
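To make the baseline in that comparison concrete, here is a small Python sketch that computes a compression ratio the way the Zlib-L4 64KB model describes it: zlib at level 4 applied to 64 KB blocks. It only illustrates how the metric is measured; it is not the Zipline algorithm, and the sample data is invented.

import zlib

BLOCK_SIZE = 64 * 1024  # 64 KB blocks, matching the Zlib-L4 64KB baseline

def compression_ratio(data: bytes, level: int = 4) -> float:
    """Compress data block by block and return original_size / compressed_size."""
    compressed_total = 0
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        compressed_total += len(zlib.compress(block, level))
    return len(data) / compressed_total

# Repetitive, log-like data compresses far better than random bytes.
sample = b"timestamp=2019-03-14;status=ok;latency_ms=12\n" * 2000
print(round(compression_ratio(sample), 2))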

We are open sourcing Project Zipline compression algorithms, hardware design specifications, and Verilog source code for register transfer language (RTL), with initial content available today and more coming soon. This contribution will provide collateral for integration into a variety of silicon components (e.g., edge devices, networking, and offload accelerators) across the industry for this new high-performance compression standard. Contributing RTL at this level of detail as open source to OCP is industry leading. It sets a new precedent for frictionless collaboration in the OCP ecosystem for new technologies and opens the door to hardware innovation at the silicon level. Over time, we anticipate Project Zipline compression technology will make its way into several market segments and usage models, such as network data processing, smart SSDs, archival systems, cloud appliances, general-purpose microprocessors, IoT, and edge devices.

Project Zipline is a cutting-edge compression technology optimized for a large variety of datasets, and our release of the RTL allows hardware vendors to use the reference design to produce chips that get the highest compression, lowest cost, and lowest power out of the algorithm. It's available to the OCP ecosystem, so members can contribute to it and create further benefit for the entire ecosystem, including Azure and our customers.

Project Zipline partners and ecosystem

I'm particularly proud that, as a leader in the cloud storage space, we're able to take all the investment and innovation we've created and share it through OCP so that our partners can provide better solutions for their customers as well.

I look forward to seeing more of the industry joining OCP and collaborating so their customers can also see the benefit.

You can follow these links to learn more about Microsoft’s Project Zipline from our GitHub specification and more about our open source hardware development.

* Source: Data Age 2025, sponsored by Seagate with data from IDC Global DataSphere, Nov 2018
Quelle: Azure

Let the queries begin: How we built our analytics pipeline for NCAA March Madness

It's hard to believe, but a whole year has passed since last year's epic March Madness®. As a result of our first year of partnership with the NCAA®, we used data analytics on Google Cloud to produce six live predictive television ads during the Men's Final Four® and Championship games (all proven true, for the record), as well as a slew of additional game and data analysis throughout the tournament. And while we were waiting for March to return, we also built a basketball court to better understand the finer mechanics of a solid jump shot.

This year we're back with even more gametime analysis, with the help of 30 or so new friends (more on that later). Now that Selection Sunday™ 2019 is upon us, we wanted to share a technical view of what we've been up to as we head into the tournament, the architectural flow that powers aspects of the NCAA's data pipelining, and what you can look forward to from Google Cloud as we follow the road to Minneapolis in April. We've also put together online Google Cloud training focused on analyzing basketball and whipped up a few Data Studio dashboards to get a feel for the data (Q: Since 2003, what year has had the highest average margin of victory in the Men's Sweet 16®? A: 2009).

ETL for basketball

Our architecture is similar to last year's, with a few new players in the mix: Cloud Composer, Cloud Scheduler, Cloud Functions, and Deep Learning VMs. Collectively, the tools used and the resulting architecture are very similar to traditional enterprise ETL and data warehousing, except that this is all running on fully managed and serverless infrastructure.

The first step was to get new game data into our historical dataset. We're using Cloud Scheduler to automate a Cloud Function that ingests raw game log data from Genius Sports every night. This fetches all of the latest game results and stores them in our data lake of decades of boxscore and play-by-play data sitting in Google Cloud Storage. The historical data corpus contains tens of thousands of files with varying formats and schemas. These files are the source of truth for any auditing.

As new data is ingested into Cloud Storage, an automated Cloud Composer orchestration renders several state-check queries to identify changes in the data, then executes a collection of Cloud Dataflow templates of Python-based Apache Beam graphs. These Apache Beam graphs do the heavy lifting of extracting, transforming, and loading the raw NCAA and Genius Sports data into BigQuery. The beauty here is that we can run these jobs for one game for testing, for every game for a complete reload, or for a slice of games (e.g., mens/2017/post_season) for targeted backfill. Cloud Dataflow can scale from one event to millions.

Data warehousing with BigQuery

With BigQuery as the center of gravity of our data, we can take advantage of views, which are virtual tables built with SQL. We've aggregated all of the team, game, player, and play-by-play data into various views, and have nested the majority of them into uber-views.

Note: You can now hack on your data (or any of our public datasets) for free, with BigQuery providing 10GB of free storage and 1TB of analysis per month. Additionally, you can always take advantage of the Google Cloud Platform free tier if you want to build out beyond BigQuery.

Below is a snippet of a sample SQL view that builds a team's averaged incoming metrics over their previous seven games.
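The post showed that view as an image; as a stand-in, here is a hypothetical sketch of what a rolling seven-game view could look like, issued through the google-cloud-bigquery Python client. The project, dataset, table, and column names are invented for illustration and are not the NCAA schema.

from google.cloud import bigquery

client = bigquery.Client()

# A window function averages each team's scoring over its previous seven games,
# excluding the current game, so the metric reflects purely "incoming" form.
ROLLING_FORM_SQL = """
CREATE OR REPLACE VIEW `my_project.ncaa.team_incoming_form` AS
SELECT
  team_id,
  game_date,
  AVG(points_scored) OVER (
    PARTITION BY team_id
    ORDER BY game_date
    ROWS BETWEEN 7 PRECEDING AND 1 PRECEDING
  ) AS avg_points_prev_7
FROM `my_project.ncaa.team_game_box_scores`
"""

client.query(ROLLING_FORM_SQL).result()  # run the DDL and wait for completion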
We've applied BigQuery's analytical functions with partitioning, which lets our analysis team create thousands of features inline in SQL instead of having to build the aggregations downstream in Python or R. And, as data is ingested via the ETL processes outlined above, the view data is immediately up to date.

We use layered views to build complex aggregations, like the one below, which comes in at about 182 lines of SQL. Here, we are looking at scoring time durations between every event in the play-by-play table in order to answer questions such as: How long has it been since the last score? How many shots were attempted within that window? Was a time-out called? Granted, 1.7GB is not 'big data' by any means; however, performing windowed-row scans can be very time- or memory-intensive.

Not in BigQuery. If you were to run this on your laptop, you'd burn 2GB of memory in the process. In BigQuery, you can simply run a query that is not only always up to date, but can also scale without any additional operational investment as your dataset grows (say, from one season to five). Plus, as team members finish new views, everyone in your project benefits.

Data visualization with Data Studio

BigQuery is powerful for rapid interactive analysis and honing SQL, but it can be a lonely place if you want to visualize data. With BigQuery's Data Studio integration, however, you can create visualizations straight from BigQuery with a few clicks. The following visualization is based on the view above, which calculates in-game metrics such as percentage of time tied, percentage of time leading, and percentage of time up by 10 or more. This helps answer questions around how much a team is controlling the score of a game.

It doesn't take a data scientist (or basketball expert) to find win-loss records for NCAA basketball teams or to notice Gonzaga is having a great year (Tuesday's loss notwithstanding). But with Data Studio, it's easy to see more detail: Gonzaga on average spent 28.8% of their minutes played being up by at least 20 points, and 50.4% of the time up by at least 10. (To be fair, Gonzaga's scoring dominance is in part a function of their conference and resulting schedule, but still.) Once we get into the tournament, you could imagine that these numbers might move a bit. If only we had a view for schedule-adjusted metrics. (Spoiler alert: we will!)

This is the kind of data you can't glean from a box score. It requires deeper analysis, which BigQuery lets you perform easily and Data Studio lets you bring to life without charge. Check out the Data Studio dashboard collection for more details.

Exploratory data analysis and feature engineering

Beyond our ETL processes with Cloud Dataflow, interactive analysis with BigQuery, and dashboarding with Data Studio, we also have tools for exploratory data analysis (EDA). For our EDA, we use two Google Cloud-optimized data science environments: Colab and Deep Learning VM images.

Colab is a free Jupyter environment that is optimized for Google Cloud and provides GPU support. It also has versioning, in-line collaborative editing, and a fully configured Python environment. It's like Google Docs for data nerds!

We use Colab to drive analysis that requires processing logic or processing primitives that can't be easily accomplished in SQL. One use case is the development of schedule-adjusted metrics for every team, for every calendar date, for every season.

Below is a snippet of our schedule-adjusted metrics notebook.
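That snippet also appeared as an image in the original post; the following is a hypothetical, simplified sketch of the flow it describes: pull game-level metrics from BigQuery into a Pandas DataFrame, build team-season dummy variables plus a home-court indicator, fit a ridge regression, and push the adjusted values back to BigQuery. All table and column names are invented for illustration.

import pandas as pd
import pandas_gbq
from sklearn.linear_model import Ridge

# One row per team-game, with the raw metric to be schedule-adjusted.
games = pandas_gbq.read_gbq(
    "SELECT team, opponent, season, is_home, off_efficiency "
    "FROM `my_project.ncaa.team_game_metrics`",
    project_id="my_project",
)

# Dummy-encode team-season and opponent-season identities; keep the home-court flag.
X = pd.get_dummies(
    games.assign(
        team_season=games["team"] + "_" + games["season"].astype(str),
        opp_season=games["opponent"] + "_" + games["season"].astype(str),
    )[["team_season", "opp_season", "is_home"]]
)
y = games["off_efficiency"]

# Ridge regression shrinks noisy team effects toward the mean, which stabilizes
# estimates for teams with few games against common opponents.
model = Ridge(alpha=1.0).fit(X, y)

# The fitted coefficients on the team-season dummies act as schedule-adjusted
# metrics; write them back to BigQuery for modeling and visualization.
adjusted_metrics = pd.DataFrame({"feature": X.columns, "adjusted_value": model.coef_})
pandas_gbq.to_gbq(adjusted_metrics, "ncaa.adjusted_metrics", project_id="my_project", if_exists="replace")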
We're approaching each game-level metric as a function of three things: the team's ability, the opponent's ability, and home-court advantage. To get the game-level data we need to do schedule adjustment, we rely on views in BigQuery.

The %%bigquery magic cell provides the ability to insert a query in-line and pump the results into a Pandas DataFrame. From there, we can flow this data through Pandas transformations and normalization, and then to scikit-learn, using ridge regression (with team-season dummy variables) to get schedule-adjusted versions of our metrics.

After a bit more Pandas wrangling, we can then create an informative scatter plot mapping raw and adjusted efficiency for all 353 Division I teams during the 2018-2019 season.

We end this particular journey with one last step: using the Pandas function pandas_gbq.to_gbq(adjusted_metrics, "adjusted_metrics", if_exists="replace") to pump this data back into BigQuery for use in model development and visualization.

You can read more about how we built schedule-adjusted metrics on our Medium collection, as well as in additional Colabs we'll be publishing during the tournament (or better yet, publish your own!).

More predictions, more BigQuery, more madness

With our ETL pipeline in place and a solid workflow for data and feature engineering, we can get to the fun and maddening part: predictions. In addition to revamping some of our predictions from last year, such as three-point shooting, turnover rates, and rebound estimations, we're adding some new targets to the mix, including dunks, scoring runs, and player contribution.

We're using a bit of scikit-learn, but we're mainly relying on BigQuery ML to train, evaluate, and serve our primary models. BigQuery ML enables you to train models in-line and hands the training and serving off to underlying managed infrastructure. Consider the simple model below. In our friendly BigQuery editor, we can control model type, data splitting, regularization, learning rate, and override class weights: machine learning in a nutshell.

There are lots of great tools for machine learning, and there are lots of ways to solve a machine learning problem. The key is using the right tool for the right situation. While scikit-learn, TensorFlow, Keras, and PyTorch all have their merits, for this case, BigQuery ML's ease and speed can't be beat. Not convinced? Try this Qwiklab designed for basketball analysis and you'll see what we mean!

The team

Since we didn't have to design our architecture from scratch, we wanted to expand and collaborate with more basketball enthusiasts. College students were a natural fit. We started by hosting the first-ever Google Cloud and NCAA Hackathon at MIT this past January, and after seeing some impressive work, we recruited about 30 students from across the country to join our data analyst ranks.

The students have split into two teams, looking at the concepts of 'explosiveness' and 'competitiveness,' each hoping to build a viable metric to evaluate college basketball teams. By iterating over Google Docs, BigQuery, and Colab, the students have been homing in on ways to use data and quantitative analysis to create definitions around previously qualitative ideas.

For example, sportscasters often mention how 'explosive' a team is at various points in a game. But aside from watching endless hours of basketball footage, how might you go about determining if a team was, in fact, playing explosively?
Our student analysts considered the various factors that go into explosive play, like dunks and scoring runs. By pulling up play-by-play data in BigQuery, they could easily find boxscore data with timestamps for all historical games, yielding a score differential. Using %%bigquery magic, they pivoted to Colab and explored the pace of play, creating time boundaries that isolated when teams went on a run in a game. From there, they created an expression of explosiveness, which will be used for game analysis during the tournament. You can read more about their analysis and insights at g.co/marchmadness.

Still not enough March Madness and data analytics for you? We understand. While we wait for the first round of the tournament to begin, check in with the 2019 Kaggle competition, and keep an eye on g.co/marchmadness for gametime insights and predictions about key matchups (three-pointers, rebounds, close games, and more). We'll be covering both the men's and women's tournaments this year.

See you on the court, and let the queries begin.
Quelle: Google Cloud Platform

Take Mobile Gaming to the Next Level with Location

Games are all about creating worlds and stories. The richer the world, the more immersive the game. The earliest video games (think Pong, 1972) were limited to a flat, two-dimensional screen. But even so, Pong was awesome. Believe us, we played a lot of Pong (not to mention the several decades of games that followed). But gamers today want more. Detailed storytelling and immersive world building are now standard in games. This means there's an increasing expectation for game worlds to be realistic 3D environments on larger scales.

At the same time this shift was happening in gaming, smartphones, big data, and machine learning propelled maps from a flat image on paper to a highly personalized, living model of the world. And it was at the intersection of these two things that we saw the chance to build something to enable developers to create a whole new class of gaming experiences. This is why we launched Google Maps Platform's gaming solution last year. In the last year, five games launched on our platform and we've learned a lot about real-world games.

Location unlocks AR and social gameplay

Rich, dynamic, and contextual location data allows game developers to augment and enhance social and AR gaming experiences. This is why three of the top 10 ARCore games(1) in the last year were built on Google Maps Platform. When it comes to location-driven social play, players can not only team up, but also have their unique location enrich multiplayer gaming. Next Games learned how powerful this can be in The Walking Dead: Our World. In the game, players form groups, known as guilds, and are able to send flares to allow other players in the same guild to virtually join them at their location to complete missions around the flare. When we asked about the impact location has had on the social experience of their game, Director Riku Suomela said, "If we didn't have geolocation, the current system with social wouldn't work." In fact, ninety percent of the game's daily active users are in a guild, and three out of every four players play the game with friends, so social engagement is high.

Location increases player engagement and retention

Today, gamers are people of every age and walk of life. They are rushing commuters, busy shoppers, and people just going about their daily lives. Incorporating location into a mobile game helps developers make gameplay more immersive and more personal. Every new location gives players a chance to engage with a game differently. For example, players hunting monsters could find toothier ones near their dentist's office or hungrier ones around restaurants.

A real example of this is Ludia's Jurassic World Alive. They found that players opened the game twice as often as Ludia's non-location-based games. Similarly, Next Games' The Walking Dead: Our World achieved a 54% higher seven-day retention rate compared to the Top 50 US games average(2). When games connect with players where they are, in-game experiences become more immersive, and this translates to a drastic increase in engagement and retention.

Location can add new life to an existing mobile game

When we started building Google Maps Platform's gaming offering, we had a simple idea in mind: give developers the tools to build brand new real-world games. But thanks to creative partners, we realized the possibilities are even broader than we expected.
Real-world games don't need to be built from scratch; we've seen location intelligence bring new life to existing games as well. mixi recently added a map mode to Monster Strike(3). In 2018, Monster Strike became the highest-grossing mobile app of all time(4). Monster Strike was already a popular game, but when mixi began re-engaging their user base with location-based in-game features, they saw a 30% increase in daily sessions per user, and 50% of users who engaged with the location component played the game for five or more consecutive days. With mixi, we learned that game developers don't need to wait for their next game release to start incorporating location into their gameplay. It can be a powerful new dimension to an existing game.

Location-driven features and real-world gameplay do a lot more than just add to the experience of a game. They redefine it. We think this has incredible potential even beyond what we've already seen, and we're excited to work with developers around the world to bring more real-world gaming experiences to life with Google Maps Platform. Whether you're looking to get people racing across the real streets of Los Angeles, rescuing survivors from zombies, or battling in futuristic landscapes located right in their own neighborhoods, the opportunities are vast. We can't wait to work with you to build something awesome.

Ready to learn more? Come to the Google booth, listen to our talk on Tuesday, March 19th, 3:00-3:30 pm in Room 2016, West Hall at GDC, or visit us at g.co/mapsplatform/gaming.

(1) Source: Internal Google data
(2) Source: AppAnnie 2018
(3) MONSTER STRIKE, XFLAG, and mixi are trademarks or registered trademarks of mixi, Inc. ©️XFLAG
(4) Source: Sensor Tower 2018
Quelle: Google Cloud Platform

Turning data into NCAA March Madness insights

Whether we're collecting it, storing it, analyzing it, or just trying to make sense of it all, nearly all organizations wrangle with data. And this is particularly true for an organization like the NCAA®, with more than 80 years' worth of data on everything from student-athlete performance to March Madness® tournament results.

Last year, we teamed up with the NCAA to help them bring together their wealth of historical data so they could better support students and schools, as well as delight fans. During the 2018 March Madness tournament, we used data analytics on Google Cloud to help us better understand the game and build some fun predictions for what might happen. We turned these real-time predictions into TV commercials during the Final Four, and we weren't far off the mark!

In connection with this year's March Madness tournament, we're extending our NCAA campaign to developers everywhere with training that enables anyone with an interest in basketball and data analytics to dive in. More and more developers want to use Google Cloud, and we are ready to meet that demand. (In fact, a recent study by Indeed found that Google Cloud skills are the fastest-growing cloud skills in demand.)

We've published a new series of Qwiklabs training to teach you how to use BigQuery to analyze NCAA basketball data with SQL and build a machine learning model to make your own predictions. At Google Cloud Next on April 9-11 (right after the Final Four), we'll be hosting two bootcamps (Sunday and Monday) that use NCAA data to show you how to build a data science environment covering ingest, exploration, training, evaluation, deployment, and prediction. We're co-hosting a predictive modeling competition with Kaggle that lets data scientists show their chops (and compete to win $10,000!). And we've published a technical blog post and a whitepaper to give you a deeper look under the hood.

We're also demonstrating our platform's accessibility and ease of use by recruiting 30 college students from all over the country to expand our all-star predictions team. Using the same Google Cloud services that any organization would use to perform data analysis at scale, our team of student developers will be delivering data-driven predictions and insights throughout the tournament. You can see it all in action at g.co/marchmadness, where you'll also find links to all our training, certifications, resources, and more.

Although our campaign is about college basketball, the NCAA's challenge in gaining insights from data reflects the same kind of data challenges faced by most enterprises, and many are struggling to find the right skilled workforce to help. We hope this campaign shows how easy and accessible Google Cloud can be for developers everywhere. And we hope that by providing a fun and engaging way to learn our data platform, we can train millions of new Google Cloud developers and help organizations all over the world.

To learn more about analytics on Google Cloud, visit our website.
Quelle: Google Cloud Platform

WD Blue SN500: Western Digital makes its blue SSD faster

Until now, the WD Blue was only available with a SATA interface; the SN500 variant switches to PCIe with the NVMe protocol. In most cases this makes the SSD drastically faster, even though Western Digital noticeably holds back on lanes and cache in favor of the price. (Solid State Drive, storage media)
Quelle: Golem

Now available for preview: Workload importance for Azure SQL Data Warehouse

Azure SQL Data Warehouse is a fast, flexible, and secure analytics platform for enterprises of all sizes. Today we are announcing the preview availability of workload importance on the Gen2 platform to help customers manage resources more efficiently. Workload importance gives data engineers the ability to classify requests by importance. Requests with higher importance are guaranteed quicker access to resources, which helps meet SLAs.

“More with less” is often the motto when it comes to operating data warehousing solutions. The ability to easily scale up compute resources gives data engineers tremendous flexibility. However, when there is budget pressure and scaling down is required, problems can arise. Workload importance allows high-business-value work to meet SLAs in a shared environment with fewer resources.

An example of workload importance is shown below. The CEO’s request was submitted last and classified with high importance. Because the CEO’s request has high importance, it is granted access to resources before the Analyst requests, allowing it to complete sooner.

Get started now classifying requests with importance

Classifying requests is done with the new CREATE WORKLOAD CLASSIFIER syntax. Below is an example that maps the login for the ExecutiveReports role to ABOVE_NORMAL importance and the AdhocUsers role to BELOW_NORMAL importance. With this configuration, members of the ExecutiveReports role have their queries complete sooner because they get access to resources before members of the AdhocUsers role.

CREATE WORKLOAD CLASSIFIER ExecReportsClassifier
WITH (WORKLOAD_GROUP = 'mediumrc'
,MEMBERNAME = 'ExecutiveReports'
,IMPORTANCE = above_normal);

CREATE WORKLOAD CLASSIFIER AdhocClassifier
WITH (WORKLOAD_GROUP = 'smallrc'
,MEMBERNAME = 'AdhocUsers'
,IMPORTANCE = below_normal);

For more information on workload importance refer to the Classification and Importance overview topics in the documentation. Check out the CREATE WORKLOAD CLASSIFIER doc as well.

See workload importance in action in the below videos:

Workload Importance concepts
Workload Importance scenarios

Next Steps

To get started today, create an Azure SQL Data Warehouse.
For feature requests, please vote on our UserVoice.
To stay up-to-date on the latest Azure SQL Data Warehouse news and features, follow us on Twitter @AzureSQLDW.

Quelle: Azure

Achieve more with Microsoft Game Stack

This blog post was authored by Kareem Choudhry, Corporate Vice President, Microsoft Gaming Cloud.

Microsoft is built on the belief of empowering people and organizations to achieve more – it is the DNA of our company. Today we are announcing a new initiative, Microsoft Game Stack, in which we commit to bringing together Microsoft tools and services that will empower game developers like yourself, whether you’re an indie developer just starting out or a AAA studio, to achieve more.

This is the start of a new journey, and today we are only taking the first steps. We believe Microsoft is uniquely suited to deliver on that commitment. Our company has a long legacy in games – and in building developer-focused platforms.

There are 2 billion gamers in the world today, playing a broad range of games, on a broad range of devices. There is as much focus on video streaming, watching, and sharing within a community as there is on playing or competing. As game creators, you strive every day to continuously engage your players, to spark their imaginations, and inspire them, regardless of where they are, or what device they’re using. Today, we’re introducing Microsoft Game Stack, to help you do exactly that.

What exactly is Microsoft Game Stack?

Game Stack brings together all of our game-development platforms, tools, and services—such as Azure, PlayFab, DirectX, Visual Studio, Xbox Live, App Center, and Havok—into a robust ecosystem that any game developer can use. The goal of Game Stack is to help you easily discover the tools and services you need to create and operate your game.

The cloud plays a critical role in Game Stack, and Azure fills this vital need. Azure provides building blocks like compute and storage, as well as cloud-native services ranging from machine learning and AI to push notifications and mixed reality spatial anchors. Azure is already available in 54 regions globally, including China, and continues to invest in building highly secure and sustainable cloud infrastructure and additional services for game developers. Azure’s global scale is what will give Project xCloud streaming technology the scale to deliver a great gaming experience for players worldwide, regardless of their device and location.

Already with Azure, companies like Rare, Ubisoft, and Wizards of the Coast are hosting multiplayer game servers, safely and securely storing player data, analyzing game telemetry, protecting their games from DDoS attacks, and training AI to create more immersive gameplay.

While Azure is part of Game Stack, it’s important to call out that Game Stack is cloud, network, and device agnostic. And we’re not stopping here.

What’s new?

The next piece of Game Stack is PlayFab, a complete backend service for building and operating live games. A year ago, we welcomed PlayFab into Microsoft through an acquisition. Today we’re excited to announce we are bringing PlayFab into the Azure family. Together, Azure and PlayFab are a powerful combination: Azure brings reliability, global scale, and enterprise-level security; PlayFab provides Game Stack with managed game-development services, real-time analytics, and LiveOps capabilities. Last fall, we saw what these two platforms can do together with PlayFab Multiplayer Servers, which allows you to safely launch and scale up multiplayer games by dynamically hosting your servers with Azure cloud compute.

To quote PlayFab’s co-founder James Gwertzman, “Modern game creators are less like movie directors, and more like cruise directors. Long-term success requires engaging players in a continuous cycle of creation, experimentation, and operation. It’s no longer possible to just ship your game and move on.” PlayFab supports all major devices, from iOS and Android, to PC and web, to Xbox, Sony PlayStation, and Nintendo Switch, and all major game engines, including Unity and Unreal. PlayFab will also continue to support all major clouds going forward.

Today we’re also excited to announce five new PlayFab services in preview.

In public preview today:

PlayFab Matchmaking: Powerful matchmaking for multiplayer games, adapted from Xbox Live matchmaking, but now available to all games and all devices.

In private preview today (contact us to join the preview):

PlayFab Party: Voice and chat services, adapted from Xbox Party Chat, but now available to all games and for all devices. Party leverages Azure Cognitive Services for real-time translation and transcription to make games accessible to more players.
PlayFab Game Insights: Combines robust real-time game telemetry with game data from multiple other sources to measure your game’s performance and create actionable insights. Powered by Azure Data Explorer, Game Insights will offer connectors to existing first- and third-party data sources including Xbox Live.
PlayFab Pub Sub: Subscribe your game client to messages pushed from PlayFab’s servers via a persistent connection, powered by Azure SignalR. This enables scenarios such as real-time content updates, matchmaking notifications, and simple multiplayer gameplay.
PlayFab User Generated Content: Engage your community by allowing players to create and safely share user generated content with other players. This technology was originally built to support the Minecraft marketplace.

Growing the Xbox Live community

Another major component of Game Stack is Xbox Live. Over the past 16 years, Xbox Live has become one of the most vibrant and engaged gaming communities in the world. It is also a safe and inclusive network that has broken down boundaries in how gamers connect across devices.

Today, we’re excited for Xbox Live to become part of Microsoft Game Stack, providing identity and community services. Under Game Stack, Xbox Live will expand its cross-platform capabilities, as we introduce a new SDK that brings this community to iOS and Android devices.

Mobile developers will now be able to reach some of the most highly engaged and passionate gamers on the planet with Xbox Live. These are just a few of the benefits for mobile developers:

Trusted Game Identity: With the new Xbox Live SDK, developers can focus on creating great games and leverage Microsoft‘s trusted identity network to support log-in, privacy, online safety, and child accounts. 
Frictionless Integration: New a la carte service offerings and no Xbox Live certification pass give mobile developers flexibility in how they build and update their games. Developers just use the services that best fit their needs.
Vibrant Gaming Community: Reach Xbox Live’s growing community and connect gamers across a multitude of platforms. Find creative ways to enable achievements, Gamerscore, and “hero” stats, which have their own out-of-game experience, to keep gamers engaged.

Other Game Stack components

Other components of Game Stack include Visual Studio, Visual Studio Code, Mixer, DirectX, Azure App Center, and Havok. In the coming months, as we work to improve and grow Game Stack, you’ll see deeper connections between these services as we unify them to work more seamlessly together.

As an example of how this integration is already underway, today we’re bringing together PlayFab and these Game Stack components:

App Center: Crash log data from App Center is now connected to PlayFab, allowing you to better understand and respond to problems in your game in real-time by tying crash logs back to individual player profiles.
Visual Studio Code: With PlayFab’s new plug-in for Visual Studio Code, editing and updating Cloud Script just got a lot easier.

Create your world today and achieve more

As we expand our focus to the cloud, the nature of the platform may be changing, but our commitment to empower game developers like yourself is unwavering, and we’re looking forward to the journey ahead with Microsoft Game Stack. Our teams are inspired and excited by the possibilities as we start to pull together all these great services and technologies. Please be sure to share your feedback with us as we go, so we can help you achieve more. If you’re at GDC, stop by the Microsoft booth in the South Hall of the Moscone Center to try out many of the new services, and to learn more about the exciting opportunities ahead.
Quelle: Azure