Identity now available in SQL Data Warehouse

Azure SQL Data Warehouse (SQL DW) is a SQL-based, fully managed, petabyte-scale cloud solution for data warehousing. SQL DW is highly elastic: you can provision in minutes and scale capacity in seconds. You can scale compute and storage independently, allowing you to burst compute for complex analytical workloads or scale down your warehouse for archival scenarios, and pay based on what you use instead of being locked into predefined cluster configurations.

IDENTITY has been a long-standing customer ask for SQL Data Warehouse. We’re excited to announce that Azure SQL Data Warehouse now supports the IDENTITY column property, SET IDENTITY_INSERT syntax, and generation of IDENTITY values on load. IDENTITY is particularly important in data warehousing because it makes creating surrogate keys much easier.

Surrogate keys are fundamental to dimensional modelling because they uniquely identify each row in a dimension. Since they are typically integer values, they also compress well and compare quickly. While UUIDs can often be used for similar purposes, they are harder to manage, don’t intrinsically contain temporal information, and perform worse; for large data warehouses, the 4x size of a UUID compared with a traditional 4-byte IDENTITY value really adds up. The previous method of assigning monotonically increasing surrogate keys involved left outer joining the staging table to the dimension, taking the maximum existing surrogate key value, and adding a ROW_NUMBER computed over the incoming rows. This solution was clunky and invoked a costly broadcast data movement.
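For illustration, the old pattern looked roughly like this; the stg.Customer and dbo.DimCustomer tables and their columns are hypothetical and not part of the feature announcement:

SELECT ISNULL(mx.MaxKey, 0) + ROW_NUMBER() OVER (ORDER BY s.CustomerBK) AS CustomerKey
     , s.CustomerBK
     , s.CustomerName
FROM stg.Customer AS s
LEFT OUTER JOIN dbo.DimCustomer AS d
    ON s.CustomerBK = d.CustomerBK
CROSS JOIN (SELECT MAX(CustomerKey) AS MaxKey FROM dbo.DimCustomer) AS mx
WHERE d.CustomerBK IS NULL;

With IDENTITY on the dimension’s key column, the same load collapses to a plain INSERT…SELECT and the key assignment happens automatically.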

We hope that by adding this feature, we’ve made data management in SQL DW easier and better for our customers.

Keep in mind that the IDENTITY property is not synonymous with the uniqueness constraints often imposed on IDENTITY columns; IDENTITY by itself does not enforce uniqueness.

Next steps

Get started by creating IDENTITY columns in a table today. It’s as simple as:

CREATE TABLE dbo.T1
( C1 INT IDENTITY(1,1) NOT NULL
, C2 INT NULL
)
WITH
( DISTRIBUTION = HASH(C2)
, CLUSTERED COLUMNSTORE INDEX
)
;
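If you need to load explicit values into the IDENTITY column, for example when migrating an existing dimension, SET IDENTITY_INSERT lets you do so. A minimal sketch against the dbo.T1 table created above, with purely illustrative values:

SET IDENTITY_INSERT dbo.T1 ON;

INSERT INTO dbo.T1 (C1, C2)
VALUES (100, 1);

SET IDENTITY_INSERT dbo.T1 OFF;

With IDENTITY_INSERT ON you must supply the column list and a value for C1; switch it back OFF to resume automatic value generation.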

Bear in mind that the IDENTITY property cannot be used in the following scenarios:

Where the column data type is not INT or BIGINT
Where the column is also the distribution key
Where the table is an external table

Learn more about adding IDENTITY functionality to your tables today by visiting our documentation or our T-SQL syntax page.

Learn more

What is Azure SQL Data Warehouse?
SQL Data Warehouse best practices
Video library
MSDN forum
Stack Overflow forum

Source: Azure

Event Hubs Capture (formerly Archive) is now Generally Available

Today we are announcing that Azure Event Hubs Capture, released in public preview in September 2016 as Azure Event Hubs Archive, is now generally available.

This capability adds an important dimension to Azure Event Hubs, which is a highly scalable data streaming platform and event ingestion service capable of receiving and processing millions of events per second. Event Hubs Capture makes it easy to send this data to persistent storage without using code or configuring other compute services. You can currently use it to push data directly from Event Hubs to Azure Storage as blobs. In the near future, we will also support Azure Data Lake Store. Other benefits include:

Simple setup: You can use either the Azure portal or an Azure Resource Manager template to configure Event Hubs to take advantage of Capture capability.

Reduced total cost of ownership: Event Hubs handles all the management, so there is minimal overhead involved in setting up and tracking your custom job processing mechanisms.

Integrated with your destination: Just choose your Azure Storage account, and soon Azure Data Lake Store, and Event Hubs Capture will automatically push the data into your repositories.

Near-real-time batch analytics: Event data is available within minutes of ingress into Event Hubs. This enables the most common near-real-time analytics scenarios without having to construct separate data pipelines.

Perfect on-ramp for Big Data: With Capture enabled, a single Event Hubs stream can feed both real-time and batch-based pipelines, making it easy to compose your Big Data solutions with Event Hubs.

With the move to general availability, beginning August 1, 2017, Event Hubs Capture will be charged at $0.10 per hour. For detailed pricing, please refer to Event Hubs pricing.

Next steps

We have a few additional resources that can jump-start your use of Event Hubs. After learning more about this new feature and Event Hubs more generally, you can explore how to use templates to enable the Capture capability on your Event Hub. Lastly, we hope you’ll let us know what you think about newer sinks and newer serialization formats.
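As a rough sketch of the template route, Capture is configured through a captureDescription block on the event hub resource. The namespace, hub, and storage names below are placeholders, and the exact schema shown here is an assumption to verify against the template reference:

{
  "type": "Microsoft.EventHub/namespaces/eventhubs",
  "name": "mynamespace/myhub",
  "apiVersion": "2017-04-01",
  "properties": {
    "messageRetentionInDays": 1,
    "partitionCount": 4,
    "captureDescription": {
      "enabled": true,
      "encoding": "Avro",
      "intervalInSeconds": 300,
      "sizeLimitInBytes": 314572800,
      "destination": {
        "name": "EventHubArchive.AzureBlockBlob",
        "properties": {
          "storageAccountResourceId": "[resourceId('Microsoft.Storage/storageAccounts', 'mystorageaccount')]",
          "blobContainer": "capture"
        }
      }
    }
  }
}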

If you have any questions or suggestions, leave us a comment below.
Source: Azure

Text Analytics API now supports analyzing sentiment in 16 languages

You can now analyze the sentiment of your text in 12 additional languages. With this release, you can get a more complete view of your customers’ voice and understand how they feel about your product or service, an international event, or a news topic.

Sentiment analysis is now supported in Danish, Dutch, English, Finnish, French, German, Greek, Italian, Japanese, Norwegian, Polish, Portuguese, Russian, Spanish, Swedish, and Turkish. For details on the languages supported across all the capabilities, see the Text Analytics API documentation.
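As a rough sketch of what a request looks like, each document is simply tagged with its language code. The region, key, and sample text below are placeholders, so check the API reference for the endpoint that matches your subscription:

curl -X POST "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{"documents":[{"id":"1","language":"fr","text":"Le service était excellent !"},{"id":"2","language":"de","text":"Die Lieferung war leider zu spät."}]}'

The response contains a sentiment score between 0 and 1 for each document, with values close to 1 indicating positive sentiment.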

Text Analytics is easy to get started with: try it yourself through the demo experience. Customers around the world are already using these capabilities, and it’s incredibly easy to do so. One such way is through Microsoft Flow, where you can start analyzing tweets and visualizing the results in Power BI with a few clicks. Check out the Flow template to see this in action.

The Text Analytics API is one of Microsoft’s Cognitive Services, which let you build apps with powerful algorithms using just a few lines of code. You can get started for free with a trial account today.
Source: Azure

Announcing Docker 17.06 Community Edition (CE)

Today we released Docker CE 17.06 with new features, improvements, and bug fixes. Docker CE 17.06 is the first Docker version built entirely on the Moby Project, which we announced in April at DockerCon. You can see the complete list of changes in the changelog, but let’s take a look at some of the new features.
We also created a video version of this post here:

Multi-stage builds
The biggest feature in 17.06 CE is that multi-stage builds, announced in April at DockerCon, have come to the stable release. Multi-stage builds allow you to build cleaner, smaller Docker images using a single Dockerfile.
Multi-stage builds work by building intermediate images that produce an output. That way you can compile code in an intermediate image and use only the output in the final image. For instance, Java developers commonly use Apache Maven to compile their apps, but Maven isn’t required to run them. Multi-stage builds can result in substantial image-size savings:
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
maven               latest              66091267e43d        2 weeks ago         620MB
java                8-jdk-alpine        3fd9dd82815c        3 months ago        145MB
Let’s take a look at our AtSea sample app, which implements a small storefront application.

AtSea uses a multi-stage build with two intermediate stages: a Node.js base image to build the ReactJS front end and a Maven base image to compile the Spring Boot app, producing a single final image.
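A trimmed-down sketch of such a Dockerfile follows; the paths, image tags, and jar name are illustrative rather than copied verbatim from the AtSea repository:

# Stage 1: build the ReactJS front end with Node.js
FROM node:8 AS storefront
WORKDIR /usr/src/atsea/react-app
COPY react-app/ .
RUN npm install && npm run build

# Stage 2: compile the Spring Boot app with Maven
FROM maven:3.5-jdk-8 AS appserver
WORKDIR /usr/src/atsea
COPY pom.xml .
COPY src ./src
RUN mvn -B package -DskipTests

# Final stage: ship only the build outputs, with no Node.js or Maven inside
FROM java:8-jdk-alpine
COPY --from=storefront /usr/src/atsea/react-app/build/ /static
COPY --from=appserver /usr/src/atsea/target/atsea-0.0.1-SNAPSHOT.jar /app/atsea.jar
ENTRYPOINT ["java", "-jar", "/app/atsea.jar"]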

The final image is only 209MB, and doesn’t have Maven or node.js.
There are other builder improvements as well, including the --build-arg flag on docker build, which lets you set build-time variables. The ARG instruction lets Dockerfile authors define values that users can set at build time using the --build-arg flag.
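Here is a minimal, hypothetical example of the two working together; the variable name and base image are made up for illustration:

# Dockerfile
FROM alpine:3.6
ARG APP_VERSION=dev
# Capture the build-time value in the image as an environment variable
ENV APP_VERSION=${APP_VERSION}
CMD ["sh", "-c", "echo running version $APP_VERSION"]

Build it with:

$ docker build --build-arg APP_VERSION=1.2.3 -t myapp:1.2.3 .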
Logs and Metrics
 
Metrics
We currently support metrics through an API endpoint in the daemon. You can now expose docker’s /metrics endpoint to plugins.

$ docker plugin install --grant-all-permissions cpuguy83/docker-metrics-plugin-test:latest

$ curl http://127.0.0.1:19393/metrics

This plugin is for example only. It runs a reverse proxy on the host’s network that forwards requests to the local metrics socket in the plugin. In real scenarios you would likely either push the collected metrics to an external service or make the metrics available for collection by a service such as Prometheus.
Note that while metrics plugins are available on non-experimental daemons, the metric labels are still considered experimental and may change in future versions of Docker.
 
Log Driver Plugins
We have added support for log driver plugins.
 
Service logs
docker service logs has moved out of the Edge release and into Stable, so you can easily get consolidated logs for an entire service running on a Swarm. We’ve added an endpoint for logs from individual tasks within a service as well.
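For example, assuming a service named my_web is already running on the swarm (the name is illustrative), the consolidated logs are a single command away:

$ docker service logs -f my_web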

Networking
 
Node-local network support for Services
Docker supports a variety of networking options. With Docker 17.06 CE, you can now attach services to node-local networks. This includes networks like Host, Macvlan, IPVlan, Bridge, and local-scope plugins. For instance, for a Macvlan network you can create node-specific network configurations on the worker nodes and then create a network on a manager node that brings in those configurations:
[Wrk-node1]$ docker network create --config-only --subnet=10.1.0.0/16 local-config

[Wrk-node2]$ docker network create --config-only --subnet=10.2.0.0/16 local-config

[Mgr-node2]$ docker network create --scope=swarm --config-from=local-config -d macvlan mynet

[Mgr-node2]$ docker service create --network=mynet my_new_service

Swarm mode
We have a number of new features in swarm mode. Here are just a few of them:

Configuration Objects
We’ve created a new configuration object for swarm mode that allows you to securely pass along configuration information in the same way you pass along secrets.
$ echo "This is a config" | docker config create test_config -

$ docker service create --name=my-srv --config=test_config …

$ docker exec -it 37d7cfdff6d5 cat test_config

This is a config

Certificate Rotation Improvements
The swarm mode public key infrastructure (PKI) system built into Docker makes it simple to securely deploy a container orchestration system. The nodes in a swarm use mutual Transport Layer Security (TLS) to authenticate, authorize, and encrypt the communications between themselves and other nodes in the swarm. Since this relies on certificates, it’s important to rotate those frequently. Since swarm mode launched with Docker 1.12, you’ve been able to schedule certificate rotation as frequently as every hour. With Docker CE 17.06 we’ve added the ability to immediately force certificate rotation on a one-time basis.
docker swarm ca --rotate
Swarm Mode Events
You can use docker events to get real-time event information from Docker. This is really useful when writing automation and monitoring applications that work with Docker. But until Docker CE 17.06, there was no event support for swarm mode. Now docker events will also return information on services, nodes, networks, and secrets.

Dedicated Datapath
The new --datapath-addr flag on docker swarm init allows you to isolate swarm mode management traffic from the data passed around by the application. That helps protect the cluster from I/O-greedy applications. For instance, if you initiate your cluster with:
docker swarm init --advertise-addr=eth0 --datapath-addr=eth1
cluster management traffic (Raft, gRPC, and gossip) will travel over eth0, and services will communicate with each other over eth1.

Desktop Editions
We’ve got three new features in Docker for Mac and Windows.

GUI option to reset Docker data without losing all settings
Now you can reset your data without resetting your settings.

Add an experimental DNS name for the host
If you’re running containers on Docker for Mac or Docker for Windows and you want to reach ports that other containers publish on the host, you can use a new experimental host name: docker.for.mac.localhost or docker.for.win.localhost. For instance:
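A minimal sketch of the idea, assuming an nginx container publishing port 80 on the host (both images here are only illustrative):

$ docker run -d -p 80:80 nginx
$ docker run --rm busybox wget -qO- http://docker.for.mac.localhost:80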

Login certificates for authenticating registry access
You can now add certificates to Docker for Mac and Docker for Windows so that you can authenticate to registries with certificates, not just a username and password. This makes accessing Docker Trusted Registry, as well as the open source Registry and any other registry application, fast and easy.

Cloud Editions
 
Our Cloudstor volume plugin is available on both Docker for AWS and Docker for Azure. In Docker for AWS, support for persistent volumes (both global EFS-based and attachable EBS-based) is now available in Stable, and we support EBS volumes across Availability Zones.
For Docker for Azure, we now support deploying to Azure Gov. Support for persistent volumes through Cloudstor backed by Azure File Storage is now available in Stable for both Azure Public and Azure Gov.
 
Deprecated
 
In the dockerd command line, we long ago deprecated the --api-enable-cors flag in favor of --api-cors-header. We’re now removing --api-enable-cors entirely.
Ubuntu 12.04 “precise pangolin” has reached end of life, so it is no longer a supported OS for Docker. Later versions of Ubuntu are still supported.
 
What’s next
 
To find out more about these features and more:

Download the latest version of Docker CE
Check out the Docker Documentation
Play with these features on Play with Docker
Ask questions in our forums and in the Docker Community Slack
RSVP for the CE 17.06 Online Meetup on June 28th

Source: https://blog.docker.com/feed/

Creating enterprise grade BI models with Azure Analysis Services

In April we announced the general availability of Azure Analysis Services, which evolved from the proven analytics engine in Microsoft SQL Server Analysis Services. The success of any modern data-driven organization requires that information is available at the fingertips of every business user, not just IT professionals and data scientists, to guide their day-to-day decisions. Self-service BI tools have made huge strides in making data accessible to business users. However, most business users don’t have the expertise or desire to do the heavy lifting that is typically required, including finding the right sources of data, importing the raw data, transforming it into the right shape, and adding business logic and metrics, before they can explore the data to derive insights. With Azure Analysis Services, a BI professional can create a semantic model over the raw data and share it with business users so that all they need to do is connect to the model from any BI tool and immediately explore the data and gain insights. Azure Analysis Services uses a highly optimized in-memory engine to provide responses to user queries at the speed of thought.

In this video, Christian Wade demonstrates how you can leverage Azure Analysis Services to build enterprise-grade BI models. You will learn how to import Power BI Desktop files using the new web designer (coming soon) and how to use other tools like SQL Server Data Tools (SSDT) and BISM Normalizer.

Learn more about Azure Analysis Services.
Source: Azure

Google App Engine standard now supports Java 8

By Amir Rouzrokh, Product Manager

Java 8 support has been one of the top requests from the App Engine developer community. Today, we’re excited to announce the beta availability of Java 8 on App Engine standard environment. Supporting Java 8 on App Engine standard environment is a significant milestone. In addition to support for an updated JDK and Jetty 9 with Servlet 3.1 specs, this launch enables enhanced application performance. Further, this release improves the developer experience with full gRPC and Google Cloud Java Library support, and we have finally removed the class whitelist.

App Engine standard now fully supports off-the-shelf frameworks such as Spring Boot and alternative languages like Kotlin or Apache Groovy. At the same time, the new runtime environment still provides all the great benefits developers have come to depend on and love about App Engine standard, including rapid deployments in seconds, near instantaneous scale up and scale down (including to zero instances when no traffic is detected), native microservices and versioning support, traffic splitting between any two languages (including Java 7 and Java 8), local development tooling and App Engine APIs.

Developer tooling is critical to the Java community. The new runtime supports Stackdriver, Cloud SDK, Maven, Gradle, IntelliJ and Eclipse plugins. In particular, the IntelliJ and Eclipse plugins provide a modern experience optimized for developer flow. Watch the Google Cloud Next 2017 session “Power your Java Workloads on Google Cloud Platform” to learn more about the new IntelliJ plugin, Stackdriver Debugger, traffic splitting, auto scaling and other App Engine features.

As always, developers can choose between App Engine standard and flexible environments — deploy your application to one environment now, and another environment later. Or deploy to both simultaneously, mixing and matching environments as well as languages. (Here’s a guide on how to choose between App Engine flexible and standard environments.)

Below is a one-minute video that demonstrates how easy it is to deploy your first application to App Engine.

To get started with Java 8 for App Engine standard, follow this quickstart. Or, if you’re a current App Engine standard Java 7 user, upgrade to the new runtime by adding java8 to your appengine-web.xml file, as described in the video above. Be sure to deploy it as a new version of your service, direct a small portion of your traffic to the new version, and monitor for errors.
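The runtime switch itself is a one-line change in appengine-web.xml; a minimal sketch of the file (the threadsafe element is optional and shown only for illustration) looks like this:

<appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
  <!-- Opt in to the Java 8 runtime -->
  <runtime>java8</runtime>
  <threadsafe>true</threadsafe>
</appengine-web-app>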

You can find samples of all the code in the documentation here. For sample applications running Kotlin, Spring-Boot and SparkJava, check out this repository.

We’ve been investing heavily in language and infrastructure updates for both App Engine environments (we recently announced the general availability of Java 8 on App Engine flexible and Python upgrades), with many more to come. We’d love to hear from you during the Java 8 beta period and beyond. Submit your feedback on the Maven, Gradle, IntelliJ and Eclipse plugins, as well as the Google Cloud Java Libraries on their respective GitHub repositories.

Happy Coding!
Source: Google Cloud Platform

Gain business insights using Power BI reports for Azure Backup

Azure Backup announced support for alerting and monitoring in August 2016. Taking it a step further, we are excited to announce the preview of Azure Backup Reports using Power BI. Azure Backup Reports provide the ability to gauge backup health, view restore trends, and understand storage usage patterns across subscriptions and across vaults. More importantly, this feature gives you full control to generate your own reports and build customizations using Power BI.

Key Benefits

This feature provides the following capabilities and gives customers complete control over building reports:

Cloud-based reports – You do not need to set up a reporting server, database, or any other infrastructure, since everything is managed in the cloud. All you need is a storage account and a Power BI subscription. The Power BI free tier supports reports for backup reporting data under 1 GB per user.
Cross-subscription and cross-vault reports – You can view data across subscriptions and vaults to get a big-picture view, track organizational SLAs, meet compliance requirements across departments, and more.
Open data model –  You can now create your own reports and customize existing reports since the Azure Backup management data model is publicly available.
Data visualization –  You can take advantage of Power BI’s data visualization capabilities to perform business analytics and share rich data insights.
Access control – Power BI also provides the capability to create organizational content packs, which can be used for sharing selected reports inside the organization and restricting access to reports based on their requirements.
Export to Event Hub and Log Analytics – Besides the ability to export the data to storage account and connect to it using Azure Backup content pack in Power BI, you can also export reporting data to Event Hub and Log Analytics to leverage it in OMS and other tools for further analysis.

Analyzing data using Power BI

You can configure Azure Backup reports from a Recovery Services vault and import the Azure Backup content pack in just a few steps. Once done, use the Azure Backup dashboard in Power BI to create new reports and customize existing ones using filters like vault, protected server, backup items, and time.

1. Storage

Storage reports track backup storage and protected instances over time. You can answer the following business questions using these reports:

Which protected servers had the most protected instances last month?
Which protected servers use the most backup storage and have the highest impact on billing?

2. Job Health

Job Health reports provide trends for backups and restores impacted by job failures, the causes of those failures, and statistics about failed and successful jobs. You can answer the following business questions using these reports:

Was the backup job failure rate higher than 10% last week?
What were the top causes of job failures yesterday?
Which backup items were impacted by these job failures?

In the image below, Backup has been selected in the Distribution of Failed Jobs donut chart; this updates all other visuals on this tab with data related to backup failures.

3. Backup Items

Backup Item reports provide details about backup schedule, data transferred, failure percentage, and last successful backup time. You can answer the following business questions using these reports:

Which backup items had the most data transferred last week?
Are these items backed up during the same time window each day?
Which backup items had no successful backups yesterday?

4. Job Duration

Job Duration reports provide insights into backup and restore job duration. You can answer the following business questions using these reports:

Which virtual machines had the longest-running jobs last week?
Which folders take the longest time to restore?

5. Alerts

Alert reports provide the distribution of active alerts, the backup items that generate these alerts, and trends in alert resolution time. You can answer the following business questions using these reports:

Which data sources generated the most critical alerts in the last week?
What is the trend in alert resolution time over the last few months?
Has the critical alert count decreased after the latest version update?

Related links and additional content

Getting started with Azure Backup Reports.
View the Azure Backup data model to create custom reports.
New to Azure Backup? Sign up for a free Azure trial subscription.
Need help? Reach out to the Azure Backup forum for support.
Tell us how we can improve Azure Backup by contributing new ideas and voting up existing ones.
Follow us on Twitter @AzureBackup for the latest news and updates.

Source: Azure

Azure Site Recovery now supports Ubuntu

Azure Site Recovery makes business continuity accessible for all your IT applications by letting you use Azure as your recovery site. You pay only for the resources you consume, eliminating the need for upfront capital investment in a recovery location or recovery resources.

We recognize our customers’ need for flexibility in the choice of platforms and application stacks they use. That is why Azure Site Recovery supports a wide variety of platforms and operating systems, and we’ve now added support for another very popular Linux distribution. Azure Site Recovery now supports disaster recovery and migration to Azure for servers running Ubuntu on Azure virtual machines or in a VMware virtualized environment. Azure Site Recovery currently supports disaster recovery and migration to Azure for applications on Ubuntu Server 14.04 LTS.

Let’s see how easy it is to achieve business continuity objectives for your Ubuntu workloads in the context of the fictional Bellows College.

A business continuity plan for Bellows College

Bellows College’s Moodle learning management system (LMS) is configured in a standard two-tier deployment, with a web server and a MySQL database running on VMware virtual machines with Ubuntu Server 14.04 LTS.

Last year, a faulty surge protector in their datacenter caused an outage of the learning management system. Bellows College’s application and infrastructure administrators scrambled to bring the system back up on an alternate storage unit by restoring data from their database backup. This experience taught them a costly lesson and left them with the realization that periodic backups are not a replacement for a business continuity plan.

Realizing they needed a reliable business continuity plan, Bellows College’s CIO decided to use Azure Site Recovery. Going to Azure was an easy choice for them, as they were already planning on migrating some of their applications to Azure to consolidate their datacenter costs.

With a few simple steps, Bellows College set up Azure Site Recovery and protected their learning management system to Azure.

Bellows College built a recovery plan to sequence the order in which the various application tiers are brought up during a failover. For example, they specified that the database tier would be brought up before the web tier so that the web server could start serving requests immediately post failover. Within the recovery plan, Bellows College used Azure Automation runbooks to automate some of the common post-failover steps, like assigning an IP address to the failed over web server. By using automation, they were able to achieve a better RTO by avoiding the need to perform this step manually.

With their Moodle servers protected and the recovery plan set up, it was time to test it. They did this using the test failover feature of ASR, which lets them fail over their applications without impacting production workloads or end users.

The test failover brought the application up in a test network in Azure with all the latest changes, and let them connect to the application in the test environment and validate that the application was working in a few minutes.

Being able to test the failover of the application to Azure without impacting production gave Bellows College the confidence that their business continuity plan gives them the necessary protection from unplanned events.

Having experienced how simple and cost-effective it is to use Azure Site Recovery to achieve business continuity, Bellows College is now planning to onboard some of their other supporting applications running on Ubuntu.

Azure Site Recovery is an all-encompassing service for your migration and disaster recovery needs. Our mission is to democratize disaster recovery with the power of Microsoft Azure so that you have a disaster recovery plan that covers all of your organization's IT applications.

Check out the list of configurations supported with ASR and get started today.
Source: Azure