Introducing the AWS Service Delivery Program

Today, the AWS Partner Network (APN) team announced the launch of the AWS Service Delivery Program, a new program to help AWS customers quickly locate APN Partners with proven expertise delivering specific AWS services such as Amazon Aurora or AWS Lambda. Attaining an AWS Service Delivery Distinction allows APN Partners to differentiate themselves by showcasing their areas of specialization to AWS customers.
Source: aws.amazon.com

The AWS Partner Network (APN) makes it easier to find the right APN Partner to engage through the Partner Solutions Finder

The AWS Partner Network (APN) is making it easier to explore the APN Partner ecosystem by launching a new site called the AWS Partner Solutions Finder. Visit the AWS Partner Solutions Finder to discover APN Partners by Use Case, Industry, Product, or Location. Interested in learning more about a specific APN Partner? You can now search APN Partners of any tier within the APN. The Partner Solutions Finder gives you more visibility into APN Partners who are members of APN Programs including the AWS Competency, Service Delivery, and Managed Service Provider Programs. The new Partner Profile pages offer a new user experience that enables you to explore APN Partners either at a glance, or in detail. On the Partner Profile page, you can view the APN Partner’s skillset (validated by AWS), partner solutions and/or AWS-created case studies, and office locations. How can you engage with an APN Partner you identify through the Partner Solutions Finder? Simply click “connect” on the Partner Profile page, and the APN Partner will receive your inquiry.

Interested in accelerating your AWS Business? Find APN Partner Solutions today.
Source: aws.amazon.com

Maven Multi-Module Projects and OpenShift

There is no need to move away from Maven’s multi-module approach to building and deploying applications when working with OpenShift, if that is a process you’re familiar with. It can become a powerful tool for helping break apart existing applications into more consumable microservices, as it goes some way towards enabling each component to have its own lifecycle, regardless of how the source code repository is managed. Sometimes it may require a little bit of customisation to give you the behaviour you need, and hopefully this post will give you some insight into how that customisation is achieved.
Source: OpenShift

One PowerShell cmdlet to manage both Windows and Linux resources — no kidding!

Posted by Quoc Truong, Software Engineer

If you’re managing Google Cloud Platform (GCP) resources from the command line on Windows, chances are you’re using our Cloud Tools for PowerShell. Thanks to PowerShell’s powerful scripting environment, including its ability to pipeline objects, you can efficiently author complex scripts to automate and manipulate your GCP resources.

However, PowerShell has historically only been available on Windows. So even if you had an uber-sophisticated PowerShell script to set up and monitor multiple Google Compute Engine and Google Cloud SQL instances, you would have had to rewrite it in bash to run it on Linux!

Fortunately, Microsoft recently released an alpha version of PowerShell that works on both OS X and Ubuntu, and we built a .NET Core version of our Tools on top of it. Thanks to that, you don’t have to rewrite your Google Cloud PowerShell scripts anymore just to make them work on Mac or Linux machines.

To preview the bits, you’ll have to:

Install Google Cloud SDK and initialize it.
Install PowerShell.
Download and unzip Cross-Platform Cloud Tools for PowerShell bits.

Now, from your Linux or OS X terminal, check out the following commands:

# Fire up PowerShell.
powershell

# Import the Cloud Tools for PowerShell module on OS X.
PS > Import-Module ~/Downloads/osx.10.11-x64/Google.PowerShell.dll

# List all of the images in a GCS bucket.
Get-GcsObject -Bucket "quoct-photos" | Select Name, Size | Format-Table

If running GCP PowerShell cmdlets on Linux interests you, be sure to check out the post on how to run an ASP.NET Core app on Linux using Docker and Kubernetes. Because one thing is for certain — Google Cloud Platform is rapidly becoming a great place to run — and manage — Linux as well as Windows apps.

Happy scripting!
Source: Google Cloud Platform

Transitioning your StorSimple Virtual Array to the new Azure portal

Today we are announcing the availability of StorSimple Virtual Device Series in the new Azure portal. This release features significant improvement in the user experience. Our customers can now use the new Azure portal to manage the StorSimple Virtual Array configured as a NAS (SMB) or a SAN (iSCSI) in a remote office/branch office.

If you are using the StorSimple Virtual Device Series, you will be seamlessly transitioned to the new Azure portal with no downtime. We’ll reach out to you via email with the specific dates of the transition. After the transition is complete, you will no longer be able to manage your transitioned virtual array from the classic Azure portal.

If you are using StorSimple Physical Device Series, you can continue to manage your devices via the classic Azure portal.

Learn how to use the new Azure portal in just a few steps as detailed below.

Navigate the new Azure portal

Everything about the StorSimple Virtual Device Series experience in the new Azure portal is designed to be easy. In the new Azure portal, you will find your service listed as StorSimple Device Manager.

The Quick start gives a concise summary of how to set up a new virtual array. This is now available as an option in the left pane of your StorSimple Device Manager blade.

The StorSimple Device Manager service summary blade has been redesigned for simplicity. Use Overview in the left pane to navigate to your service summary.

Click on Devices in your summary blade to navigate to all the devices registered to your service. For specific monitoring requirements, you can even customize your dashboards.

Click on a device to go to the Device summary blade. Use the commands in the top menu bar to provision a new share, take a backup, fail over, deactivate, or delete a device. You can also right-click and use the context menu to perform the same operations.

The Jobs, Alerts, Backup catalogs, and Device Configuration blades have all been redesigned for ease of access.

For more information, go to the StorSimple product documentation. Visit the StorSimple MSDN forum to find answers, ask questions, and connect with the StorSimple community. Your feedback is important to us, so send your feedback or any feature requests using the StorSimple User Voice. And don’t worry – if you need any assistance, Microsoft Support is there to help you along the way!
Source: Azure

From “A PC on every desktop” to “Deep Learning in every software”

Deep learning is behind many recent breakthroughs in Artificial Intelligence, including speech recognition, language understanding and computer vision. At Microsoft, it is changing the customer experience in many of our applications and services, including Cortana, Bing, Office 365, SwiftKey, Skype Translate, Dynamics 365, and HoloLens. Deep learning-based language translation in Skype was recently named one of the 7 greatest software innovations of the year by Popular Science, and the technology helped us achieve human-level parity in conversational speech recognition. Deep learning is now a core feature of development platforms such as the Microsoft Cognitive Toolkit, Cortana Intelligence Suite, Microsoft Cognitive Services, Azure Machine Learning, Bot Framework, and the Azure Bot Service. I believe that the applications of this technology are so far-reaching that “Deep Learning in every software” will be a reality within this decade.

We’re working very hard to empower developers with AI and Deep Learning so that they can make smarter products and solve some of the most challenging computing tasks. By vigorously improving our algorithms and infrastructure, collaborating closely with partners like NVIDIA and OpenAI, and harnessing the power of GPU-accelerated systems, we’re making Microsoft Azure the fastest, most versatile AI platform – a truly intelligent cloud.

Production-Ready Deep Learning Toolkit for Anyone

The Microsoft Cognitive Toolkit (formerly CNTK) is our open-source, cross-platform toolkit for learning and evaluating deep neural networks. The Cognitive Toolkit expresses arbitrary neural networks by composing simple building blocks into complex computational networks, supporting all relevant network types and applications. With state-of-the-art accuracy and efficiency, it scales to multi-GPU/multi-server environments. According to both internal and external benchmarks, the Cognitive Toolkit continues to outperform other Deep Learning frameworks in most tests, and, unsurprisingly, the latest version is faster than previous releases, especially on very large data sets and on Pascal GPUs from NVIDIA. That is true for single-GPU performance, but what really matters is that the Cognitive Toolkit can already scale up to a massive number of GPUs. In the latest release, we’ve extended the Cognitive Toolkit to natively support Python in addition to C++. Furthermore, the Cognitive Toolkit now also allows developers to use reinforcement learning to train their models. Finally, the Cognitive Toolkit isn’t bound to the cloud in any way. You can train models in the cloud but run them on premises or with other hosters. Our goal is to empower anyone to take advantage of this powerful technology.
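
To make that composability concrete, here is a minimal training sketch using the toolkit’s Python API (assuming CNTK 2.x; the layer sizes, learning rate and synthetic data are illustrative assumptions, not code taken from the toolkit’s documentation):

# A minimal CNTK 2.x sketch: compose Dense building blocks into a network,
# then train it on synthetic data. All sizes and hyperparameters are assumptions.
import numpy as np
import cntk as C

features = C.input_variable(2)
labels = C.input_variable(2)

model = C.layers.Sequential([
    C.layers.Dense(64, activation=C.relu),
    C.layers.Dense(2)
])
z = model(features)

loss = C.cross_entropy_with_softmax(z, labels)
error = C.classification_error(z, labels)

learner = C.sgd(z.parameters, lr=C.learning_rate_schedule(0.1, C.UnitType.minibatch))
trainer = C.Trainer(z, (loss, error), [learner])

# Train on a synthetic two-class problem: the class is the sign of the feature sum.
for _ in range(500):
    x = np.random.randn(32, 2).astype(np.float32)
    y = np.eye(2, dtype=np.float32)[(x.sum(axis=1) > 0).astype(int)]
    trainer.train_minibatch({features: x, labels: y})

The same script runs on a CPU or a GPU, and the resulting model can be saved and evaluated wherever the toolkit is installed, in the cloud or on premises.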

To quickly get up to speed on the Toolkit, we’ve published Azure Notebooks with numerous tutorials and we’ve also assembled a DNN Model Gallery with dozens of code samples, recipes and tutorials across scenarios working with a variety of datasets: images, numeric, speech and text.

What Others Are Saying

In the “Benchmarking State-of-the-Art Deep Learning Software Tools” paper published in September 2016, academic researchers ran a comparative study of state-of-the-art GPU-accelerated deep learning software tools, including Caffe, the Cognitive Toolkit (CNTK), TensorFlow, and Torch. They benchmarked the running performance of these tools with three popular types of neural networks on two CPU platforms and three GPU platforms. Our Cognitive Toolkit outperformed the other deep learning toolkits on nearly every workload.

Furthermore, NVIDIA recently ran a benchmark comparing all the popular Deep Learning toolkits on their latest hardware. The results show that the Cognitive Toolkit trains and evaluates deep learning algorithms faster than the other available toolkits, scaling efficiently in a range of environments—from a CPU, to GPUs, to multiple machines—while maintaining accuracy. Specifically, it’s 1.7 times faster than our previous release and 3x faster than TensorFlow on Pascal GPUs (as presented at the SuperComputing’16 conference).

End users of deep learning software tools can use these benchmarking results as a guide to selecting appropriate hardware platforms and software tools. For developers of deep learning software tools, the in-depth analysis also points out possible directions for further performance optimization.

Real-world Deep Learning Workloads

We at Microsoft use Deep Learning and the Cognitive Toolkit in many of our internal services, ranging from digital agents to core infrastructure in Azure.

1. Agents (Cortana): Cortana is a digital agent that knows who you are and knows your work and life preferences across all your devices. Cortana has more than 133 million users and has intelligently answered more than 12 billion questions. From speech recognition to computer vision, these capabilities in Cortana are powered by Deep Learning and the Cognitive Toolkit. We have recently made a major breakthrough in speech recognition, creating a technology that recognizes the words in a conversation and makes the same or fewer errors than professional transcriptionists. The researchers reported a word error rate (WER) of 5.9 percent, down from 6.3 percent, the lowest error rate ever recorded against the industry-standard Switchboard speech recognition task. Reaching human parity using Deep Learning is a truly historic achievement.

Our approach to image recognition also placed first in several major categories of the ImageNet and the Microsoft Common Objects in Context challenges. The DNNs built with our tools won first place in all three categories we competed in: classification, localization and detection. The system won by a strong margin because we were able to accurately train extremely deep neural networks of 152 layers, far deeper than in the past, using a new “residual learning” principle. Residual learning reformulates the learning procedure and redirects the information flow in deep neural networks, which helped solve the accuracy problem that has traditionally dogged attempts to build extremely deep neural networks.
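
To illustrate the residual learning idea, here is a small sketch (again assuming the CNTK 2.x Python API; the filter counts and image size are arbitrary assumptions): each block adds its input back to the output of its convolutions through an identity shortcut, so the stacked layers only have to learn a correction to the signal flowing through that shortcut.

# A hedged sketch of a residual block; all sizes are illustrative assumptions.
import cntk as C

def residual_block(h, num_filters):
    # Two 3x3 convolutions; the block's input is added back to their output
    # (an identity shortcut), so the layers learn only a residual.
    c1 = C.layers.Convolution2D((3, 3), num_filters, activation=C.relu, pad=True)(h)
    c2 = C.layers.Convolution2D((3, 3), num_filters, activation=None, pad=True)(c1)
    return C.relu(c2 + h)

# Stack a few blocks on a small image input (channels x height x width).
x = C.input_variable((3, 32, 32))
h = C.layers.Convolution2D((3, 3), 16, activation=C.relu, pad=True)(x)
for _ in range(3):
    h = residual_block(h, 16)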

2. Applications: Our applications, from Office 365, Outlook, PowerPoint and Word to Dynamics 365, can use deep learning to provide new customer experiences. One excellent example of a deep learning application is the bot used by Microsoft Customer Support and Services. Using deep neural nets and the Cognitive Toolkit, it can intelligently understand the problems a customer is asking about and recommend the best solution to resolve them. The bot provides a quick self-service experience for many common customer problems and helps our technical staff focus on the harder and more challenging customer issues.

Another example of an application using Deep Learning is the Connected Drone application built for powerline inspection by one of our customers, eSmart Systems (to see the Connected Drone in action, please watch this video). eSmart Systems began developing the Connected Drone out of a strong conviction that drones combined with cloud intelligence could bring great efficiencies to the power industry. The objective of the Connected Drone is to support and automate the inspection and monitoring of power grid infrastructure, replacing the expensive, risky, and extremely time-consuming inspections currently performed by ground crews and helicopters. To do this, they use Deep Learning to analyze video data feeds streamed from the drones. Their analytics software recognizes individual objects, such as insulators on power poles, and directly links the new information with the component registry, so that inspectors can quickly become aware of potential problems. eSmart applies a range of deep learning technologies to analyze data from the Connected Drone, from the very deep Faster R-CNN to Single Shot Multibox Detectors and more.

3. Cloud Services (Cortana Intelligence Suite): On Azure, we offer a suite for Machine Learning and Advanced Analytics called the Cortana Intelligence Suite, which includes Cognitive Services (Vision, Speech, Language, Knowledge, Search, etc.), Bot Framework, Azure Machine Learning, Azure Data Lake, Azure SQL Data Warehouse and Power BI. You can use these services along with the Cognitive Toolkit or any other deep learning framework of your choice to deploy intelligent applications. For instance, you can now massively parallelize scoring using a pre-trained DNN machine learning model on an HDInsight Apache Spark cluster in Azure. We are seeing a growing number of scenarios that involve scoring pre-trained DNNs on a large number of images, such as our customer Liebherr, which runs DNNs to visually recognize objects inside a refrigerator. Developers can implement such a processing architecture with just a few steps (see instructions here).

A typical large-scale image scoring scenario may require very high I/O throughput and/or large file storage capacity, for which Azure Data Lake Store (ADLS) provides high-performance, scalable analytical storage. Furthermore, ADLS applies the data schema on read, which means the user does not have to worry about the schema until the data is needed. From the user’s perspective, ADLS functions like any other HDFS storage account through the supplied HDFS connector. Training can take place on an Azure N-Series NC24 GPU-enabled virtual machine or using recipes from Azure Batch Shipyard, which allows training of our DNNs with bare-metal GPU hardware acceleration in the public cloud using as many as four NVIDIA Tesla K80 GPUs. For scoring, one can use an HDInsight Spark cluster or Azure Data Lake Analytics to massively parallelize the scoring of a large collection of images with the rxExec function in Microsoft R Server (MRS), distributing the workload across the worker nodes. The scoring workload is orchestrated from a single instance of MRS, and each worker node can read and write data to ADLS independently, in parallel.
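
The scoring step above relies on rxExec in Microsoft R Server to fan work out across the Spark worker nodes. Purely as an illustration of that fan-out pattern, and not of the MRS code itself, here is a tiny local Python sketch with a dummy scoring function standing in for the pre-trained DNN (the paths and worker count are made up):

# Illustration only: distribute a per-image scoring function across worker processes.
from concurrent.futures import ProcessPoolExecutor

def score_image(path):
    # Stand-in for "evaluate a pre-trained DNN on one image"; a real worker would
    # load the model once and read the image from ADLS before scoring it.
    return path, hash(path) % 10  # dummy "class id" so the sketch runs end to end

if __name__ == "__main__":
    image_paths = ["adl://mystore/images/img_%04d.jpg" % i for i in range(100)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        for path, class_id in pool.map(score_image, image_paths):
            print(path, class_id)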

SQL Server, our premier database engine, is “becoming deep” as well. This is now possible for the first time with R and machine learning built into SQL Server. By pushing deep learning models inside SQL Server, our customers now get throughput, parallelism, security, reliability, compliance certifications and manageability, all in one. It’s a big win for data scientists and developers – you don’t have to separately build the management layer for operational deployment of ML models. Furthermore, just as data in databases can be shared across multiple applications, you can now share the deep learning models. Models and intelligence become “yet another type of data”, managed by SQL Server 2016. With these capabilities, developers can now build a new breed of applications that marry the latest transaction-processing advancements in databases with deep learning.

4. Infrastructure (Azure): Deep Learning requires a new breed of high-performance infrastructure that is able to support the computationally intensive nature of deep learning training. Azure now enables these scenarios with its N-Series virtual machines, powered by NVIDIA Tesla K80 GPUs that are best in class for single- and double-precision workloads in the public cloud today. These GPUs are exposed via a hardware pass-through mechanism called Discrete Device Assignment that allows us to provide near bare-metal performance. Additionally, as data grows for these workloads, data scientists need to distribute training not just across multiple GPUs in a single server, but across GPUs in a number of nodes. To enable this distributed learning across tens or hundreds of GPUs, Azure has invested in high-end networking infrastructure for the N-Series, using a Mellanox InfiniBand fabric that allows high-bandwidth communication between VMs with less than 2 microseconds of latency. This networking capability allows libraries such as Microsoft’s own Cognitive Toolkit (CNTK) to use MPI for communication between nodes and to efficiently train networks with a larger number of layers at great performance.

We are also working with NVIDIA on a best-in-class roadmap for Azure, with the current N-Series as the first iteration of that roadmap. These virtual machines are currently in preview, and we recently announced general availability of this offering starting on December 1.

It is easy to get started with deep learning on Azure. The Data Science Virtual Machine (DSVM) is available in the Azure Marketplace and comes pre-loaded with a range of deep learning frameworks and tools for Linux and Windows. To easily run many training jobs in parallel or launch a distributed job across more than one server, Azure Batch “Shipyard” templates are available for the top frameworks. Shipyard takes care of configuring the GPU and InfiniBand drivers, and uses Docker containers to set up your software environment.

Lastly, our team of engineers and researchers has created a system that uses a reprogrammable computer chip called a field-programmable gate array, or FPGA, to accelerate Bing and Azure. Utilizing FPGA chips, we can now write Deep Learning algorithms directly onto the hardware, instead of using potentially less efficient software as the middleman. What’s more, an FPGA can be reprogrammed at a moment’s notice to respond to new advances in AI/Deep Learning or to meet another type of unexpected need in a datacenter. Traditionally, engineers might wait two years or longer for hardware with different specifications to be designed and deployed. This is a moonshot project that has succeeded, and we are now bringing it to our customers.

Join Us in Shaping the Future of AI

Our focus on innovation in Deep Learning is across the entire stack of infrastructure, development tools, PaaS services and end user applications. Here are a few of the benefits our products bring:

Greater versatility: The Cognitive Toolkit lets customers use one framework to train models on premises with the NVIDIA DGX-1 or with NVIDIA GPU-based systems, and then run those models in the cloud on Azure. This scalable, hybrid approach lets enterprises rapidly prototype and deploy intelligent features.
Faster performance: When compared to running on CPUs, the GPU-accelerated Cognitive Toolkit performs deep learning training and inference much faster on NVIDIA GPUs available in Azure N-Series servers and on premises. For example, NVIDIA DGX-1 with Pascal and NVLink interconnect technology is 170x faster than CPU servers with the Cognitive Toolkit.
Wider availability: Azure N-Series virtual machines powered by NVIDIA GPUs are currently in preview to Azure customers, and will be generally available in December. Azure GPUs can be used to accelerate both training and model evaluation. With thousands of customers already part of the preview, businesses of all sizes are already running workloads on Tesla GPUs in Azure N-Series VMs.
Native integration with the entire data stack: We strongly believe in pushing intelligence close to where the data lives. While a few years ago running Deep Learning inside a database engine or a Big Data engine might have seemed like science fiction, it has now become real. You can run deep learning models on massive amounts of data, e.g., images, videos, speech and text, and you can do it in bulk. This is the sort of capability brought to you by Azure Data Lake, HDInsight and SQL Server. You can also now join the results of deep learning with any other type of data you have and do incredibly powerful analytics and intelligence over it (which we now call “Big Cognition”). It’s not just extracting one piece of cognitive information at a time, but rather joining and integrating all the extracted cognitive data with other types of data, so you can create seemingly magical “know-it-all” cognitive applications.

Let me invite all developers to come and join us in this exciting journey into AI applications.

@josephsirosh
Source: Azure

Digital Transformation with SAP HANA on Azure Large Instances

At Ignite 2016, Jason Zander announced a plethora of Azure services and features. One that has people excited is the announcement that SAP HANA on Azure (Large Instances) is generally available. For those who might have missed it in the blitz, here are the key things you should know:

Transform, migrate, innovate at your own pace: Azure has a purpose-built approach to providing organizations the benefits of the cloud for their SAP estate – both traditional deployments as well as SAP HANA OLAP and OLTP production deployments. Read more about the SAP certifications. You will always find something new.

No Compromises: The approach is to marry the benefits of bare-metal SKUs that are unencumbered by virtualization – the ability to scale and superior, consistent performance – while surpassing expectations on the availability, business continuity, and development and operational agility of the cloud.

The proof is in the pudding. After supporting the largest-scale SAP HANA deployments of up to 3 TB in October, we are now announcing general availability of an even larger scale – 4 TB scale-up and 32 TB scale-out – on Dec 1, 2016, proving that we will continue at this blistering pace. We are the first hyper-scale cloud vendor delivering Intel Broadwell-based solutions, scaling to 192 threads. For more information, read more about scale.

SAP HANA Large Instances offer an availability SLA of 99.99% for an HA pair, the highest among all hyper-scale public cloud vendors. These instances provide built-in infrastructure support for backup and restore, high availability (HA) and disaster recovery (DR) scenarios. Additionally, these instances have integrated support with partners, including SUSE Linux Enterprise, Red Hat Enterprise Linux and SAP, so you can confidently bring your production workloads to Azure.

Customer Confidence: We were pleasantly surprised at how aggressively customers like Coats plc are taking advantage of these capabilities and realizing results:

By moving SAP HANA to Azure we have been able to speed up planning cycles and accelerate delivery of finished goods to our customers.  We have also activated real time reporting to monitor and improve process productivity across our global supply chain.

Richard Cammish, Global CIO, Coats plc

The potential for using data in smarter ways to operate more efficiently, save money, and satisfy customers is immense.  Azure gives us integrated tools that let us fully interrogate and exploit our data.

Harold Groothedde, Chief Technology Officer, Coats plc

SAP Partnership: After seeing Satya Nadella and Bill McDermott on stage at Sapphire 2016, one does not need any more proof that this is a decades-long strategic partnership that is critical to enterprises. But more proof arrived within 60 days – SAP chose Azure as the cloud to run its fastest-growing and most exciting SAP HANA-based SaaS platform, the SuccessFactors HCM Suite.

Digital Transformation with Azure and SAP HANA: It is important not to lose sight of why all of this is so crucial in the first place. In a world defined by Uber, Netflix and online retail, it is clear to CEOs that their survival depends on digital transformation. And for the 200,000 organizations that manage their LOB applications with SAP, the ability to transform is gated by traversing two journeys, whose destinations are the cloud and SAP HANA. And since the two are inextricably related, they need a single strategic partner – one that is a premier public cloud vendor, has a long-term relationship with SAP, and understands how to work with enterprises.

After talking to a few CIOs and service managers, it was clear to me that they are faced with a series of conflicting demands, where neither choice made at the cost of the other is palatable. The diagram below is my consolidated view of that dilemma.

[Diagram: the conflicting demands CIOs face on their cloud and SAP HANA journeys]

Watch this 8-minute video, which provides more detail on how Microsoft Azure approaches this problem in a unique way so that you don’t have to make compromises, and explains why you should contact your account team to set up a design workshop. If you are not sure who to contact, visit request information and someone from Microsoft will reach out. To learn more technical details, visit Getting Started with SAP on Azure.

Source: Azure