Five habits of highly effective capital markets firms that run in the cloud

Every time I meet with our customers in the capital markets, they share new ways they are reinventing their businesses. Recently, I met with a CIO from a large investment bank looking to take the next step in the bank's cloud adoption journey. We talked about everything from creating a plan for migrating mission-critical workloads to the public cloud and communicating it to regional regulators, to developing a roadmap for adopting engineering-driven software operations methodologies across the organization. The CIO repeatedly emphasized the bank's collective commitment to creating a culture of innovation. What would it take to achieve this transformation?

IT leaders across capital markets are asking the same question. Google Cloud recently contracted Aite Group, an independent research and advisory firm focused on business, technology, and regulatory issues and their impact on the financial services industry. Aite surveyed 19 capital markets firms about their public cloud adoption journeys. Here are valuable insights into what these firms do to bring about transformative change:

1. They learn from the tech industry.

Technology is becoming ever more vital to non-tech companies, but innovation can stall if you don't fundamentally change how you build software. Successful capital markets firms have taken cues from tech companies, adopting software operations methodologies such as continuous integration and continuous delivery (CI/CD), code reviews, unit and integration testing, incremental rollout, blameless post-mortems, and more. These practices accelerate ROI and support innovation, and are a significant reason why the tech industry builds software more effectively than other industries.
Even though following these practices may slow new code development in the short term, it significantly reduces time spent on code maintenance down the road, freeing developers to innovate.

Most importantly, innovative capital markets firms adopt a "lifelong learning" attitude within the organization, emphasizing "training first" to reduce ramp-up times and respond to a fast-changing capital markets environment. They recognize that every employee can be a cloud worker, connected 24/7, and their security and workplace policies support this reality.

2. They foster a front-office culture of "everyone is a programmer" and bring AI to the middle and back office.

By democratizing the ability to build solutions across the business rather than isolating those capabilities in innovation labs, firms can build better products for their clients, especially because code is easier to follow, audit, and test than traditional tools such as spreadsheets. The front office may finally become less wedded to management via spreadsheet if the tools are more fit for purpose.

In the middle and back office, machine learning (ML) and artificial intelligence (AI) may bring much-needed relief in areas such as trade surveillance, where sophisticated malicious attacks make identifying breaches increasingly challenging. Moving from a rules-based review of electronic communications and compliance data to natural language processing refines the results and allows firms to integrate electronic communications flags more seamlessly into the overall surveillance infrastructure. Similarly, cybersecurity could benefit from more comprehensive and proactive activity monitoring using ML- and AI-based tools.

3. They use data openly, with strong controls and security.

One CIO at a tier-1 global bank predicts that in the future, regulations such as GDPR will require data access to be granted by the end client, whether a retail investor or a large pension fund.
Storing data in a manner where access can be granted or revoked easily by users across service providers, from large custodians through small service providers, will be essential to retaining business moving forward. Cloud-based services that incorporate tools for data loss prevention, obfuscation, tokenization, encryption, and logging can help firms meet the security, privacy, and data lineage requirements of emerging data-related regulations and user preferences.

4. They adopt production ML systems.

There's more to ML than implementing an algorithm. Production ML systems equipped for data collection, verification, machine resource management, analysis, and other functions enable firms to improve monitoring, prediction scaling, error diagnosis, reporting, and other tasks that support trading operations. For example, a proprietary trading firm in Singapore uses TensorFlow, an open-source machine learning library for numerical computation, together with the Google Cloud Bigtable NoSQL database service, to "listen" to live market data and make trading decisions.

5. They commit to open-source code with serverless applications.

Using open-source code rather than starting every software project from scratch speeds up innovation, provides tighter security, and offers freedom from vendor lock-in. Publicly sharing changes to open-source software also permits a richness of thought and a continuous feedback loop with users. Numerous capital markets firms have begun to champion open-source development and participate in related industry groups, such as the Fintech Open Source Foundation (FINOS).

To learn more about how these innovators are transforming their firms for greater efficiency and competitive differentiation using cloud-based thinking, check out our latest white paper, "Cloud as an Innovation Platform in Capital Markets."
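The "listen to live market data and make trading decisions" pattern described under production ML systems can be sketched, in highly simplified form, as a streaming signal computation. The class below is a hypothetical stand-in: a real system would read ticks from a store such as Cloud Bigtable and feed features to a TensorFlow model rather than use a hand-written moving-average rule.

```python
from collections import deque


class MovingAverageSignal:
    """Hypothetical sketch: emit 'buy'/'sell'/'hold' when a short moving
    average crosses a long one. Stands in for the trained model a real
    production ML trading system would call."""

    def __init__(self, short: int = 5, long: int = 20):
        self.short_win = deque(maxlen=short)
        self.long_win = deque(maxlen=long)

    def on_tick(self, price: float) -> str:
        # In production, each tick would arrive from a live market-data
        # feed persisted in a database such as Cloud Bigtable.
        self.short_win.append(price)
        self.long_win.append(price)
        if len(self.long_win) < self.long_win.maxlen:
            return "hold"  # not enough history yet
        short_ma = sum(self.short_win) / len(self.short_win)
        long_ma = sum(self.long_win) / len(self.long_win)
        if short_ma > long_ma:
            return "buy"
        if short_ma < long_ma:
            return "sell"
        return "hold"
```

The point of the production-ML habit is everything around such a signal: data verification, resource management, monitoring, and error diagnosis, which is where most of the engineering effort goes.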
Source: Google Cloud Platform

Microsoft and NVIDIA bring GPU-accelerated machine learning to more developers

With ever-increasing data volume and latency requirements, GPUs have become an indispensable tool for doing machine learning (ML) at scale. This week, we are excited to announce two integrations that Microsoft and NVIDIA have built together to unlock industry-leading GPU acceleration for more developers and data scientists.

Azure Machine Learning service is the first major cloud ML service to integrate RAPIDS, an open source software library from NVIDIA that allows traditional machine learning practitioners to easily accelerate their pipelines with NVIDIA GPUs.
ONNX Runtime has integrated the NVIDIA TensorRT acceleration library, enabling deep learning practitioners to achieve lightning-fast inferencing regardless of their choice of framework.

These integrations build on an already-rich infusion of NVIDIA GPU technology on Azure to speed up the entire ML pipeline.

“NVIDIA and Microsoft are committed to accelerating the end-to-end data science pipeline for developers and data scientists regardless of their choice of framework,” says Kari Briski, Senior Director of Product Management for Accelerated Computing Software at NVIDIA. “By integrating NVIDIA TensorRT with ONNX Runtime and RAPIDS with Azure Machine Learning service, we’ve made it easier for machine learning practitioners to leverage NVIDIA GPUs across their data science workflows.”

Azure Machine Learning service integration with NVIDIA RAPIDS

Azure Machine Learning service is the first major cloud ML service to integrate RAPIDS, providing up to 20x speedup for traditional machine learning pipelines. RAPIDS is a suite of libraries built on NVIDIA CUDA for doing GPU-accelerated machine learning, enabling faster data preparation and model training. RAPIDS dramatically accelerates common data science tasks by leveraging the power of NVIDIA GPUs.

Exposed on Azure Machine Learning service as a simple Jupyter notebook, RAPIDS uses NVIDIA CUDA for high-performance GPU execution, exposing GPU parallelism and high memory bandwidth through a user-friendly Python interface. It includes a DataFrame library called cuDF, which will be familiar to pandas users, as well as an ML library called cuML that provides GPU-accelerated versions of many of the machine learning algorithms available in scikit-learn. And with Dask, RAPIDS can take advantage of multi-node, multi-GPU configurations on Azure.
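Because cuDF deliberately mirrors the pandas API, a pipeline can be prototyped on CPU with pandas and moved to a RAPIDS GPU node largely by changing the import. The sketch below runs with plain pandas; the column names and data are illustrative, and the speedup claim is RAPIDS's, not something this toy frame would show.

```python
import pandas as pd  # on a RAPIDS GPU node, `import cudf as pd` is often the main change

# Illustrative trade data; in practice this would come from read_csv/read_parquet.
trades = pd.DataFrame({
    "symbol": ["AAPL", "MSFT", "AAPL", "MSFT"],
    "qty": [100, 50, 200, 25],
    "price": [150.0, 250.0, 151.0, 251.0],
})

# Vectorized feature engineering plus a groupby aggregation: the same
# code pattern is what cuDF accelerates on GPU for large frames.
trades["notional"] = trades["qty"] * trades["price"]
per_symbol = trades.groupby("symbol")["notional"].sum().sort_index()
print(per_symbol)
```

This API parity is what lets existing pandas and scikit-learn users adopt RAPIDS without rewriting their pipelines from scratch.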

Learn more about RAPIDS on Azure Machine Learning service or attend the RAPIDS on Azure session at NVIDIA GTC.

ONNX Runtime integration with NVIDIA TensorRT in preview

We are excited to open source the preview of the NVIDIA TensorRT execution provider in ONNX Runtime. With this release, we are taking another step towards open and interoperable AI by enabling developers to easily leverage industry-leading GPU acceleration regardless of their choice of framework. Developers can now tap into the power of TensorRT through ONNX Runtime to accelerate inferencing of ONNX models, which can be exported or converted from PyTorch, TensorFlow, MXNet and many other popular frameworks. Today, ONNX Runtime powers core scenarios that serve billions of users in Bing, Office, and more.
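ONNX Runtime chooses among registered execution providers in priority order, falling back to the next one when a provider is unavailable or cannot handle a node. The helper below is an illustrative plain-Python sketch of that selection logic (the provider name strings are the real ONNX Runtime identifiers; the function itself is not part of the library).

```python
# Illustrative sketch of ONNX Runtime's execution-provider fallback:
# the session uses the first requested provider that is actually
# available on the machine, walking down the priority list.
PREFERRED_PROVIDERS = [
    "TensorrtExecutionProvider",  # NVIDIA TensorRT acceleration
    "CUDAExecutionProvider",      # generic CUDA GPU acceleration
    "CPUExecutionProvider",       # always-available fallback
]


def pick_provider(available: list) -> str:
    """Return the highest-priority provider present in `available`."""
    for provider in PREFERRED_PROVIDERS:
        if provider in available:
            return provider
    raise RuntimeError("no usable execution provider")
```

With the real library installed, `onnxruntime.get_available_providers()` reports what a given build supports, and `onnxruntime.InferenceSession(model_path, providers=PREFERRED_PROVIDERS)` applies the same priority-ordered fallback when running a model.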

With the TensorRT execution provider, ONNX Runtime delivers better inferencing performance on the same hardware compared to generic GPU acceleration. We have seen up to 2X improved performance using the TensorRT execution provider on internal workloads from Bing MultiMedia services.

To learn more, check out our in-depth blog on the ONNX Runtime and TensorRT integration or attend the ONNX session at NVIDIA GTC.

Accelerating machine learning for all

Our collaboration with NVIDIA marks another milestone in our mission to help developers and data scientists deliver innovation faster. We are committed to accelerating the productivity of all machine learning practitioners regardless of their choice of framework, tool, and application. We hope these new integrations make it easier to drive AI innovation, and we strongly encourage the community to try them out. We look forward to your feedback!
Source: Azure

Microsoft Azure for the Gaming Industry

This blog post was co-authored by Patrick Mendenall, Principal Program Manager, Azure. 

We are excited to join the Game Developers Conference (GDC) this week to learn what’s new and share our work in Azure focused on enabling modern, global games via cloud and cloud-native technologies.

Cloud computing is increasingly important for today's global gaming ecosystem, empowering developers of any size to reach gamers in any part of the world. Azure's 54 datacenter regions and robust global network provide globally available, high-performance services, as well as a platform that is secure, reliable, and scalable to meet current and emerging infrastructure needs. For example, earlier this month we announced the availability of Azure South Africa regions. Azure services enable every phase of the game development lifecycle, from designing, building, and testing through publishing, monetizing, measuring, engaging, and growing, providing:

Compute: Gaming services rely on a robust, reliable, and scalable compute platform. Azure customers can choose from a range of compute- and memory-optimized Linux and Windows VMs to run their workloads, services, and servers, including auto-scaling, microservices, and functions for modern, cloud-native games.
Data: The cloud is changing the way applications are designed, including how data is processed and stored. Azure provides high availability, global data, and analytics solutions based on both relational databases as well as big data solutions.
Networking: Azure operates one of the largest dedicated long-haul network infrastructures worldwide, with over 70,000 miles of fiber and subsea cable and more than 130 edge sites. Azure offers customizable networking options to allow for fast, scalable, and secure network connectivity between customer premises and global Azure regions.
Scalability: Azure offers nearly unlimited scalability. Given the cyclical usage patterns of many games, using Azure enables organizations to rapidly increase and/or decrease the number of cores needed, while only having to pay for the resources that are used.
Security: Azure offers a wide array of security tools and capabilities, to enable customers to secure their platform, maintain privacy and controls, meet compliance requirements (including GDPR), and ensure transparency.
Global presence: Azure has more regions globally than any other cloud provider, offering the scale needed to bring games and data closer to users around the world, preserving data residency, and providing comprehensive compliance and resiliency options for customers. Using Azure’s footprint, the cost, the time, and the complexity of operating a game at global scale can be reduced.
Open: With Azure, you can use the software you choose, whether operating systems, game engines, database solutions, or open-source stacks, and run it on Azure.
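The scalability point above, paying only for the cores in use as player counts cycle through the day, can be sketched as a simple sizing rule. The thresholds and per-server capacity below are hypothetical; a real deployment would express this as autoscale rules on an Azure virtual machine scale set rather than hand-rolled logic.

```python
import math


def servers_needed(concurrent_players: int,
                   players_per_server: int = 100,
                   min_servers: int = 2) -> int:
    """Hypothetical fleet-sizing rule: capacity for current players plus
    20% headroom for a login spike, never below a warm-standby floor.
    Integer math keeps the rule deterministic."""
    target = concurrent_players + concurrent_players // 5  # +20% headroom
    return max(min_servers, math.ceil(target / players_per_server))
```

Evaluating this rule periodically against live player counts, and scaling the fleet up or down to match, is the pattern that lets a game pay for peak capacity only while the peak actually lasts.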

We’re also excited to bring PlayFab into the Azure family. Together, Azure and PlayFab are a powerful combination for game developers. Azure brings reliability, global scale, and enterprise-level security, while PlayFab provides Game Stack with managed game services, real-time analytics, and comprehensive LiveOps capabilities.

We look forward to meeting many of you at GDC 2019 to learn about your ideas in gaming, discussing where cloud and cloud-native technologies can enable your vision, and sharing more details on Azure for gaming. Join us at the conference or contact our gaming industry team at azuregaming@microsoft.com.

Details on all of these are available via the links below.

Learn more about Microsoft Game Stack.
Talks at GDC:

Thursday, March 21, 2019 at 11:30 AM: Best Practices for Building Resilient, Scalable, Game Services in Microsoft Azure
Thursday, March 21, 2019 at 12:45 PM: Save Time for Creativity: Unlocking the Potential for Your Game's Data with Microsoft Azure

Azure Gaming Reference Architectures: Landing Page

Multiplayer/Game Servers
Analytics
Leaderboards
Cognitive Services

GDC Booth demos for Azure:

AI Training with Containers – Use Azure and Kubernetes to power Unity ML Agents
Game Telemetry – Build better game balance and design
Build NoSQL Data Platforms – Azure Cosmos DB: a globally distributed, massively scalable NoSQL database service
Cross Realms with SQL – Build powerful databases with Azure SQL

Source: Azure

Trello: Free accounts now limited to 10 team boards

The latest update to the organization software Trello, an Atlassian subsidiary, brings some restrictions for free accounts: they can now use only 10 team boards at a time. Paying customers, in turn, receive many new features, such as better administrator controls and a bot. (Atlassian, Applications)
Source: Golem