Alphabet CEO Sundar Pichai has compared the potential impact of artificial intelligence (AI) to the impact of electricity, so it may be no surprise that at Google Cloud, we expect to see increased AI and machine learning (ML) momentum across the spectrum of users and use cases.

Some of this momentum is more foundational, such as the hundreds of academic citations that Google AI researchers earn each year, or products like Google Cloud Vertex AI accelerating ML development and experimentation by 5x, with 80% fewer lines of code required. Some of it is more concrete, like mortgage servicer Mr. Cooper using Google Cloud Document AI to process documents 75% faster with 40% cost savings; Ford leveraging Google Cloud AI services for predictive maintenance and other manufacturing modernizations; and customers across a wide range of industries deploying ML platforms atop Google Cloud.

Together, these proof points reflect our belief that AI is for everyone, and that it should be easy to harness in workflows of all kinds and for people of all levels of technical expertise. We see our customers’ accomplishments as validation of this philosophy and a sign that we are taking away the right things from our conversations with business leaders. Likewise, we see validation in recognition from analysts, including Google recently being named a Leader by Gartner® in the 2022 Magic Quadrant™ for Cloud AI Developer Services report, and by Forrester in The Forrester Wave™: AI Infrastructure, Q4 2021; The Forrester Wave™: Document-Oriented Text Analytics Platforms, Q2 2022; and The Forrester Wave™: People-Oriented Text Analytics Platforms, Q2 2022.

In June, we talked about four pillars that guide our approach to creating products for MLOps and to accelerating the development of ML models and their deployment into production.
In this article, we’ll look more broadly at our AI and ML philosophy, and what it means to create “AI for everyone.”

AI should be for everyone

One of the pillars we discussed in June was “meeting users where they are,” and this idea extends far beyond products for data scientists. Technical expertise should not be a barrier to implementing AI; otherwise, use cases where AI can help will languish without modernization, and enterprises without well-developed AI practices will risk falling behind their competitors. To this end, we focus on creating AI and ML services for all kinds of users, including:

- Document AI, Contact Center AI, and other solutions that inject AI and ML into business workflows without imposing heavy technical requirements or retraining on users
- Pre-trained APIs, ranging from Speech to Fleet Optimization, that let developers leverage pre-trained ML models and free them from having to develop core AI technologies from scratch
- BigQuery ML to unite data analysis tasks with ML
- AutoML for abstracted, low-code ML production that doesn’t require ML expertise
- Vertex AI to speed up ML experimentation and deployment, with every tool you need to build, deploy, and manage the lifecycle of ML projects
- AI Infrastructure options for training deep learning and machine learning models cost-effectively, including Deep Learning VMs optimized for data science and machine learning tasks, and AI accelerators for every use case, from low-cost inference to high-performance training

It’s important to provide not only leading tools for advanced AI practitioners, but also leading AI services for users of all kinds. Some of this involves abstracting or automating parts of the ML workflow to match the needs of the job and the technical aptitude of the user. Some of it involves integrating our AI and ML services with our broader range of enterprise products, whether that means smarter language models invisibly integrated into Google Docs or BigQuery making ML easily accessible to data analysts.
Regardless of any particular angle, AI is turning into a multi-faceted, pervasive technology for businesses and users the world over, so we feel technology providers should reflect this by building platforms that help users harness the power of AI by meeting them wherever they are.

How we’re powering the next generation of AI

Creating products that help bring AI to everyone requires large research investments, including in areas where the path to productization may not be clear for years. We feel that a foundation in research, combined with our focus on business needs and users, informs sustainable AI products that are in keeping with our AI principles and encourage responsible use of AI. Many of our recent updates to our AI and ML platforms began as Google research projects. Just consider how DeepMind’s breakthrough AlphaFold project has led to the ability to run protein prediction models in Vertex AI. Or how research into neural architecture search helped create Vertex AI NAS, which lets data science teams train models more accurately, with lower latency and power requirements.

Research is crucial, but it is also only one way of validating an AI strategy. Products have to speak for themselves when they reach customers, and customers need to see their feedback reflected as products are iterated and updated. This reinforces the importance of seeing customer adoption and success across a range of industries, use cases, and user types. In this regard, we feel very fortunate to work with so many great customers, and very proud of the work we help them accomplish. I’ve already mentioned Ford and Mr. Cooper, but those are just a small sampling. For example, Vodafone Commercial’s “AI Booster” platform uses the latest Google technology to enable cutting-edge AI use cases such as optimizing customer experiences, customer loyalty, and product recommendations.
Our conversational AI technologies are used by companies ranging from Embodied, whose Moxie robot helps children overcome developmental challenges, to HubSpot, which connects meeting notes to CRM data. Across our products and across industries around the world, customer stories grow by the day.

We also see validation in our partner network. As we noted in the pillars discussed in June, partners like NVIDIA help us ensure customers have freedom of choice when building their AI stacks, and partners like Neo4j help our customers extend our services into areas like graph structures. Partners support our mission to bring AI to everyone, helping more customers use our services for new and expanded use cases.

Accelerating the momentum

Overall, to create products that reflect AI’s potential and likely future ubiquity, we have to take all of the preceding factors, from research to customer and analyst conversations to working with partners, and turn them into products and product updates. We’ve been very active over the last year, from the launch of Contact Center AI Platform in March, to the new Speech model we released in May, to a range of announcements at the Google Cloud Applied ML Summit in June. We have much more planned in the coming months, and we’re excited to work with customers not just to maintain the pace of AI momentum, but to accelerate it.

To learn more about Google Cloud’s AI and ML services, visit this link or browse recent AI and ML articles on the Google Cloud Blog.

GARTNER and MAGIC QUADRANT are registered trademarks and service marks of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved. Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation.
Gartner research publications consist of the opinions of Gartner’s Research & Advisory organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
Source: Google Cloud Platform