How to lower costs and improve innovation with cloud computing

There is no one way to recover. The past year has seen unprecedented challenges for business across the world, with social distancing and quarantine measures forcing many organizations to quickly adapt to remote working. A new report from BCG Platinion, titled “Finding a New Normal in the Cloud”, points out that while companies have done well to survive so far, the overall business environment is still a tough one with contracted economies and lowered revenues. CIOs have to find the right balance between technological innovation and lowering their cash burn rate. The report identifies five key ways in which companies can use cloud computing to optimize their operations and reduce overall IT costs by as much as 10%. 1. Go beyond VPNThe BCG Platinion report states that companies need “rapid, efficient, highly scalable, and device agnostic solutions”. Traditionally, when employees had to work offsite, companies provided them with VPNs. But the sheer scale of the COVID-19 pandemic has shown these solutions to be expensive, slow, inconvenient, and hard to manage when entire workforces are working remotely.Instead, BCG Platinion emphasizes the need for cloud-based solutions such as BeyondCorp Enterprise for more secure, more effective, lower cost remote access at scale. BeyondCorp Enterprise offers customers a zero trust platform for simple and secure access with continuous end-to-end protection that can be used on any device at any time. Deliveroo, a global food delivery company headquartered in the UK, uses BeyondCorp Enterprise to bring the zero trust model to its distributed workforce. “Having secure access to applications and associated data is critical for our business,” says Vaughn Washington, VP of Engineering at Deliveroo. “With BeyondCorp Enterprise, we manage security at the app level, which removes the need for traditional VPNs and associated risks.”1With a low cost cloud subscription model, BeyondCorp Enterprise eliminates the need for hardware, operating, and maintenance costs that come with VPN solutions and can also enable organizations to offer protection to the extended workforce at a fraction of the cost. BCG Platinion estimates that solutions like BeyondCorp Enterprise can save companies as much as 50% versus the cost of a traditional VPN. 2. Use SaaS to keep productivity up With so much of the workforce fragmented due to social distancing measures, the need for seamless, efficient collaboration is greater than ever. According to BCG Platinion, a fully functional Software-as-a-Service productivity solution such as Google Workspace helps to “alleviate the traditional costs and burdens around availability, backup, and maintenance of on-premise collaboration infrastructure.” BCG Platinion analysts estimate that adopting a SaaS solution can lower computing costs for end users by up to 35%. But working in the cloud does more than lower costs. The BCG Platinion report cites a 2020 study from Forrester that found adopting Google Workspace boosted revenue growth by 1.5%, reduced the need for on-demand tech support by 20%, and cut the risk of data breaches by more than 95%. Over the last year, companies of all sizes have looked on the conditions of the pandemic as an opportunity to change the way they work. “Airbus has spent the past year thinking about what it actually means to return to work and we’re looking to support greater flexibility with Google Workspace in a leading role,” says Andrew Plunkett, Airbus Vice President Digital Workplace. 
“In 2020, we held 5.6M Google Meet sessions and we now have more than 70,000 shared Drives where people collaborate. Google Workspace has changed the way people work at Airbus and that will continue as the solution empowers the hybrid work reality.” [2]

3. Reduce IT overhead and management costs with cloud-first devices

Working in the cloud can be made even more effective with devices specifically designed for the cloud, says the BCG Platinion report. “Cloud-native devices such as Google’s Chromebooks and Chromeboxes are cost-effective, easy to deploy, simple to use, and highly secure,” says BCG Platinion. Additionally, with “thin client” devices, companies can save on hardware costs compared with traditional laptops and desktops. The report suggests that a thin-client approach can produce savings of up to 25% in end-user technology and support.

Organizations and businesses of all sizes have found that Chrome OS and Chrome devices have greatly enhanced their capacity to work together in even the most challenging circumstances. Chrome OS provides employees with a modern experience and devices that stay fast, have built-in security, deploy quickly, and reduce the total cost of ownership. “Chromebooks are simple-to-use and cost-effective devices that do everything that our staff need them to do, which is mainly accessing Google Workspace online,” says Henry Lewis, Head of Platform for the London Borough of Hackney. “As soon as the Grab and Go Chromebooks were available, they were well used every day.” [3]

4. Lift and shift for easier transitions

While migrating to the cloud is a priority for many organizations, it does not have to happen all at once or in the same way. A total cloud migration can be a huge undertaking, involving redesigns of architecture and refactoring of applications. For many organizations, this process can take several months, even years. But in times like these, CIOs need to make decisions quickly and reduce cash burn as much as possible. When resources are stretched and time is tight, BCG Platinion recommends a “lift-and-shift” approach for minimal disruption. With Google Compute Engine, for example, organizations can simply rehost their existing workloads on virtual machines without transformation. BCG Platinion reports that moving non-critical workloads to the cloud with lift-and-shift approaches can reduce IT spend by as much as 4% in just three months. Additionally, a quick, effective migration paves the way for more advanced IT infrastructure changes, creating effects that ripple long into the future.

5. Make data work for you with the cloud

Data is central to any industry, and making the most of it is more important now than ever before. With business as usual upended by the pandemic, organizations have turned to data for urgent tasks like demand forecasting or predicting supply-chain disruptions. A McKinsey report from last year points out that while many organizations had already started to engage with analytics and AI technologies, their progress was dramatically accelerated by the urgency of the pandemic: “Analytics capabilities that once might have taken these organizations months or years to build came to life in a matter of weeks.”

With the right setup, a company can use data to drive efficiencies, respond quickly to its customers, and open new markets. But big wins require huge amounts of information and powerful analytics, which means that effective data handling can be very difficult and expensive to do with on-premises architecture.
Moving to a cloud-based infrastructure minimizes infrastructure costs while opening up cutting-edge techniques like machine learning for greater insight. The BCG Platinion report found that using a cloud-based data platform was not only cheaper and more efficient for businesses, but could result in a 70% increase in “effectiveness,” which it defined as increased sales, lower costs of procured goods, and reduced inventory holding costs. “Moving to cloud provides competitive advantage as you scale innovation, accelerating the creation of new services to keep ahead of the competition,” explains Norbert Faure, Managing Director, Platinion Western Europe at Boston Consulting Group (BCG).

The key mission of any data platform is to make sure that the right people have access to the right information at the right time. Google Cloud helps to unify data across the entire business, increase agility, and innovate faster with a range of products. BigQuery runs complex analytics at massive scale while helping organizations save up to 34% on total cost of ownership compared with alternative cloud-based data warehouse solutions. [4] Cloud Spanner provides a fully managed relational database that operates at 99.999% availability. Meanwhile, with Vertex AI, businesses can access the same groundbreaking machine learning tools that Google uses itself, on a unified AI platform, for unprecedented insight. “The whole Google Cloud suite of products accelerates the process of getting established and up and running, so we can perform the ‘bread-and-butter’ work of data science,” says David Herberich, VP, Head of Data at fintech startup Branch. [5]

Minimize costs today, innovate for tomorrow

As the world recovers from the pandemic, continued success depends on staying nimble and adaptive, argues the BCG Platinion report. Cloud computing can make companies more resilient by helping them keep costs low, reducing overall IT spend by as much as 10%, according to BCG Platinion. “For CIOs, cloud migrations can offer important near-term savings and benefits, while also reinvigorating progress toward the long-term goal of digital transformation.”

To learn more, read the full report from BCG Platinion.

[1] BeyondCorp Enterprise: Introducing a safer era of computing
[2] Building the future of work with Google Workspace
[3] Hackney Council: Empowering 4,000 staff to keep serving their community from home
[4] The Economic Advantages of Google BigQuery versus Alternative Cloud-based EDW Solutions
[5] Fintech startup, Branch makes data analytics easy with BigQuery

New to ML: Learning path on Vertex AI

At Google I/O this year, we announced Vertex AI: Google Cloud’s end-to-end ML platform. There’s a lot included, as you can see from the diagram below.

Figure 1. Vertex AI overview

If you’re new to ML, or new to Vertex AI, this post will walk through a few example ML scenarios to help you understand when to use which tool, going from ML APIs all the way to custom models and MLOps for taking them into a production system.

Our demo scenario

I moved recently, and as I was preparing boxes I started thinking about all the ways ML might streamline this process. What if I had an application that takes the measurements of a box and allows me to store the contents of each box virtually? Or a service that automatically detects the type of room shown in an image and creates a digital inventory for me?

Now imagine you are a data scientist at a moving company, and the business team would like you to use machine learning to solve the following challenges:

- Recognizing items in pictures from customers’ homes
- Classifying items into two categories based on some criterion (for example, location or fragility)
- Estimating box dimensions to save customers’ time during packing

First, you need to decide how to approach the problem, and then which tool is right for you. Before we dive in, let’s discuss the ML tooling on Google Cloud we can use to solve this.

Machine learning on Google Cloud

Google Cloud Platform provides several tools to support an entire machine learning workflow, across different model types and varying levels of ML expertise. When you start a machine learning project like the one we’re building, there are several factors to evaluate before choosing one of these tools, as you can see in the diagram below.

Figure 2. Choosing the right tool – Diagram

For instance, we have:

- Pre-trained APIs, for access to machine learning models for common tasks like analyzing images, video, and text via an API
- AutoML on Vertex AI, for training, testing, and deploying a machine learning model without code
- Custom model tooling on Vertex AI, for training, testing, optimizing, and deploying models using a framework of your choice

These are the tools I’ll focus on today. At the end, I’ll show how MLOps fits into this process and allows you to create a reliable and reproducible machine learning pipeline that integrates all the services needed to deploy your models. Before moving on, it’s also important to note that, regardless of which approach you use, you can deploy your model to an endpoint and serve predictions to end users via SDK methods. The following diagram shows how Vertex AI fits into a typical application development workflow.

Figure 3. Vertex AI and Application

With that, we are now ready to solve our business challenge with Vertex AI. So let’s start!

Pre-trained APIs for object classification

When applying the decision diagram in Figure 2 to our demo scenario, the first step is to determine whether we have enough images of common household objects to use for training. For this case, let’s assume we don’t yet have training data. This is where the Google Cloud Vision API comes in handy. The Vision API can detect and extract multiple objects in an image using its Object Localization feature. It can also detect and extract information about entities in an image across a broad group of categories (for example, a sofa could be classified as both furniture and a living room item). Below you can see the API’s results for a kitchen cupboard.

Figure 4. Pre-trained APIs for object classification
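To make this concrete, here is a minimal sketch of what those two calls could look like with the Vision API’s Python client library (google-cloud-vision); the image filename is an illustrative assumption:

    # Minimal sketch: object localization and label detection with the
    # Cloud Vision API Python client. Assumes credentials are configured;
    # the image path is illustrative.
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    with open("kitchen_cupboard.jpg", "rb") as f:
        image = vision.Image(content=f.read())

    # Object Localization: detect and box individual objects in the image.
    objects = client.object_localization(image=image).localized_object_annotations
    for obj in objects:
        print(f"{obj.name} (confidence: {obj.score:.2f})")

    # Label detection: broad entity labels for the image as a whole.
    labels = client.label_detection(image=image).label_annotations
    for label in labels:
        print(f"{label.description} (score: {label.score:.2f})")

Each localized object also carries normalized bounding-box vertices, which an application could use to crop item thumbnails for the virtual catalog.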
With those capabilities, we can build an application that uses the extracted labels to create a virtual catalog of the items and boxes the customer needs to pack. Of course, APIs are not just for images. On Google Cloud, there are also ML APIs that give you access to pre-trained machine learning models for common tasks like analyzing video, audio, and text.

In general, we suggest using pre-trained APIs when:

- you don’t have enough data
- your prediction task is general enough to fit into the type of labels provided by the service
- you want to combine the pre-trained APIs with AutoML (and custom) models

Then it’s just a matter of integrating these APIs into your own application with a REST API request.

Now we are able to recognize some items from pictures of customers’ homes and classify them into several categories. But what if we’d like a higher level of detail, with a model that can recognize items in images collected under various conditions? Or define our own labels? For example, in our case, we could decide to classify items based on their fragility. So, assuming you have image data, it’s worth experimenting with a custom classifier to see if it fits your needs. In particular, if you’ve never trained a deep learning model yourself, you might prefer a no-code approach over building a custom model from scratch. Time is another important factor in evaluating different tooling. For simplicity, let’s assume that in our scenario the business team wants to see results right away. In this case, we’ll use AutoML in Vertex AI to build an initial model.

AutoML for a custom-label image classification model

With AutoML, you can train models in Vertex AI on image, video, text, and tabular datasets without writing code. Vertex AI finds the optimal model architecture by searching state-of-the-art machine learning models. In our image scenario, all we need to do is supply labeled examples of the images we’d like to classify and the labels we want the ML system to predict. Then we can start the model training. Once it finishes, we get access to detailed model evaluation metrics and feature attributions, powered by Vertex Explainable AI. And once the model is validated, we can deploy it using managed Vertex AI Prediction. Here’s what this looks like for our image dataset:

Figure 5. Classifying a fragile item with AutoML

As you can see, choosing AutoML was the right move. In the first example, I take a picture of coffee cups, and the model classifies them as fragile items with 92% probability. And when I pass a picture of a stuffed animal, it classifies it as a non-fragile item with essentially no uncertainty. Just what we need!

Figure 6. Classifying a non-fragile item with AutoML

Notice that using images to solve this challenge rests on strong assumptions, such as that the geometric properties of an item significantly affect its fragility. Consequently, we will face some corner cases, but we can manage those by letting customers relabel items in the application.

Again, as general criteria, consider using AutoML when:

- you don’t have specific requirements about your underlying model architecture
- you want to develop a quick initial model to use as a baseline, which could end up being your production model
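Although AutoML is a no-code workflow in the console, the same steps can also be scripted. Here is a minimal sketch using the Vertex AI Python SDK (google-cloud-aiplatform); the project, bucket path, label file, and display names are illustrative assumptions:

    # Minimal sketch: train and deploy an AutoML image classification
    # model with the Vertex AI SDK. Names and paths are illustrative.
    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")

    # Create a managed dataset from a CSV that maps image URIs to labels
    # (for example, "fragile" / "non_fragile").
    dataset = aiplatform.ImageDataset.create(
        display_name="moving-items",
        gcs_source="gs://my-bucket/fragility_labels.csv",
        import_schema_uri=aiplatform.schema.dataset.ioformat.image.single_label_classification,
    )

    # Train the model; AutoML searches the architecture for us.
    job = aiplatform.AutoMLImageTrainingJob(
        display_name="fragility-classifier",
        prediction_type="classification",
    )
    model = job.run(
        dataset=dataset,
        model_display_name="fragility-classifier",
        budget_milli_node_hours=8000,  # roughly 8 node hours
    )

    # Deploy to a managed endpoint and request an online prediction.
    endpoint = model.deploy()

    import base64
    with open("coffee_cups.jpg", "rb") as f:
        payload = base64.b64encode(f.read()).decode("utf-8")
    print(endpoint.predict(instances=[{"content": payload}]))

A scripted run like this is also exactly what we will automate later with Vertex Pipelines.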
Finally, the last task: we want to build a model to estimate at least three of the dimensions a moving job requires for each box (width × depth × height; the fourth would be the weight). There are several possible approaches to solving this challenge. One of them approximates the package’s size using 3D object detection. Below you can see the ML pipeline.

Figure 7. Network architecture and post-processing for 3D object detection

Based on a Google Research paper, you can build a single-stage model where the backbone has an encoder-decoder architecture, built upon MobileNetV2. Then you can employ a multi-task learning approach, jointly predicting an object’s shape with detection and regression. In particular, it is possible to obtain the 3D coordinates of the item’s bounding box using a pose estimation algorithm (EPnP). Then, given the 3D bounding box, you can easily compute the size (and the pose) of the object. I’m not going to cover the model in this article (because you need training data and, in this case, videos), but, as you can imagine, you will end up training a custom model, which you can do in Vertex AI. So let me point out how.

Custom model training on Vertex AI

In general, you get access to a series of services for custom model development in Vertex AI, such as:

- Notebooks and Deep Learning VM Images, with a preinstalled JupyterLab environment powered by the most common deep learning frameworks and libraries and best-in-class compute (GPUs, TPUs)
- Vertex Experiments and Vertex TensorBoard to visualize model experiments, Vertex Training to train your model in a managed and customizable fashion thanks to container technology, and Vertex Vizier to optimize hyperparameters for maximum predictive accuracy

Figure 8. Vertex Notebooks and Vertex Training UIs

As before, once you’ve trained your model, you can use managed Vertex AI Prediction to deploy it into production for online and batch serving scenarios.
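As a rough illustration, here is what submitting such a custom training job could look like with the Vertex AI Python SDK; the training script, container images, machine shapes, and bucket are illustrative assumptions, not the actual setup behind the 3D-detection model:

    # Minimal sketch: a custom training job on Vertex Training, with the
    # resulting model deployed to Vertex AI Prediction. Names are illustrative.
    from google.cloud import aiplatform

    aiplatform.init(
        project="my-project",
        location="us-central1",
        staging_bucket="gs://my-bucket",
    )

    job = aiplatform.CustomTrainingJob(
        display_name="box-dimension-estimator",
        script_path="trainer/task.py",  # your own training code
        container_uri="us-docker.pkg.dev/vertex-ai/training/tf-gpu.2-4:latest",
        requirements=["opencv-python"],
        model_serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-gpu.2-4:latest",
    )

    # Run the script on managed infrastructure with a GPU attached.
    model = job.run(
        replica_count=1,
        machine_type="n1-standard-8",
        accelerator_type="NVIDIA_TESLA_T4",
        accelerator_count=1,
    )

    # Online serving then works the same way as for the AutoML model.
    endpoint = model.deploy(machine_type="n1-standard-4")

The same job.run() call can scale out to distributed training by raising replica_count, provided the training code itself supports distribution.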
Also, with Vertex AI Edge Manager, edge deployment is supported too. For example, suppose you need to deploy your model to customers who live in places with limited network connectivity. With that service, you can serve and manage ML models on remote edge devices using Kubernetes, reducing response time and saving bandwidth. Whatever the case, you can track each model endpoint from a unified UI, so you will be able to implement model maintenance like A/B tests or multi-armed bandits and, in the end, build a more solid backend logic for your moving app.

We’ve covered many different products, so let’s do a recap. By now, you have models that are able to recognize items, classify them into categories, and estimate box dimensions. And thanks to Vertex AI Endpoints, they are ready to be consumed by your end users. Are we missing something? Perhaps we are. Let me explain why.

MLOps: Putting it all together and making it replicable

So far, we’ve focused on how you, as a machine learning practitioner, can use Vertex AI to build and deploy a model to solve a business challenge. But what happens when an entire team is working on the same project? How can you foster collaboration and guarantee reproducibility at the same time? And how can you automate tasks like training and deployment each time new training images become available?

This is when MLOps comes into play. With MLOps, you can standardize your machine learning process and make it more reliable. And in Vertex AI you have all you need to embrace this productive paradigm. In fact, the platform provides a robust MLOps architecture using managed services such as Vertex Feature Store, Vertex Pipelines, Vertex ML Metadata, and Vertex Model Monitoring.

Figure 9. MLOps with Vertex AI

Without further ado, let’s conclude with how MLOps fits into our AutoML use case. One of my favorite enhancements in Vertex AI is the new Python SDK. With it, you can access all Vertex AI services programmatically, which means you can express each task of your machine learning process in code and make it shareable and reproducible using a DevOps framework. In our case, we could decide to automate the entire process, from the creation of the dataset to the model deployment, with a pipeline. In particular, with Vertex Pipelines, you can break your process into components. Each component produces its own artifacts and has other metadata (inputs and outputs) associated with it. These elements (artifacts, lineage, and execution tracking) are easily accessible from the Vertex console, and you can analyze all of them with the Vertex ML Metadata service. Below is the Vertex Pipeline I created for our AutoML model.

Figure 10. Vertex Pipeline for AutoML Image Model
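To give a feel for what that pipeline looks like as code, here is a minimal sketch using the Kubeflow Pipelines SDK (kfp) and the google-cloud-pipeline-components library; component names and arguments vary between SDK versions, and the project, bucket, and display names are illustrative assumptions:

    # Minimal sketch: a Vertex Pipeline that creates the dataset, trains
    # the AutoML model, and deploys it. Names are illustrative.
    from google.cloud import aiplatform
    from google_cloud_pipeline_components import aiplatform as gcc_aip
    from kfp.v2 import compiler, dsl

    PROJECT = "my-project"

    @dsl.pipeline(name="fragility-classifier-pipeline")
    def pipeline(gcs_source: str = "gs://my-bucket/fragility_labels.csv"):
        # Each step is a component; artifacts and lineage are tracked
        # automatically in Vertex ML Metadata.
        dataset_op = gcc_aip.ImageDatasetCreateOp(
            project=PROJECT,
            display_name="moving-items",
            gcs_source=gcs_source,
            import_schema_uri=aiplatform.schema.dataset.ioformat.image.single_label_classification,
        )
        training_op = gcc_aip.AutoMLImageTrainingJobRunOp(
            project=PROJECT,
            display_name="fragility-classifier",
            prediction_type="classification",
            dataset=dataset_op.outputs["dataset"],
            model_display_name="fragility-classifier",
            budget_milli_node_hours=8000,
        )
        gcc_aip.ModelDeployOp(
            project=PROJECT,
            model=training_op.outputs["model"],
        )

    # Compile the pipeline definition and submit it to Vertex Pipelines.
    compiler.Compiler().compile(pipeline_func=pipeline, package_path="pipeline.json")
    aiplatform.PipelineJob(
        display_name="fragility-classifier-pipeline",
        template_path="pipeline.json",
        pipeline_root="gs://my-bucket/pipeline_root",
    ).run()

Each step corresponds to a node in the pipeline graph of Figure 10, with its inputs and outputs recorded in Vertex ML Metadata.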
Notice that you can also implement conditional logic in Vertex Pipelines. For example, in our case, we could set a threshold such that, when the model starts underperforming on new data, we run a training job, check the new accuracy, and decide whether to redeploy. And in that case, as a data scientist or machine learning engineer, you would like to be alerted when model performance falls below a certain threshold. That’s why Vertex AI Model Monitoring was introduced: it automates alerts when events like data drift, concept drift, or other model performance issues happen, so whoever is in charge of model maintenance can act quickly and assess the incident.

Summary

We have reached the end of this journey through the new Vertex AI platform. I started writing this article with a question in mind: how can I help ML practitioners and developers who are approaching the Vertex AI platform today? To answer it, I walked through a possible real-life example where I imagined leveraging ML in a moving application, and I provided a quick overview of the ML toolbox in Vertex AI with some criteria for when to use which tool. You can find them summarized in the table below.

Figure 11. Choosing the right tool – Criteria

In the end, I also introduced MLOps and showed how Vertex AI can help standardize machine learning processes and put them into production. If you want to know more about MLOps and the recommended capabilities for your use cases, here is an article I recently collaborated on. By now, you should have a better understanding of Vertex AI and how to approach it.

Now it’s your turn. While I’m thinking about the next blog post, check out our getting started guide and tutorials to start getting your hands dirty. And remember... always have fun!

Thanks to Sara Robinson for her support and all the other Googlers for great feedback on this post.

Easy data blending, intelligent reporting leveraging Google Cloud to extend Anaplan Planning

All around the globe, organizations have had to re-evaluate their enterprise planning needs to keep up with changing demands while dealing with point solutions and data scattered across disjointed systems and sources. This re-evaluation has become even more pressing with the unprecedented levels of disruption over the past 18 months.

Connected planning makes enterprises agile and resilient by delivering continuous models and forecasts. Anaplan’s enterprise planning platform runs unlimited multi-dimensional scenarios to identify ideal plans of action that pivot business strategy from reactive to proactive. Any function within an organization can model the impact of change and easily visualize its effects, such as identifying the impact of sales headcount on quota and compensation. In a nutshell, enterprise planning allows enterprises to have a living, shareable representation of how the business actually works.

Google Cloud and Anaplan are redefining how planning practitioners use enterprise planning within these organizations to unify and align planning across all departments, and to address the following challenges:

- Predicting what customers will need, when, and where requires planners to blend external data (such as search, POS, macroeconomic factors, and social indicators) with their internal data (such as volume forecasts, shipments, and orders). This data blending can speed up reconciliations to increase velocity, decrease inventory days on hand, and improve service levels. However, blending internal and external data across disparate data sources has been hard, especially in real time.
- For financial planners to capture monthly budgets, they need to submit vendor and spending plan information to their financial analyst as part of the financial planning and budgeting process. Having to manually copy and paste data into Anaplan, as opposed to seamlessly uploading information from familiar spreadsheet and productivity tools, introduces errors, challenges auditability, and reduces productivity.

The Anaplan Connected Planning platform provides cloud-based technology that fundamentally transforms planning by connecting all the necessary people, data, and plans to accelerate business value through real-time planning and decision-making in rapidly changing environments. Google Cloud is extending the core capabilities of the platform to address these challenges for planners.

Deeper integrations, greater capabilities

Since announcing the partnership between Google Cloud and Anaplan, our two teams, including the ISV Center of Excellence, have successfully completed the implementation of the Anaplan Connected Planning platform on Google Cloud. In parallel, our product and technical teams have collaborated to deliver additional capabilities that extend Anaplan with Google Cloud, including the following.

Google BigQuery integration with Anaplan CloudWorks

Anaplan CloudWorks now provides a self-service capability for planning practitioners to easily import data from and export data to Google Cloud’s BigQuery, blending first-party data within Anaplan with third-party data outside Anaplan, without needing technical expertise. BigQuery is a highly scalable and cost-effective multicloud data warehouse designed for business analysis. Enriching Anaplan first-party data enables end-to-end visibility across critical business processes such as those in the supply chain. The BigQuery integration makes it easy to aggregate data from multiple sources and automate processes within a single source of truth.
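To illustrate the kind of blending this unlocks, here is a minimal sketch using the BigQuery Python client; the dataset, table, and column names are illustrative assumptions, and in practice CloudWorks handles moving the Anaplan side of the data in and out of BigQuery:

    # Minimal sketch: join an internal volume forecast with external
    # demand signals in BigQuery. All table and column names are
    # illustrative.
    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")

    query = """
        SELECT
          f.sku,
          f.week,
          f.forecast_units,
          w.avg_temp_c,          -- external weather signal
          t.search_interest      -- external demand signal
        FROM `my-project.planning.volume_forecast` AS f
        LEFT JOIN `my-project.external.weather_weekly` AS w
          ON f.region = w.region AND f.week = w.week
        LEFT JOIN `my-project.external.search_trends` AS t
          ON f.sku = t.sku AND f.week = t.week
    """

    # Write the blended result to a table that a CloudWorks integration
    # can import back into the Anaplan model.
    job_config = bigquery.QueryJobConfig(
        destination="my-project.planning.blended_forecast",
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    )
    client.query(query, job_config=job_config).result()

A scheduled CloudWorks integration can then pull the blended table into the Anaplan model on whatever cadence the planning cycle needs.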
Analyzing data, modeling dynamic scenarios, and developing projections and forward-looking strategies with large amounts of data becomes faster and easier. The BigQuery integration with CloudWorks connects to the customer’s BigQuery environment in their Google Cloud project to export and import data for their Anaplan model. Customers configure a connection to their BigQuery environment by specifying service account details, the Google Cloud project, and BigQuery dataset information.

In addition, planners can easily apply Google’s AI capabilities to their planning information in BigQuery. For example, companies in the consumer goods domain can reduce inventory and increase revenue by using Google Vertex AI to improve the accuracy of their demand forecasting at a more granular level, analyzing every SKU combined with other data streams and demand signals (e.g., marketing and sales plans, consumer insights, weather). CloudWorks lets business planners schedule data flows from BigQuery, amplifying the power of their forecasts across channels, geographies, and product lines. Customers can save time by scheduling import/export integrations to run hourly, daily, weekly, or monthly. The BigQuery integration with Anaplan CloudWorks will be available in Q3 CY 2021.

Anaplan Add-on for Google Sheets within Google Workspace

Anaplan users can avoid spending countless hours every month or quarter exporting Anaplan dashboards into Google Sheets by using the Anaplan Add-on for Google Sheets. This seamless connection between Google Sheets and Anaplan enables planners to keep Anaplan as the planning source of truth. Bidirectional integration between Anaplan and Google Sheets enables bringing source data from Anaplan into Google Sheets and updating analysis from Google Sheets back into Anaplan. In addition to enabling ad hoc analysis and executive reporting, the add-on also enables bulk data loads and data reconciliation. It dynamically tracks inputs and adjustments, allowing quick, iterative analysis of planning data. Accuracy and auditability are systematic, rather than manual. During critical strategy meetings, such as performance, headcount, and financial planning reviews, enterprises can extract high-level summaries to make agile decisions.

Planners can access the add-on via the Add-ons menu in Google Sheets. The add-on prompts users to sign in using their Anaplan login details or single sign-on. Users can then choose to create a “read-only” or “read-write” connection to link data from an Anaplan module to a Google worksheet. This video provides a short demonstration of the Anaplan Add-on for Google Sheets.

Expanded reach and scale with global deployment

Anaplan’s Connected Planning platform is being deployed in additional regions across Google Cloud’s global network, so Anaplan customers benefit from improved data proximity and in-country model data storage. For example, regional banks in countries with specific requirements for storing model data in-country can take advantage of Anaplan on Google Cloud for finance, human resources, and sales use cases while meeting those requirements.

Expanded partnership = Exceptional possibilities for planning

Our expanded partnership brings exceptional possibilities for how Anaplan and Google Cloud customers can plan.
Merging the enterprise planning capabilities of Anaplan with Google Cloud, and seamless integrations with BigQuery and AI/ML capabilities, gives business leaders more ways to blend their data sources for dynamic, highly informed, real-time business insight. A fully managed SaaS service, Anaplan on Google Cloud enables elastic scale in new regions and drives greater productivity and performance by bringing data closer to where organizations reside. Anaplan on Google Cloud is now available in the United States region.

Learn more about the technology partnership by visiting our Google Cloud partner page, reaching out to your Google Cloud and Anaplan account teams, or emailing GoogleCloudPartnership@anaplan.com.