Coop reduces food waste by forecasting with Google’s AI and Data Cloud

Although Coop has a rich history spanning nearly 160 years, the machine learning (ML) team supporting its modern operations is quite young. Its story began in 2018 with one simple mission: to leverage ML-powered forecasting to help inform business decisions, such as demand planning based on supply chain seasonality and expected customer demand. The end goal? By having insight into not only current data but also projections of what could happen in the future, the business can optimize operations to keep customers happy, save costs, and support its sustainability goals (more on that later!).

Coop’s initial forecasting environment was a single on-premises workstation that leveraged open-source frameworks such as PyTorch and TensorFlow. Fine-tuning and scaling models to a larger number of CPUs or GPUs was cumbersome. In other words, the infrastructure couldn’t keep up with the team’s ideas. So when the question arose of how to solve these challenges and operationalize the resulting models beyond those local machines, Coop leveraged the company’s wider migration to Google Cloud to find a solution that could stand the test of time.

Setting up new grounds for innovation

Over a two-day workshop with the Google Cloud team, Coop kicked things off by ingesting data from its vast data pipelines and SAP systems into BigQuery. At the same time, Coop’s ML team implemented physical accumulation queues for incoming new information and sorted out what kind of information it was. The team was relieved to not have to worry about setting up infrastructure and new instances.

Next, the Coop team turned to Vertex AI Workbench to further develop its data science workflow, finding it surprisingly fast to get started. The goal was to train forecasting models to support Coop’s distribution centers so they could optimize their stock of fresh produce based on accurate numbers.

Achieving higher accuracy, faster, to better meet customer demand

During the proof-of-concept (POC) phase, Coop’s ML team pitted two custom-built models (a single Extreme Gradient Boosting model and a Temporal Fusion Transformer implemented in PyTorch) against an AutoML-powered Vertex AI Forecast model, which the team ultimately operationalized on Vertex AI. The team established that using Vertex AI Forecast was faster and more accurate than training a model manually on a custom virtual machine (VM). On the POC test set, the team reached a WAPE (Weighted Average Percentage Error) of 14.5, which means Vertex AI Forecast provided a 43% performance improvement relative to the models trained in-house on a custom VM.

After a successful POC and several internal tests, Coop is building a small-scale pilot (to be put live in production for one distribution center) that will conclude with the Coop ML team streaming the forecasting insights back to SAP, where processes such as placing orders with importers and distributors take place. Upon successful completion and evaluation of the small-scale pilot in production in the next few months, the team could possibly scale it out to full-blown production across distribution centers throughout Switzerland. The architecture diagram below approximately illustrates the steps involved in both stages.
The vision, of course, is to leverage Google’s data and AI services, including forecasting and post-forecasting optimization, to support all of Coop’s distribution centers in Switzerland in the near future. Increasing relative forecasting accuracy by 43% over the custom models trained by the Coop team can significantly affect the retailer’s supply chain. By taking this POC to pilot and possibly production, the Coop ML team hopes to improve its forecasting model to better support wider company goals, such as reducing food waste.

Driving sustainability by reducing food waste

Coop believes that sustainability must be a key component of its business activity. With the aim of becoming a zero-waste company, its sustainability strategy feeds into all corporate divisions, from how it selects suppliers of organic, animal-friendly, and fair-trade products to efforts to reduce energy, CO2 emissions, waste materials, and water usage in its supply chains.

Achieving these goals boils down to an optimal control problem. Framed in Bayesian terms, Coop must carry out quantile inference to determine the spread of its forecast distributions. For example, is it expecting to sell between 35 and 40 tomatoes on a given day, or is its confidence interval between 20 and 400? Reducing this uncertainty with more specific and accurate numbers means Coop can order the precise number of units for distribution centers, ensuring customers can always find the products they need. At the same time, it prevents ordering in excess, which reduces food waste.

Pushing the envelope of what can be achieved company-wide

Having challenged its in-house models against the Vertex AI Forecast model in the POC, Coop is in the process of rolling out a production pilot to one distribution center in the coming months, and possibly to all distribution centers across Switzerland thereafter. In the process, one of the most rewarding realizations was that the ML team behind the project could use different Google Cloud tools, such as Google Kubernetes Engine, BigQuery, and Vertex AI, to create its own ML platform. Beyond using pre-trained Vertex AI models, the team can automate and create data science workflows quickly so it’s not always dependent on infrastructure teams.

Next, Coop’s ML team aims to use BigQuery as a pre-stage for Vertex AI. This will allow the entire data streaming process to flow more efficiently, serving data to any part of Vertex AI when needed. “The two tools integrate seamlessly, so we look forward to trying that combination for our forecasting use cases and potentially new use cases, too. We are also exploring the possibility of deploying different types of natural language processing-based solutions to other data science departments within Coop that are relying heavily on TensorFlow models,” says Martin Mendelin, Head of AI/ML Analytics, Coop.

“By creating and customizing our own ML platform on Google Cloud, we’re creating a standard for other teams to follow, with the flexibility to work with open-source programs but in a stable, reliable environment where their ingenuity can flourish,” Mendelin adds. “The Google team went above and beyond with its expertise and customer focus to help us make this a reality. We’re confident that this will be a nice differentiator for our business.”
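The WAPE figure quoted above is simply the sum of absolute forecast errors divided by the total actual demand, expressed as a percentage. As an illustration only (the function and sample numbers below are hypothetical, not Coop’s data), here is a minimal Python sketch of the metric:

```python
def wape(actuals, forecasts):
    """Weighted Average Percentage Error: sum of absolute errors
    divided by total actual demand, expressed as a percentage."""
    total_error = sum(abs(a - f) for a, f in zip(actuals, forecasts))
    total_actual = sum(actuals)
    return 100.0 * total_error / total_actual

# Hypothetical daily demand vs. forecast for one product
actuals = [40, 35, 50, 45, 60]
forecasts = [38, 40, 47, 44, 52]
print(f"WAPE: {wape(actuals, forecasts):.1f}%")  # lower is better
```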
Source: Google Cloud Platform

Introducing time-bound Session Length defaults to improve your security posture

Google Cloud provides many layers of security for protecting your users and data. Session length is a configuration parameter that administrators can set to control how long users can access Google Cloud without having to reauthenticate. Managing session length is foundational to cloud security, and it ensures that access to Google Cloud services is time-bound after a successful authentication. Google Cloud session management provides flexible options for setting up session controls based on your organization’s security policy needs. To further improve security for our customers, we are rolling out a recommended default 16-hour session length to existing Google Cloud customers.

Many apps and services can access sensitive data or perform sensitive actions. It’s important that only specific users can access that information and functionality, and only for a period of time. By requiring periodic reauthentication, you make it more difficult for unauthorized people to obtain that data if they gain access to credentials or devices.

Enhancing your security with Google Cloud session controls

There are two tiers of session management for Google Cloud: one for managing user connections to Google services (e.g., Gmail on the web), and another for managing user connections to Google Cloud services (e.g., the Google Cloud console). This blog outlines the session control updates for Google Cloud services.

Google Cloud customers can quickly set up session length controls by selecting the default recommended reauthentication frequency. For existing customers who have session length configured to Never Expire, we are updating the session length to 16 hours.

Google Cloud session control: Reauthentication policy

This new default session length rollout helps our customers gain situational awareness of their security posture. It ensures that customers have not mistakenly granted infinite session length to users or apps using OAuth user scopes. After the time-bound session expires, users will need to reauthenticate with their login credentials to continue their access. The session length changes impact the following services and apps:

- Google Cloud console
- gcloud command-line tool
- Any other app that requires Google Cloud scopes

The session control settings can be customized for specific organizations, and the policies apply to all users within that organization. When choosing a session length, admins have the following options:

- Choose from a range of predefined session lengths, or set a custom session length between 1 and 24 hours. This is a timed session length that expires the session based on the session length, regardless of the user’s activity.
- Configure whether users can use just their password, or are required to use a Security Key to reauthenticate.

How to get started

The default session length of 16 hours will be on for existing customers and can be enabled at the Organizational Unit (OU) level. Here are the steps for admins and users to get started:

- Admins: Find the session length controls at Admin console > Security > Access and data control > Google Cloud session control. Visit the Help Center to learn more about how to set session length for Google Cloud services.
- End users: If a session ends, users will simply need to log in to their account again using the familiar Google login flow.
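For programmatic workflows, an expired Cloud session may surface as a credential refresh failure. As a rough, hedged sketch (not an official snippet; the exact error and message will vary by client and environment), this is how a Python job using Application Default Credentials might detect that condition and prompt the user to reauthenticate:

```python
import google.auth
from google.auth.exceptions import RefreshError
from google.auth.transport.requests import Request


def get_fresh_credentials():
    """Load Application Default Credentials and try to refresh them.

    If the Google Cloud session behind these credentials has expired,
    the refresh call can raise RefreshError, and the user needs to
    reauthenticate (for example by running `gcloud auth login` or
    `gcloud auth application-default login` again).
    """
    credentials, project = google.auth.default()
    try:
        credentials.refresh(Request())
    except RefreshError as exc:
        raise SystemExit(
            "Credentials could not be refreshed (the session may have expired). "
            "Please reauthenticate and retry: " + str(exc)
        )
    return credentials, project


if __name__ == "__main__":
    creds, project = get_fresh_credentials()
    print(f"Authenticated against project: {project}")
```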
Sample Use Cases

Third-party SAML identity providers and session length controls

If your organization uses a third-party SAML-based identity provider (IdP), the cloud sessions will expire, but the user may be transparently reauthenticated (i.e., without actually being asked to present their credentials) if their session with the IdP is valid at that time. This is expected behavior, as Google will redirect the user to the IdP and accept a valid assertion from the IdP. To ensure that users are required to reauthenticate at the correct frequency, evaluate the configuration options on your IdP and review the Help Center article on setting up SSO via a third-party identity provider.

Trusted applications and session length controls

Some apps are not designed to gracefully handle the reauthentication scenario, causing confusing app behaviors or stack traces. Other apps are deployed for server-to-server use cases with user credentials instead of the recommended service account credentials, in which case there is no user to periodically reauthenticate. If you have specific apps like this, and you do not want them to be impacted by session length reauthentication, the org admin can add these apps to the trusted list for your organization. This will exempt the app from session length constraints, while implementing session controls for the rest of the apps and users within the organization.

General Availability & Rollout Plan

- Available to all Google Cloud customers
- Gradual rollout starting on March 15, 2023

Helpful links

- Help Center: Set session length for Google Cloud services
- Help Center: Control which third-party & internal apps access Google Workspace data
- Help Center: Use a security key for 2-Step Verification
- Creating and managing organizations
- Using OAuth 2.0 for Server to Server Applications

Related article: Introducing IAM Deny, a simple way to harden your security posture at scale. Our latest new capability for Google Cloud IAM is IAM Deny, which can help create more effective security guardrails.
Source: Google Cloud Platform

Google Cloud and MongoDB expand partnership to support startups

Scale your startup from ideation to growth with MongoDB Atlas on Google Cloud. By providing an integrated set of database and data services and a unified developer experience, MongoDB Atlas on Google Cloud lets companies at all stages build applications that are highly available, performant at global scale, and compliant with the most demanding security and privacy standards. Today we’re excited to announce that we’re expanding our partnership to also support startups together.

In addition to the technology, each company has dedicated programs to help startups scale faster with financial, business, and technical support.

Harness the power of our partnership for startups

There are two key ways in which we believe our partnership can help startups scale more quickly, more safely, and more successfully:

1. Our technologies

MongoDB Atlas allows you to run our fully managed developer data platform on Google Cloud in just a few clicks. Set up, scale, and operate MongoDB Atlas anywhere in the world with the versatility, security, and high availability you need. Run MongoDB Atlas on Google Cloud to gain true multi-cloud capabilities, best-in-class automation, workload intelligence, and proven practices with the most modern developer data platform available. With the pay-as-you-go option on the Google Cloud Marketplace, you only pay for the Atlas resources you use, with no upfront commitment required.

Got global customers? Google Cloud is wherever they are, and MongoDB Atlas makes it easy to distribute your data for low-latency performance and global compliance needs. Selling to a tough enterprise crowd? Data in MongoDB Atlas is protected from the start with preconfigured security features for authentication, authorization, and encryption, and is stored in the same zero-trust, shared-risk model that Google itself depends on.

As partners, Google Cloud and MongoDB co-engineer streamlined integrations between MongoDB Atlas and many Google Cloud services to make it easier to deploy apps (Dataflow, GKE, Cloud Run), pull in data from other sources (Apigee), run in flexible multi-cloud environments (Anthos), deploy the MEAN stack and Terraform with ease, and analyze data (BigQuery, Vertex AI).

2. Our dedicated startup programs

The Google for Startups Cloud program provides credits for Google Cloud and Google Workspace, access to training programs and technical support via a dedicated Startup Success Manager, our global Google Cloud Startup Community, and co-marketing opportunities for select startups.

- Credits: If you’re early in your startup journey and not yet backed with equity funding, you’ll have access to $2,000 of Google Cloud credits. If you are, your first year of Cloud and Firebase usage is covered with credits up to $100,000. Plus, in year two, get 20% of Google Cloud and Firebase usage covered, up to an additional $100,000 in credits*
- Google-wide discounts: Free Google Workspace Business Plus for new signups and monthly credits on Google Maps Platform for 12 months for new signups
- Training: Google Cloud Skills Boost credits giving access to online courses and hands-on labs
- Technical support: Get timely help 24/7 through Enhanced Support by applying Google Cloud credits
- Business support & networking: Access to a Startup Success Manager, our global Google Cloud Startup Community, and co-marketing opportunities for select startups

The MongoDB for Startups program provides credits for MongoDB Atlas, dedicated onboarding support, a wide range of hands-on training available on demand, a complimentary technical advisor session, and co-marketing opportunities to help you amplify your business.

- Credits: Free credits for MongoDB Atlas, including usage of the core Atlas Database, in addition to extended data services for full-text search, data visualization, real-time analytics, building event-driven applications, and more to supercharge your data infrastructure
- Dedicated onboarding support: Bespoke onboarding resources tailored to help you successfully adopt and scale MongoDB Atlas
- Hands-on training: Free on-demand access to MongoDB’s library of training with 150+ hands-on labs
- Expert technical advice: A dedicated one-on-one session with our technical experts for personalized recommendations to add scale and optimize
- Go-to-market opportunities: Engage with MongoDB’s diverse community of startups and developers through networking events, and work with MongoDB on co-marketing initiatives to amplify your startup’s growth and promote the innovative tech you are building

Startups finding success with Google Cloud and MongoDB Atlas startup programs

Many startups have found these integrations and the interoperability between Google Cloud and MongoDB Atlas to be a powerful combination:

Thunkable, a no-code app development platform, has found quick success (3 million users) with a team of just four to six engineers. “The engineering team has always been focused on building the product,” said Thunkable engineer Jose Dominguez. “So not having to worry about the database was a great win for us. It allowed us to iterate very fast…. As we scale, supporting more enterprise customers, we don’t have to worry about database management issues.”

Phonic, a software company that applies intelligent analytics to qualitative research in order to break down barriers between qualitative and quantitative data, uses Google Cloud for distributed file storage, App Engine for auto-scaling, and MongoDB Atlas to support its needs for flexible databases that can adjust to frequent schema changes.

Next steps

To apply to join the Google for Startups Cloud program and the MongoDB Atlas Startup program, and to learn more about the benefits each offers, visit our partnership page. Companies enrolled in both startup programs will have exclusive access to joint events, technical support, bespoke offers, and much more.
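To make the developer experience concrete, here is a minimal, hedged sketch of connecting an application to a MongoDB Atlas cluster running on Google Cloud with the standard PyMongo driver. The connection string, database, and collection names below are placeholders, not values from this article:

```python
from pymongo import MongoClient

# Placeholder Atlas connection string (find yours in the Atlas UI under "Connect").
ATLAS_URI = "mongodb+srv://<user>:<password>@<cluster>.mongodb.net/?retryWrites=true&w=majority"

client = MongoClient(ATLAS_URI)
db = client["startup_demo"]   # hypothetical database name
orders = db["orders"]         # hypothetical collection name

# Insert a sample document and read it back.
order_id = orders.insert_one(
    {"sku": "A-100", "qty": 3, "region": "europe-west6"}
).inserted_id
print(orders.find_one({"_id": order_id}))
```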
Source: Google Cloud Platform

Rapidly expand the reach of Spanner databases with read-only replicas and zero-downtime moves

As Google Cloud’s fully managed relational database that offers near-unlimited scale, strong consistency, and availability of up to 99.999%, Cloud Spanner powers applications at any scale in industries such as financial services, games, retail, and healthcare. When you set up a Spanner instance, you can choose from two kinds of configurations: regional and multi-regional. Both configuration types offer high availability, near-unlimited scale, and strong consistency. Regional configurations offer 99.99% availability and can survive zone outages. Multi-regional configurations offer 99.999% availability and can survive two zone outages and entire regional outages.

Today, we’re announcing a number of significant enhancements to Spanner’s regional and multi-regional capabilities:

- Configurable read-only replicas let you add read-only replicas to any regional or multi-regional Spanner instance to deliver low-latency reads to clients in any geography.
- Spanner’s zero-downtime instance move service gives you the freedom to move your production Spanner instances from any configuration to another on the fly, with zero downtime, whether it’s regional, multi-regional, or a custom configuration with configurable read-only replicas.
- We’re also dropping the list prices of our nine-replica global multi-regional configurations nam-eur-asia1 and nam-eur-asia3 to make them even more affordable for global workloads.

Let’s take a look at each of these enhancements in a bit more detail.

Configurable read-only replicas

One of Spanner’s most powerful capabilities is its ability to deliver high performance across vast geographic territories. Spanner achieves this performance with read-only replicas. As its name suggests, a read-only replica contains an entire copy of the database and can serve stale reads without requiring a round trip back to the leader region. In doing so, read-only replicas deliver low-latency stale reads to nearby clients and help increase a node’s overall read scalability.

For example, a global online retailer would likely want to ensure that its customers worldwide can search and view products from its catalog efficiently. This product catalog would be ideally suited for Spanner’s nam-eur-asia1 multi-region configuration, which has read/write replicas in the United States and read-only replicas in Belgium and Taiwan. This would ensure that customers can view the product catalog with low latency around the globe.

Until today, read-only replicas were available only in several multi-region configurations: nam6, nam9, nam12, nam-eur-asia1, and nam-eur-asia3. But now, with configurable read-only replicas, you can add read-only replicas to any regional or multi-regional Spanner instance so that you can deliver low-latency stale reads to clients everywhere.

To add read-only replicas to a configuration, go to the Create Instance page in the Google Cloud console. You’ll now see a “Configure read-only replicas” section. In this section, select the region for the read-only replica, along with the number of replicas you want per node, and create the instance. It’s as simple as that! The screenshot below shows how to add a read-only replica in us-west2 (Los Angeles) to the nam3 multi-regional configuration.

As we roll out configurable read-only replicas, we do not yet offer read-only replicas in every configuration/region pair.
If you find that your desired read-only replica region is not yet listed, simply fill out this request form. Configurable read-only replicas are available today for $1/replica/node-hour plus storage costs. Full details on pricing are available at Cloud Spanner pricing.

Also announcing: Spanner’s zero-downtime instance move service

Now that you can use configurable read-only replicas to create new instance configurations tailored to your specific needs, how can you migrate your current Spanner instances to these new configurations without any downtime? Spanner database instances are mission critical and can scale to many petabytes and millions of queries per second. So you can imagine that moving a Spanner instance from one configuration to another — say, us-central1 in Iowa to nam3 with a read-only replica in us-west2 — is no small feat. Factor in Spanner’s stringent availability of up to 99.999% while serving traffic at extreme scale, and it might seem impossible to move a Spanner instance from us-central1 to nam3 with zero downtime.

However, that’s exactly what we’re announcing today! With the instance move service, now generally available, you can request a zero-downtime, live migration of your Spanner instances from any configuration to any other configuration — whether they are regional, multi-regional, or custom configurations with configurable read-only replicas.

To request an instance move, select “contact Google” on the Edit Instance page of the Google Cloud console and fill out the instance move request form. Once you make a move request, we’ll contact you to let you know the start date of your instance configuration move, and then move your configuration with zero downtime and no code changes while preserving the SLA guarantees of your configuration. When moving an instance, both the source and destination instance configurations are subject to hourly compute and storage charges, as outlined in Cloud Spanner pricing. Depending on your environment, instance moves can take anywhere from a few hours to a few days to complete. Most importantly, during the instance move, your Spanner instance continues to run without any downtime, and you can continue to rely on Spanner’s high availability, near-unlimited scale, and strong consistency to serve your mission-critical production workloads.

Price drops for global 9-replica Spanner multi-regional configurations

Finally, we’re also pleased to announce that we’re making it even more compelling to use Spanner’s global configurations nam-eur-asia1 and nam-eur-asia3 by dropping the compute list price of these configurations from $9/node/hour to $7/node/hour. With write quorums in North America and read-only replicas in both Europe and Asia, these configurations are perfectly suited for global applications with strict performance requirements and 99.999% availability. And now, they’re even more cost-effective to use!

Learn more

- If you are new to Spanner, try Spanner at no charge with a 90-day free trial instance.
- Learn more about multi-regional Spanner configurations by reading Demystifying Cloud Spanner multi-region configurations.

Related article: Demystifying Cloud Spanner multi-region configurations
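To tie this back to the stale reads that read-only replicas serve, here is a minimal sketch using the google-cloud-spanner Python client. It performs a bounded-staleness read that a nearby read-only replica can answer without a round trip to the leader region; the instance, database, and table names are placeholders, not values from this post:

```python
import datetime

from google.cloud import spanner

client = spanner.Client()
instance = client.instance("my-instance")        # placeholder instance ID
database = instance.database("product-catalog")  # placeholder database ID

# A read that tolerates up to 15 seconds of staleness can be served locally
# by a read-only replica instead of the leader region.
with database.snapshot(exact_staleness=datetime.timedelta(seconds=15)) as snapshot:
    rows = snapshot.execute_sql(
        "SELECT ProductId, Name FROM Products LIMIT 10"  # placeholder table
    )
    for row in rows:
        print(row)
```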
Source: Google Cloud Platform

Node hosting on Google Cloud: a pillar of Web3 infrastructure

Blockchain nodes are the physical machines that power the virtual computer that comprises a blockchain network and store the distributed ledger. There are several types of blockchain nodes, such as:

- RPC nodes, which DApps, wallets, and other blockchain “clients” use as their blockchain “gateway” to read or submit transactions
- Validator nodes, which secure the network by participating in consensus and producing blocks
- Archive nodes, which indexers use to get the full history of on-chain transactions

Deploying and managing nodes can be costly, time consuming, and complex. Cloud providers can help abstract away the complexities of node hosting so that Web3 developers do not need to think about infrastructure. In this article, we’ll explore both how organizations can avoid challenges by running their own nodes on Google Cloud, and how, in many scenarios, our fully managed offering, Blockchain Node Engine, can make node hosting even easier.

Figure 1 – Blockchain nodes

Why running nodes is often difficult and costly

Developers often choose a mix of deploying their own nodes or using shared nodes provided by third parties. Free RPC nodes are sufficient to start exploring but may not offer the required latency or performance. Web3 infrastructure providers’ APIs or dedicated nodes are another option, letting developers focus on their app without worrying about the underlying blockchain node infrastructure. There are situations, however, in which it is beneficial to run your own nodes in the cloud. For example:

- Privacy is too critical for RPC calls to go over the public internet.
- Certain regulated industries require organizations to operate in a specific jurisdiction and control their nodes.
- Node hardware needs to be configured for optimal performance.
- A DApp requires low latency to the node.
- An organization is a validator with a significant stake and needs to be in control of the uptime and security of its validator node.
- An organization needs predictable and consistent high performance that will not be impacted by others using its node.
- In Ethereum, the fee recipient is an address nominated by a validator to receive tips from user transactions. The node controls the fee recipient, not the validator client, so to guarantee control of the fee recipient, the organization must run its own nodes.

Figure 2 – Dedicated blockchain nodes

Organizations can face challenges running their own nodes. At a macro level, node infrastructure challenges fall into one of these buckets:

- Sustainability (impact on the environment)
- Security (DDoS attacks, private key management)
- Performance (can the hardware keep up with the blockchain software)
- Scalability (how a network starts and grows)

In addition, there is a learning curve related to how each protocol works (e.g., Ethereum, Solana, Arbitrum, Aptos), what hardware specifications the protocol requires (compute, memory, disk, network), and how to optimize (e.g., sync modes).

Hyperscalers have been perceived as not performant enough and too expensive. As a result, a lot of the Web3 infrastructure today runs with bare-metal server providers or in a single hyperscaler. For example, as of September 20, 2022, more than 40% of Solana validators ran in Hetzner. But then, Hetzner blocked Solana activity on its servers, causing disruption to the protocol. Similarly, as of October 2022, 5 of the top 10 Solana validators by SOL staked (representing 8.3% of all staked SOL) ran in AWS, per validators.app.
Simply put, this concentration of validators creates a dependency on only a select few hosting providers. As a result, an outage, or a ban, from a single provider can lead to a material failure of the underlying protocol. Moreover, this centralization goes against the Web3 ethos of decentralization and diversification. Healthy protocols require a diversity of participants, clients, and geographic distribution. In fact, the Solana Foundation, via its delegation program, incentivizes infrastructure diversity with its data center criteria.

Running nodes on Google Cloud for security, resiliency, and speed

To avoid the aforementioned challenges and improve decentralization on major protocols, organizations have been using Google Cloud to host nodes for several years. For example, we are a validator for protocols like Aptos, Arbitrum, Solana, and Hedera, and Web3 customers that use Google Cloud to power nodes include Blockdaemon, Bullish, Coinbase, and Dapper Labs. We support a diverse set of ecosystems and use cases. For example:

- Nodes can run in Google Cloud regardless of the protocol (we run nodes for Ethereum, layer 2s, alternative layer 1s, and more). Please note that Proof of Work mining is restricted.
- We have nodes running in both live and test networks. This is important for the learnings required for each protocol.
- While these examples are public (permissionless) networks, we also support the private networks favored by some of our regulated customers.

Streamlining and accelerating node hosting with Blockchain Node Engine

Blockchain Node Engine provides streamlined provisioning and a secure environment as a fully managed service. A developer using Blockchain Node Engine doesn’t need to worry about configuring or running nodes. Blockchain Node Engine does all this so that the developer can focus on building a superb DApp. We’ve simplified this process and collapsed all the required node hosting steps into one.

For protocols not supported by Blockchain Node Engine, or if an organization wants to manage its own nodes itself, services in Google Cloud are built to cover an organization’s full Web3 journey:

- An organization might start with a simple Compute Engine VM instance using the machine family that works for the protocol. (We support the most demanding protocols, including Solana.)
- Then, it can make its architecture more resilient with a managed instance group fronted by Cloud Load Balancing.
- Next, the organization might secure the user-facing nodes (which clients reach over JSON-RPC; see the sketch after this list) by fronting them with Cloud Armor as a web application firewall and DDoS protection.
- This node hosting infrastructure is fully automated and integrated with the organization’s DevOps pipelines, helping it to seamlessly accelerate development.
- As the organization grows and its apps attract more traffic, Kubernetes becomes a natural choice for health monitoring and management. Blockchain nodes can be migrated to GKE node pools (pun intended). (Note: Organizations can also start directly in GKE, rather than Compute Engine.)
- As the organization continues to grow, it can benefit from access to cloud-native services close to the nodes.
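As mentioned above, DApps and wallets talk to these user-facing nodes through RPC calls. The following is a minimal, hedged sketch of a standard Ethereum JSON-RPC request (eth_blockNumber) sent to a self-hosted node; the endpoint URL is a placeholder for wherever your load balancer or VM exposes the node:

```python
import requests

# Placeholder endpoint for a self-hosted Ethereum RPC node
# (e.g., behind Cloud Load Balancing and Cloud Armor).
RPC_ENDPOINT = "https://rpc.example.internal:8545"

payload = {
    "jsonrpc": "2.0",
    "method": "eth_blockNumber",  # standard Ethereum JSON-RPC method
    "params": [],
    "id": 1,
}

response = requests.post(RPC_ENDPOINT, json=payload, timeout=10)
response.raise_for_status()

# The node returns the latest block number as a hex string, e.g. "0x10d4f".
latest_block = int(response.json()["result"], 16)
print(f"Latest block: {latest_block}")
```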
Around the nodes themselves, for example, customers use caching solutions such as Cloud CDN, Memorystore, and Spanner (as blockchain.com does) so that most requests do not even have to hit the nodes. On the data side, the organization can implement pipelines that extract data from the node and ingest it into BigQuery to make it available for analysis and ML. It can also leverage Confidential Computing to keep data encrypted while in use (e.g., Multi-Party Computation, as Bullish does).

Next steps

As we’ve shown with the formation of both customer-facing and product teams dedicated to Web3, Google Cloud is inspired by the Web3 community and grateful to work with so many innovators within it. We’ve been excited to see our work in open-source projects, security, reliability, and sustainability address core needs we see in Web3 communities, and we look forward to seeing more creative decentralized apps and services as Web3 businesses continue to accelerate.

To get started with Blockchain Node Engine or explore hosting your own nodes in Google Cloud, contact sales or visit our Google Cloud for Web3 page.

Acknowledgements: I’d like to thank customer engineers David Mehi and Sam Padilla and staff software engineer Ross Nicoll, who helped me to better understand node hosting, and Richard Widmann, digital assets head of strategy, for his review of this post.
Source: Google Cloud Platform

Snap partners with Google Cloud to upskill teams around the globe

Snap Inc., the developer of the Snapchat platform, has become a global leader in the social media industry. Snap runs its business on Google Cloud and relies on Premium Support to optimize its cloud business imperatives. Snap sought new ways to extract business value from its cloud data, so it turned to its assigned Google Technical Account Management team to develop a means of strengthening and expanding cloud skills to meet that goal.

The Technical Account Managers (TAMs) serve as an extension of the Snap Engineering Program Management team, delivering deep Google Cloud expertise and guiding Snap on its cloud journey, including mapping the essential cloud skills Snap needs to achieve its broader business strategy. Since Snap employees had varying levels of cloud expertise, the TAM team needed to design a tailored learning program to optimally meet Snap’s needs.

The TAM team engaged the Google Cloud Customer Experience (CCE) team, which includes cloud support, learning, consulting, customer success, and customer insight and advocacy services. The Google team used a Skill Training Survey to identify, extract, and map the Snap team’s existing skills against the targeted cloud skills. This enabled them to design a learning curriculum to boost employee productivity, enable efficient scaling, and strengthen the mitigation of technical issues while optimizing Snap’s environment for issue prevention. In addition, Google Cloud offered instructor-led virtual training focused on Looker, Kubernetes, and other topics of interest, and a tailored Snap Global Training Program was launched in the last quarter of 2022.

Including training for Looker expanded Snap employees’ ability to reach data-driven decisions. Snap also took advantage of the Google Cloud Skills Boost licenses included with Premium Support, which deliver access to a learning platform with over 700 courses and learning labs. Next, the TAM and CCE teams were tasked with raising internal awareness of the global Snap training program, so they developed a comprehensive marketing and communications plan to drive promotion over a twelve-week period to prospective trainees through newsletters, email groups, Slack channels, engineering meetings, and an internal site dedicated to Snap training resources.

“Partnering with Google to provide Snap engineers with learning opportunities aligns with Snap’s values of Kind, Smart and Creative. Investing in growing our team members’ skills helps them personally advance and helps our business achieve our goals.” — Michele Vaughan, Snap Engineering Program Manager

The Google-led Snap Global Training Program includes hands-on, instructor-led training, in-person gamified Cloud Hero Learning Events, and access to on-demand Google Cloud Skills Boost labs. Over 100 trainees at Snap initially participated in the instructor-led training, and more than 500 employees completed the on-demand labs. This training program has enabled Snap employees to develop and strengthen skills in the targeted cloud areas, including data visualization, AI and ML, and Kubernetes. In addition, the program ignited a Looker Advisory Professional Services initiative to advise Snap on best practices and improvements for its usage of Looker. These skills enable Snap to extract increased value from its cloud data and guide the future of its business, sustaining a competitive advantage in a dynamic marketplace.
To learn more about how Google Cloud Customer Experience can support your organization’s business transformation journey with cloud support, learning, consulting, customer success, and customer insight and advocacy services, visit:

- Premium Support, to empower business innovation with expert-led technical guidance and cloud support
- Google Cloud Training & Certification, to expand and diversify your team’s cloud education
Source: Google Cloud Platform

Three new Specializations help partners digitally transform customers

Two of our most enduring commitments to partners are our mission to provide you with the support, tools, and resources you need to grow and drive customer delivery excellence, and to ensure Google Cloud partners stand apart as deeply skilled technology pace setters. This includes working with partners to stay ahead of important new trends that have the potential to disrupt our shared customers—and that also have the potential to accelerate your business growth. To help do this, we’ve rolled out three new Specializations aligned to three very important new trends.

- Partners who earn our new Data Center Modernization Services Specialization have demonstrated success with data center transformation of workloads from on-premises, private cloud, or other public clouds.
- Partners who earn our new DevOps Services Specialization have demonstrated success implementing, managing, and improving the quality and speed of creating new applications on Google Cloud.
- Finally, partners who earn our new Contact Center AI Services Specialization have demonstrated success in implementing and migrating Contact Center AI projects with Dialogflow.

I am also very proud to announce that we have several partners who have already earned these Specializations. I’d like to briefly explain why each area is important, introduce the launch partners, and point you to more information about each one.

Data Center Modernization

Google worked with IDC on multiple studies involving global organizations across industries. This research projects that by 2026, the world will create 7 petabytes of data each second*—that’s equal to about 500 billion full pages of text every second. At some point all of this data will run through, or reside in, a data center—putting enormous pressure on customer infrastructures.

Google’s perspective is to construct a unified data cloud “that supports every stage of the data lifecycle” in which “databases, data warehouses, data lakes, streaming, BI, AI, and ML all reside on a common infrastructure that is pre-configured to work together seamlessly.”

Regardless of the approach, customers can rely on these partners who have earned our new Data Center Modernization Services Specialization to lead the way to the modernized data center: Proquire, HCL Technologies, SADA Systems, Wipro, and Deloitte Consulting.

DevOps Services

We live in an era in which customer demand for software solutions is rising so fast that quality and delivery times are becoming critical points of failure. Our DevOps Services Specialization positions our partners to meet this challenge head-on and deliver sophisticated, reliable, secure software, fast—and manage it as a service, if required.

In fact, DevOps is so important that it is regarded as a critical ingredient in driving customer satisfaction. According to the newly released 2023 Testing in DevOps report, nearly 90% of coding teams with “highly automated pipelines and mature DevOps practices” report high customer satisfaction rates.

Congratulations to partners 66degrees and DoiT for being the first two companies to achieve this critically important Specialization.

Contact Center AI

Contact center support is a significant area of focus for organizations across the globe for one major reason: business reputations can be made or broken more by the quality of their support systems than by the quality of a product or service.
This is why digitally transforming the call center has become a priority for business leaders.

Dialogflow is the foundation of Google Cloud’s Contact Center AI Specialization. This platform understands natural language, making it easy to design and integrate a conversational user interface into apps, web applications, devices, bots, interactive voice response systems, and more. In sum, it enables partners to transform the contact center by making it available to anyone, anywhere, on any device, using a variety of different communication modes, all quickly and accurately.

Dialogflow can analyze multiple types of input from your customers, including text or audio inputs (like from a phone or voice recording). It can also respond to customers in a couple of ways, either through text or with synthetic speech.

Customers looking to transform their contact center experience can work with our first group of partners to earn this Specialization: Solstice Consulting (DBA Kin + Carta U.S.), Teksystems Global Services, IBM, Quantiphi, and yosh.ai (Shopai Spółka Z Ograniczoną Odpowiedzialnością in Poland).

If you’re a customer looking for a partner with a particular Specialization, we invite you to search through our Partner Directory.

If you’re a partner who wants to learn more about how to earn Specializations, check out everything you need to know—including certification requirements, Customer Success Stories, and more—on the Partner Advantage portal. Partners can also schedule an optional pre-assessment with ISSI (for a fee) before applying for a Specialization by emailing googlespecadmin@issi-inc.com.

*IDC, 2023 Data and AI Trends Report, February 2023.
Source: Google Cloud Platform

What would you build with $500 in Google Cloud credits included with Innovators Plus?

Imagine you had $500 in Google Cloud credits to build whatever you want. That’s what you get when you start an Innovators Plus subscription, along with a range of other benefits, including access to the entire catalog of on-demand training on Google Cloud Skills Boost, a certification exam voucher, invite-only quarterly technical briefings, and live-learning events. It’s the best of Google Cloud for anyone looking to skill up their expertise for in-demand cloud roles. I’ll get into detail on that below.

I asked our community what they would use the $500 Google Cloud credits for, and they came back with some really great ideas. Here are some that I wanted to share:

- Build your own Mastodon server on Google Cloud Platform (@lukwam, @ehienabs). This is a hot topic these days, and picking the right server involves a variety of choices and ultimately comes down to the experience you want for your community. Justin Ribeiro talks about how to do that on Google Cloud Platform in this article.
- Use Vision AI to identify beer preferences and choices (@Pistol_Peter_D). This is a cool idea in which you could take a picture of a refrigerator door in a store, for example. The photo would then be used to recommend a beer based on your personal preferences, ratings, and other details. Check out Vision AI here to see how it could help build out that idea.
- Outdoor activity map tracker app with journaling activities enabled (@rmcsqrd). A great idea for anyone who lives an active lifestyle! Consider using Google Maps under the hood for this.
- Indulge your domain name addiction (@lukeschlangen). Are you into buying and selling interesting domain names? Here is our Cloud Domains documentation for registration, transfer, and management of domains with Google Cloud to help you with that.
- Invent a tool that picks green regions, helping you balance latency and emissions to choose the most sustainable Google Cloud region for your app (@taylorkstacey). Check out this Google Cloud region picker tool that considers carbon footprint, price, and latency.
- Build your skillset by using the Cloud Resume Challenge to do a self-guided tour of Google Cloud Platform (@billblum). Google Cloud Skills Boost has loads of resume-building, hands-on labs that give you access to the Google Cloud Platform.

Watch this 60-second YouTube short where I talk about these ideas some more.

OK, so you have the vision. How do you get the credits and get started building?

Explore Innovators Plus today and make the most of great savings

Innovators Plus is an annual subscription for technical practitioners and developers, offering extensive additional training benefits to grow your career in cloud with new knowledge and skills. For $299/year, Innovators Plus offers you benefits with up to 80% savings off the full retail value of this package. Innovators Plus includes:

- $500 of Google Cloud credits – what will you build?
- BONUS! An extra $500 credit after the first certification earned each year
- Access to the entire Google Cloud Skills Boost catalog of extensive on-demand training: over 700 courses, labs, certification preparation, and learning paths to help grow your career
- Ready to get that Google Cloud certification? You’ll get a voucher to help you reach for that goal
- Special access to Google Cloud experts, executives, and learning events throughout the year
- Join us at technical briefings and private events
- 1:1 consultations with Google Cloud experts

Learn more about the benefits of Innovators Plus and see what you can build and learn in 2023!

*Innovators Plus requires you to use a Google Account and a Developer Profile. For customers in the EEA, the UK, and Switzerland, Innovators Plus is restricted to business or professional use.
Source: Google Cloud Platform

Building your own private knowledge graph on Google Cloud

A Knowledge Graph ingests data from multiple sources, extracts entities (e.g., people, organizations, places, or things), and establishes relationships among the entities (e.g., owner of, related to) with the help of common attributes such as surnames, addresses, and IDs. Entities form the nodes in the graph and the relationships are the edges or connections. This graph building is a valuable step for data analysts and software developers for establishing entity linking and data validation.

The term “Knowledge Graph” was first introduced by Google in 2012 as part of a new Search feature to provide users with answer summaries based on previously collected data from other top results and sources.

Advantages of a Knowledge Graph

Building a Knowledge Graph for your data has multiple benefits:

- Clustering text together that is identified as one single entity, like “Da Vinci,” “Leonardo Da Vinci,” “L Da Vinci,” “Leonardo di ser Piero da Vinci,” etc.
- Attaching attributes and relationships to this particular entity, such as “painter of the Mona Lisa.”
- Grouping entities based on similarities, e.g., grouping Da Vinci with Michelangelo because both are famous artists from the late 15th century.

It also provides a single source of truth that helps users discover hidden patterns and connections between entities. These linkages would have been more challenging and computationally intensive to identify using traditional relational databases.

Knowledge Graphs are widely deployed for various use cases, including but not limited to:

- Supply chain: mapping out suppliers, product parts, shipping, etc.
- Lending: connecting real estate agents, borrowers, insurers, etc.
- Know your customer: anti-money laundering, identity verification, etc.

Deploying on Google Cloud

Google Cloud has introduced two new services (both in Preview as of today):

- The Entity Reconciliation API lets customers build their own private Knowledge Graph with data stored in BigQuery.
- The Google Knowledge Graph Search API lets customers search for more information about their entities from the Google Knowledge Graph.

To illustrate the new solutions, let’s explore how to build a private knowledge graph using the Entity Reconciliation API and use the generated ID to query the Google Knowledge Graph Search API. We’ll use the sample data from zoominfo.com for retail companies available on Google Cloud Marketplace (link 1, link 2).

To start, enable the Enterprise Knowledge Graph API and then navigate to Enterprise Knowledge Graph from the Google Cloud console. The Entity Reconciliation API can reconcile tabular records of organization, local business, and person entities in just a few clicks. Three simple steps are involved:

1. Identify the data sources in BigQuery that need to be reconciled and create a schema mapping file for each source.
2. Configure and kick off a reconciliation job through our console or API.
3. Review the results after job completion.

Step 1

For each job and data source, create a schema mapping file to inform how Enterprise Knowledge Graph ingests the data and maps it to a common ontology using schema.org. This mapping file will be stored in a bucket in Google Cloud Storage. For the purposes of this demo, I am choosing the organization entity type and passing in the database schema that I have for my BigQuery table.
Note: always refer to our documentation for the latest schema mapping syntax.

```
prefixes:
  ekg: http://cloud.google.com/ekg/0.0.1#
  schema: https://schema.org/

mappings:
  organization:
    sources:
      - [yourprojectid:yourdataset.yourtable~bigquery]
    s: ekg:company_$(id_column_from_table)
    po:
      - [a, schema:Organization]
      - [schema:name, $(name_column_from_table)]
      - [schema:streetAddress, $(address_column_from_table)]
      - [schema:postalCode, $(ZIP_column_from_table)]
      - [schema:addressCountry, $(country_column_from_table)]
      - [schema:addressLocality, $(city_column_from_table)]
      - [schema:addressRegion, $(state_column_from_table)]
      - [ekg:recon.source_name, (chosen_source_name)]
      - [ekg:recon.source_key, $(id_column_from_table)]
```

Step 2

The console page shows the list of existing entity reconciliation jobs available in the project. Create a new job by clicking the “Run A Job” button in the action bar, then select an entity type for entity reconciliation. Add one or more BigQuery data sources and specify a BigQuery dataset destination where EKG will create new tables with unique names under the destination dataset. To keep the generated cluster IDs constant across different runs, advanced settings like “previous BigQuery result table” are available. Click “DONE” to create the job.

Step 3

After the job completes, navigate to the output BigQuery table, then use a simple join query similar to the one below to review the output:

```
SELECT *
FROM `<dataset>.clusters_14002307131693260818` AS RS
JOIN `<dataset>.retail_companies` AS SRC
  ON RS.source_key = SRC.COMPANY_ID
ORDER BY cluster_id;
```

This query joins the output table with the input table(s) of our Entity Reconciliation API and orders by cluster ID. Upon investigation, we can see that two entities are grouped into one cluster. The confidence score indicates how likely it is that these entities belong to this group. Last but not least, the cloud_kg_mid column returns the linked Google Cloud Knowledge Graph machine ID, which can be used with the Google Knowledge Graph Search API.

Querying the Google Knowledge Graph Search API with this machine ID (for example, via a cURL request) returns a response that contains a list of entities, presented in JSON-LD format and compatible with schema.org schemas, with limited external extensions. For more information, kindly visit our documentation.

Special thanks to Lewis Liu, Product Manager, and Holt Skinner, Developer Advocate, for the valuable feedback on this content.
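Since the cURL request itself is not reproduced above, here is a hedged, illustrative sketch of an equivalent lookup using the standalone public Knowledge Graph Search API endpoint (kgsearch.googleapis.com), which accepts machine IDs. The Enterprise Knowledge Graph variant described in this post may expose a different endpoint, and the API key and machine ID below are placeholders, so treat this only as an illustration:

```python
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_API_KEY"   # placeholder API key
MACHINE_ID = "/m/0dl567"   # placeholder machine ID (e.g., from the cloud_kg_mid column)

params = urllib.parse.urlencode({
    "ids": MACHINE_ID,
    "limit": 1,
    "key": API_KEY,
})
url = f"https://kgsearch.googleapis.com/v1/entities:search?{params}"

with urllib.request.urlopen(url) as response:
    data = json.load(response)

# Each element is a JSON-LD entity description compatible with schema.org.
for element in data.get("itemListElement", []):
    result = element["result"]
    print(result.get("name"), "-", result.get("description"))
```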
Source: Google Cloud Platform

Introducing new cloud services and pricing for ultimate flexibility

As the saying goes, “it’s hard to make predictions, especially about the future.” Some organizations find it challenging to predict what cloud resources they’ll need in the months or years ahead. Every organization is on its own unique cloud journey. To help, we’re developing new ways for customers to consume and pay for Google Cloud services. We’re doing this by removing barriers to entry, aligning cost to consumption, and providing contractual and product flexibility. Read on to learn how we’re rolling out several new go-to-market programs across these key areas to help our customers purchase and consume Google Cloud services more easily.

Removing barriers to entry with Google Cloud Flex Agreements

Many customers choose multi-year commitments because they provide better line-of-sight into IT spend and budgeting. However, these commitments can create difficulty for those who don’t have clear visibility into their future cloud consumption needs. That’s why today we’re launching Flex Agreements, which enable customers to migrate their workloads to the cloud with no up-front commitments. As part of this new licensing option, Google Cloud customers still get access to unique incentives, such as monthly spend discounts1, committed use discounts, cloud credits, and access to professional services, based on monthly spend and workloads migrated to Google Cloud.

Flex Agreements are just one example of how we are removing barriers to help customers start using Google Cloud. In 2022, we launched the Innovators Plus annual subscription, which gives developers a curated toolkit to accelerate their expertise, including access to live and on-demand training through Google Cloud Skills Boost, Google Cloud credits, and more. We also recently expanded trials for Google Cloud products. For example, the new Spanner free trial instance is good for 90 days, allowing developers to create Google Standard SQL or PostgreSQL databases, explore Spanner capabilities, and prototype applications—with no commitment or contract needed.

Contractual and feature flexibility

Contractual flexibility has always been one of our core principles. Committed Use Discounts (CUDs), for example, provide discounted prices in exchange for a commitment to use a minimum level of resources for a specified term. Last year, we introduced Flexible CUDs, spend-based commitments that offer predictable and simple flat-rate discounts that apply across multiple virtual machine families and regions.

In addition to contractual flexibility, our customers also need the flexibility to choose features and functionality based on their stages of cloud adoption and the complexity of their business requirements. Therefore, over the next few quarters, we will launch new product pricing editions—Standard, Enterprise, and Enterprise Plus—in parts of our cloud portfolio. This new commercial packaging model will help give customers more choice and flexibility to optimize their cloud spend.

For customers running workloads such as those in regulated industries like banking and the public sector, the higher-end Enterprise Plus tier will offer compute, storage, networking, and analytics services with high availability, multi-region support, regional failover and disaster recovery, advanced security, and a broad range of regulatory compliance support. The Enterprise pricing tier will include a broad range of features designed for customers with workloads that demand a high level of scalability, flexibility, and reliability.
The Standard pricing tier will offer cost-efficient and easy-to-use managed services that include all essential capabilities, such as autoscaling, to meet customers’ core workload requirements.

Align costs to consumption with autoscaling

At Google Cloud, a core requirement for the products we build is providing customers industry-leading capabilities to automatically scale (autoscale) services up and down to match capacity with real-time demand. Autoscaling improves uptime, reduces infrastructure costs, and removes the operational burden of managing resources.

Many Google Cloud products include autoscaling capabilities to help customers manage unplanned variations in demand. For example, Dataflow vertical and horizontal autoscaling, in combination with granular adaptive resource configuration (aka “right-fitting”), has resulted in up to 50% savings in infrastructure costs for streaming by automatically choosing the right number of instances required to run jobs and dynamically re-allocating more or fewer instances during the runtime of jobs. Bigtable also provides native autoscaling capabilities, and Spanner’s autoscaler is an open-source tool that works across regional and multi-regional Spanner deployments. Similarly, we added multiple features to GKE, such as Cluster Autoscaler, Horizontal Pod Autoscaling, Vertical Pod Autoscaling, and Node Auto-Provisioning, for elasticity and cost efficiency.

For L.L.Bean, the ability to quickly scale capacity to meet changing usage patterns (e.g., during the holidays), as well as to rapidly perform load tests to test capacity, is “night and day” with Google Cloud compared to L.L.Bean’s legacy on-premises IT system.

“We won’t have to pay for peak capacity to have it available during peak shopping times. We just scale capacity up or down as needed.” — Randy Dyer, Enterprise Architect, L.L.Bean

We are now taking these capabilities to the next level by enabling autoscaling in BigQuery at a more granular level so you never pay more than what you use. This allows you to provision additional capacity in smaller increments, so you never overprovision and overpay for underutilized capacity. BigQuery customers can now try the new BigQuery autoscaler (currently in public preview) in their Google Cloud console.

A commitment to flexibility and choice

At Google Cloud, we remain deeply committed to the success of our customers and partners, and we are uniquely positioned to help organizations transform their business. By providing you with more flexibility and choice in how to purchase our products, we are empowering you to be more efficient and resilient.

Join the Google Data Cloud & AI Summit to hear the latest announcements around innovations in Google Data Cloud for databases, data analytics, business intelligence, and AI. Gain expert insights, new solutions, and strategies that can help you transform customer experiences with modern apps, boost revenue, and reduce costs.

1. Not available for customers buying through Partner Advantage.
Source: Google Cloud Platform