Hey Google, show me the future of retail

Today we’re hosting our Retail & Consumer Goods Summit, a digital event dedicated to helping leading retailers and brands digitally transform their business. For me, this is a personally exciting moment, as I see tremendous opportunities for those companies that choose to focus on their customers and leverage technology to elevate experiences.

Our event includes breakout sessions to help retailers and brands become customer centric, embrace the digital moment, and transform their operations. Some of my favorite sessions include:

- Why Search Abandonment is the metric that matters, highlighting Retail Search and featuring a conversation with Macy’s
- Driving Consumer Closeness in a Privacy-Centric World, discussing how retailers and brands can create a successful first-party data strategy, featuring a conversation with P&G
- The Modern Store: 7 Innovation Hotspots, sharing how retailers and brands can approach store transformation to unlock the most value from technology, featuring a conversation with The Home Depot

I’ll be speaking in our Retail Spotlight session, discussing the current retail landscape and our industry approach, followed by conversations with Albert Bertilsson, Head of Engineering – Edge at IKEA Retail (Ingka Group), and Neelima Sharma, Senior Vice President, Technology Ecommerce, Marketing and Merchandising at Lowe’s. Let me share a bit more about the topics we’ll discuss in that session.

In retail specifically, digital-first shopping journeys are blurring the lines between the physical and digital brand experience. Shoppers want to know what’s available before they visit your stores, and they expect fulfillment options like curbside pickup. We see this when tracking trends for interest in curbside pickup or in-stock items.

This has left many retailers asking how they can get smarter with their data, tackle the $300 billion problem of “search abandonment,” move faster to create new customer experiences, and do a better job of connecting their employees and customers, all with confidence.

Our team has been spending time thinking about how we can rise and succeed in this new era together. We continue to focus on areas where we can bring the best of our capabilities to our retail customers around the world, and on ways we can bring the best of what Google has to offer through cloud integrations.

Our goal is to help retailers become customer-centric and data-driven, capture digital and omnichannel revenue growth, create the modern store, and drive operational improvement. Let’s dig into each of these strategic pillars in a bit more detail.

Become customer centric and data driven

Customers today expect experiences that are timely, targeted, and tailored to them and their needs, and they reject experiences that can’t deliver. Data modeling, legacy technology, and siloed systems often prevent retailers from providing that level of personalized experience. At Google Cloud, we work with global retailers and our ecosystem partners to activate first-party data and bring value from it, particularly in the field of customer data platforms (CDPs). This includes integrations from Google Cloud, such as our business intelligence platform Looker, with other popular platforms to power one source of customer data throughout the organization. We also help retailers modernize their data warehouse, with Looker gathering business intelligence across the organization. This is important not just for consumer data, but for inventory, supply chain, and store operations as well.
Capture digital and omnichannel growth

We power some of the largest e-commerce sites in the world, helping them scale for Black Friday, Cyber Monday, and other holiday events. While scale is critically important, it’s also important to consider the quality of the online experience. How do your customers find products? How can you help deliver seamless online and omnichannel experiences?

To help, we’re building product discovery solutions that bring together the best of our technologies to help retailers drive engagement with their consumers. Retail Search, for example, gives retailers the ability to provide Google-quality search on their own digital properties – search that is customizable for their unique business needs and built upon Google’s advanced understanding of user intent and context.

The imperative is clear. Recent research found that retailers lose more than $300 billion every year, in the US alone, to search abandonment – when purchase intent is not converted into a sale due to bad search results. Today, we announced that Retail Search is available to a larger set of retailers. If you are interested in learning more about Retail Search, you can contact your sales representative for additional details.

Create the modern store

With the rise of buying trends like curbside pickup and proximity-based search, our Google Maps Platform team is working on new products and features to help raise inventory awareness for your shoppers. We want to make it easier for them to understand what’s available to purchase in their channel of choice.

With Product Locator, each product page connects customers with the information they need for local pickup and delivery options. This ensures customers are aware of pickup and delivery options throughout the buying journey, not just at checkout. Awareness of local inventory can boost a wide range of key metrics for your business. Shopify recently shared that shoppers who opt for local pickup over delivery have a 13% higher conversion rate, and that 45% of local pickup customers make an additional purchase upon arrival. This is just one quick example of how our Google Maps Platform team can improve experiences for your shoppers.

Operational improvement

It can be challenging to operate in a world and at a time when consumer behavior and supply chains are so disrupted and volatile, and when entire retail teams had to go remote during the pandemic and beyond. We’re working with retailers to leverage artificial intelligence (AI) to improve the consumer experience through chatbots and conversational commerce that solve problems for customers from anywhere. You can learn more about these offerings in our Conversational Commerce with Google breakout session, featuring Albertsons.

As the need for digital transformation continues to accelerate, Google Cloud is helping retailers stay ahead of the curve with solutions for digital and omnichannel growth, data-driven and customer-focused experiences, and operational improvement.
For every era of cloud technologies, from the past into the future, Google Cloud is committed to providing solutions to retailers. Read more about our solutions for retail, and check out additional sessions at our Retail & Consumer Goods Summit, including the CPG Industry Spotlight session How To Grow Brands in Times of Rapid Change, featuring L’Oréal.

With software supply chain security, developers have a big role to play

When it comes to security headlines, 2021 has unfortunately been one for the record books. Colonial Pipeline, which supplies nearly half the United States East Coast’s gasoline, was the victim of a ransomware attack that forced it to take down its systems. Several other high-profile breaches, of Kaseya, SolarWinds, Codecov, and others, gained global attention. To strengthen the U.S.’s cybersecurity posture, President Biden signed an executive order mandating changes in how companies that do business with the federal government secure their software.

While traditional security efforts have centered around securing the perimeter, the responsibility for security is increasingly falling to developers. Specifically, a key element of the executive order is focused on enhancing the security of the enterprise software supply chain. Securing the software supply chain entails knowing exactly what components are being used in your software products: everything that impacts your code as it goes from development to production. This includes having visibility into even the code you didn’t write, like open-source or third-party dependencies or any other artifacts, and being able to prove their provenance. In a number of the above-mentioned events, attackers were able to exploit vulnerabilities in the software supply chain, for example by leveraging a downstream vulnerability that had gone unnoticed, injecting bad code, or using leaked credentials to access a CI/CD pipeline. These are all things that can be prevented by implementing strong software supply chain best practices.

At Google, securing the software supply chain is something to which we’ve given a lot of thought, for example by working with organizations like the National Institute of Standards and Technology (NIST) and the National Security Council (NSC) to develop guidelines. In the next couple of months, after consultation with the federal government, various private sector companies, and academia, we plan to publish these guidelines together with NIST.

In the meantime, we’re hosting Building trust in your software supply chain on July 29, an event designed to explore this topic in depth. To get us started, I’ll be talking with a panel of industry experts:

- Phil Venables, Chief Information Security Officer, Google Cloud, will talk about the White House executive order, what it means to enterprises, and how Google can help you follow it.
- Eric Brewer, VP, Google Fellow, Google Cloud, will talk about some recent attacks, how they could have been avoided, and the role of open-source software and standards bodies in the future of cybersecurity.
- Aparna Sinha, Director, Product Management, Google Cloud, will tell you about Google Cloud tools that leverage software supply chain best practices, and that you can use to make your builds more secure and compliant, simplify how you manage your open-source dependencies, and make policy management more scalable across your deployment.
- Shane Lawrence, Staff Infrastructure Security Engineer, Shopify, will share how his company approaches security, and how that focus actually helps increase development velocity.

In a series of breakout sessions, you’ll also learn about software supply chain best practices and how to implement them in your own organization. I’m looking forward to seeing you all. If you haven’t already done so, register here.

Design considerations for SAP data modeling in BigQuery

Over the past few years, many organizations have experienced the benefits of migrating their SAP solutions to Google Cloud. But this migration can do more than reduce IT maintenance costs and make data more secure. By leveraging BigQuery, SAP customers can complement their SAP investments and gain fresh insights by consolidating enterprise data and easily extending it with powerful datasets and machine learning from Google.

BigQuery is a leading cloud data warehouse: fully managed and serverless, it allows for massive scale, supporting petabyte-scale queries at super-fast speeds. It can easily combine SAP data with additional data sources, such as Google Analytics or Salesforce, and its built-in machine learning lets users operationalize machine learning models using standard SQL, all at a comparatively low cost.

If your SAP-powered organization is looking to supercharge its analytics with the strength of BigQuery, read on for considerations and recommendations for modeling with SAP data. These guidelines are based on our real-world implementation experience with customers and can serve as a roadmap to the analytics capabilities your business needs.

Considerations for data replication

Like most technology journeys, this one should start with a business objective. Keeping your intended business value and goals in mind is critical to making the right decisions in the early steps of the design process.

When it comes to replicating the data from an SAP system into BigQuery, there are multiple ways to do it successfully. Decide which method will work best for your organization by answering these questions:

- Does your business need real-time data?
- Will you need to time travel into past data?
- Which external datasets will you need to join with the replicated data?
- Are the source structures or business logic likely to change?
- Will you be migrating the SAP source systems any time soon? For instance, will you be moving from SAP ECC to SAP S/4HANA?

You’ll also need to determine whether replication should be done on a table-by-table basis or whether your team can source from pre-built logic. This decision, along with other considerations such as licensing, will influence which replication tool you should use.

Replicating on a table-by-table basis

Replicating tables, especially standard tables in their raw form, allows sources to be reused and ensures more stability of the source structure and functional output. For example, the SAP table for sales order headers (VBAK) is very unlikely to change its structure across different versions of SAP, and the logic that writes to it is also unlikely to change in a way that affects a replicated table. Something else to consider: reconciliation between the source system and the landing table in BigQuery is linear when comparing raw tables, which helps avoid issues in consolidation exercises during critical business processes, such as period-end closing.

Since replicated tables aren’t aggregated or subject to process-specific data transformation, the same replicated columns can be reused in different BigQuery views. You can, for instance, replicate the MARA table (the material master) once and use it in as many models as needed, as in the sketch below.
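Here is a minimal sketch of that reuse in standard SQL. The SAP field names (MATNR, MTART, MATKL, VBELN, POSNR) are the usual ones for these tables, but the project, dataset, and view names are illustrative assumptions, not a prescribed layout:

-- Sketch: one replicated copy of MARA backing two different models.
-- `project.sap_raw` is an assumed landing dataset for replicated tables.
CREATE OR REPLACE VIEW `project.sap_models.sales_items_enriched` AS
SELECT
  vbap.VBELN AS sales_document,
  vbap.POSNR AS item_number,
  vbap.MATNR AS material,
  mara.MTART AS material_type,
  mara.MATKL AS material_group
FROM `project.sap_raw.VBAP` AS vbap
LEFT JOIN `project.sap_raw.MARA` AS mara
  ON vbap.MATNR = mara.MATNR;

-- The same raw MARA table can back a simple material model as well.
CREATE OR REPLACE VIEW `project.sap_models.materials` AS
SELECT MATNR, MTART, MATKL
FROM `project.sap_raw.MARA`;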
Replicating pre-built logic

If you replicate pre-built models, such as those from SAP extractors or CDS views, you don’t need to build the logic in BigQuery, since you’re using existing logic. Some of these extraction objects have embedded delta mechanisms, which may complement a replication tool that can’t handle deltas. This will save initial development time, but it can also lead to challenges if you create new columns, or if customizations or upgrades change the logic behind the extraction. It’s also important to note that different extraction processes may transform and load the same source columns multiple times, which creates redundancy in BigQuery and can lead to higher maintenance needs and costs. However, replicating pre-built models may still be a good choice, since doing so can be especially useful for logic that tends to be immutable, such as flattening a hierarchy, or for logic that is highly complex.

How you approach replication will also depend on your long-term plans and other key factors, for example the availability (and curiosity) of your developers, and the time or effort they can put into applying their SQL knowledge to a new data warehouse. With either replication approach, bear in mind when designing your replication process that BigQuery is meant to be an append-always database, so post-processing of data changes will be required in both cases.

Processing data changes

The replication tool you choose will also determine how data changes are captured (known as CDC, or change data capture). If the replication tool allows for it (as SAP SLT does, for example), the same patterns described in the CDC with BigQuery documentation also apply to SAP data. Because some data, like transactions, is known to be less static than other data (e.g., master data), you need to decide what should be scanned in real time, what will require immediate consistency, and what can be processed in batches to manage costs. This decision will be based on the reporting needs of the business.

Consider the SAP table BUT000, which contains our example master data for business partners, and to which we have replicated changes from an SAP ERP system. In an append-always replication in BigQuery, all updates are received as new records. For example, deleting a record in the source will be represented as a new record in BigQuery with a deletion flag. This applies whether the records are coming from raw tables like BUT000 itself or from pre-aggregated data, as from a BW extractor or a CDS view.

Let’s take a closer look at data coming from the partners “LUCIA” and “RIZ” in particular. The operation flag tells us whether the new record in BigQuery is an insert (I), update (U), or deletion (D), while the timestamps help us identify the latest version of our business partner. The first sketch below shows how to find the latest updated record for the partners LUCIA and RIZ.

After identifying the stale records for the “LUCIA” and “RIZ” business partners, we can proceed to delete all stale records for “LUCIA” if we do not want to retain the history. In this example, we use a second table to which the same replication has been done, for the purpose of comparison, to check that all stale records have been deleted for the selection made and that only the latest updated records were kept. It is also useful to retrieve the stale records for the “LUCIA” partner, that is, all of the records except the latest update, before moving forward with the deletion. The second sketch below covers both steps.
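As a first sketch, assume the replica lands in a table with the BUT000 key field PARTNER plus tool-specific CDC columns; the table name and the operation_flag and recorded_at columns are illustrative stand-ins for whatever your replication tool writes. The latest-record lookup could then be expressed like this:

-- Sketch: find the latest version of each business partner in an
-- append-always replica of BUT000, where every source change is a new row.
SELECT PARTNER, operation_flag, recorded_at
FROM `project.sap_raw.BUT000_replica`
WHERE PARTNER IN ('LUCIA', 'RIZ')
QUALIFY ROW_NUMBER() OVER (
  PARTITION BY PARTNER
  ORDER BY recorded_at DESC
) = 1;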
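The second sketch inspects the stale rows and then removes them. For brevity it operates on a single illustrative table rather than the separate comparison copy described above, under the same assumed schema:

-- Sketch: list every 'LUCIA' record except the latest update ...
SELECT PARTNER, operation_flag, recorded_at
FROM `project.sap_raw.BUT000_replica`
WHERE PARTNER = 'LUCIA'
QUALIFY ROW_NUMBER() OVER (
  PARTITION BY PARTNER
  ORDER BY recorded_at DESC
) > 1;

-- ... then delete those stale rows if the history is not needed.
DELETE FROM `project.sap_raw.BUT000_replica`
WHERE PARTNER = 'LUCIA'
  AND recorded_at < (
    SELECT MAX(recorded_at)
    FROM `project.sap_raw.BUT000_replica`
    WHERE PARTNER = 'LUCIA'
  );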
Partitioning and clustering

To limit the number of records scanned in a query, save on cost, and achieve the best performance possible, you’ll need to take two important steps: determine partitions and create clusters.

Partitioning

A partitioned table is one that’s divided into segments, called partitions, which make it easier to manage and query your data. Dividing a large table into smaller partitions improves query performance and controls costs because it reduces the number of bytes read by a query. You can partition BigQuery tables by:

- Time-unit column: Tables are partitioned based on a TIMESTAMP, DATE, or DATETIME column in the table.
- Ingestion time: Tables are partitioned based on the timestamp recorded when BigQuery ingested the data.
- Integer range: Tables are partitioned based on an integer column.

Partitions are enabled when the table is created, as in the sketch at the end of this section. A great tip is to always include the partition filter in your queries.

Clustering

Clustering can be created on top of partitioned tables by applying the fields that are likely to be used for filtering. When you create a clustered table in BigQuery, the table data is automatically organized based on the contents of one or more of the columns in the table’s schema. The columns you specify are then used to colocate related data.

Clustering can improve the performance of certain query types, for example queries that use filter clauses or that aggregate data. It makes a lot of sense to use clusters for large tables such as ACDOCA, the table for accounting documents in SAP S/4HANA. In this case, the timestamp could be used for partitioning, and common filtering fields such as the ledger, company code, and fiscal year could be used to define the clusters. A great feature is that BigQuery will also periodically recluster the data automatically.

Materialized views

In BigQuery, materialized views are precomputed views that periodically cache the results of a query for better performance and efficiency. BigQuery uses precomputed results from materialized views and, whenever possible, reads only the delta changes from the base table to compute up-to-date results quickly. Materialized views can be queried directly or can be used by the BigQuery optimizer to process queries to the base table.

Queries that use materialized views are generally completed faster and consume fewer resources than queries that retrieve the same data only from the base table. If workload performance is an issue, materialized views can significantly improve the performance of workloads that have common and repeated queries. While materialized views currently only support single tables, they are very useful for common and frequent aggregations like stock levels or order fulfillment.

Further tips on performance optimization while creating select statements can be found in the documentation for optimizing query computation.
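Putting partitioning and clustering together for the ACDOCA example above, the table definition could look like the sketch below. RLDNR (ledger), RBUKRS (company code), and GJAHR (fiscal year) are the standard ACDOCA fields just mentioned; the dataset name, the trimmed column list, and the recorded_at timestamp are assumptions for illustration:

-- Sketch: a partitioned, clustered landing table for ACDOCA.
CREATE TABLE `project.sap_raw.ACDOCA_replica`
(
  RLDNR STRING,          -- ledger
  RBUKRS STRING,         -- company code
  GJAHR STRING,          -- fiscal year
  BELNR STRING,          -- document number
  recorded_at TIMESTAMP  -- replication timestamp
)
PARTITION BY DATE(recorded_at)
CLUSTER BY RLDNR, RBUKRS, GJAHR;

-- Always include the partition filter so BigQuery can prune partitions.
SELECT RBUKRS, COUNT(*) AS document_rows
FROM `project.sap_raw.ACDOCA_replica`
WHERE DATE(recorded_at) >= '2021-07-01'
  AND RBUKRS = '1000'
GROUP BY RBUKRS;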
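A materialized view for one of those frequent aggregations could be as simple as the following sketch, again with an assumed source table and quantity column (MATNR and WERKS are the usual SAP material and plant fields):

-- Sketch: cache a common stock-level aggregation as a materialized view.
CREATE MATERIALIZED VIEW `project.sap_models.stock_levels` AS
SELECT
  MATNR,                       -- material
  WERKS,                       -- plant
  SUM(stock_qty) AS total_stock
FROM `project.sap_raw.inventory_levels`
GROUP BY MATNR, WERKS;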
Deployment pipeline and security

For most of the work you’ll do in BigQuery, you’ll normally have at least two delivery pipelines running: one for the actual objects in BigQuery, and the other to keep the data staged, transformed, and updated as intended within the change-data-capture flows. Note that you can use most existing tools for your Continuous Integration / Continuous Deployment (CI/CD) pipeline, one of the benefits of using an open system like BigQuery. But if your organization is new to CI/CD pipelines, this is a great opportunity to gradually gain experience. A good place to start is to read our guide for setting up a CI/CD pipeline for your data-processing workflow.

When it comes to access and security, most end users will only have access to the final version of the BigQuery views. While row- and column-level security can be applied, as in the SAP source system, separation of concerns can be taken to the next level by splitting your data across different Google Cloud projects and BigQuery datasets. While it’s easy to replicate data and structures across your datasets, it’s a good idea to define the requirements and naming conventions early in the design process so you set everything up properly from the start.

Start driving faster and more insightful analytics

The best piece of advice we can give you is this: try it yourself. Anyone with SQL knowledge can get started using the free BigQuery tier. New customers get $300 in free credits to spend on Google Cloud during the first 90 days. All customers get 10 GB of storage and up to 1 TB of queries per month, completely free of charge. In addition to discovering the massive processing capabilities, embedded machine learning, multiple integration tools, and cost benefits, you’ll soon discover how BigQuery can simplify your analytics tasks.

If you need additional assistance, our Google Cloud Professional Services Organization (PSO) and Customer Engineers will be happy to show you the best path forward for your organization. For anything else, contact us at cloud.google.com/contact.

A container story – Google Kubernetes Engine

Sam (sysadmin) and Erin (developer) work at “Mindful Containers”, an imaginary company that sells sleeping pods for mindful breaks. One day, Sam calls Erin because her application has crashed during deployment, even though it worked just fine on her workstation. They check logs, debug things, and eventually find version inconsistencies: the right dependencies were missing in production. Together, they perform a risky rollback. Later, they install the missing dependencies and hope nothing else breaks. Erin and Sam decide to fix the root problem once and for all using containers.

Why containers?

Containers are often compared with virtual machines (VMs). You might already be familiar with VMs: a guest operating system such as Linux or Windows runs on top of a host operating system with virtualized access to the underlying hardware. Like virtual machines, containers enable you to package your application together with libraries and other dependencies, providing isolated environments for running your software services. As you’ll see, however, the similarities end there, as containers offer a far more lightweight unit for developers and IT Ops teams to work with, bringing a myriad of benefits.

Instead of virtualizing the hardware stack as with the virtual machine approach, containers virtualize at the operating system level, with multiple containers running atop the OS kernel directly. This means that containers are far more lightweight: they share the OS kernel, start much faster, and use a fraction of the memory compared to booting an entire OS.

Containers help improve portability, shareability, deployment speed, reusability, and more. More importantly to Erin and Sam, containers made it possible to solve the “it worked on my machine” problem.

Why Kubernetes?

Now, it turns out that Sam is responsible for more developers than just Erin. He struggles with rolling out software:

- Will it work on all the machines? If it doesn’t work, then what?
- What happens if traffic spikes? (Sam decides to over-provision, just in case...)

With lots of developers now containerizing their apps, Sam needs a better way to orchestrate all the containers that developers ship. The solution: Kubernetes!

What is so cool about Kubernetes?

The Mindful Containers team had a bunch of servers, and used to decide manually what ran on each of them, based on what they knew would conflict if it were to run on the same machine. If they were lucky, they might have had some sort of scripted system for rolling out software, but it usually involved SSHing into each machine. Now, with containers and the isolation they provide, they can trust that in most cases any two applications can fairly share the resources of the same machine.

With Kubernetes, the team can introduce a control plane that makes decisions for them on where to run applications. And even better, it doesn’t just statically place them; it continually monitors the state of each machine and makes adjustments to ensure that what is happening is what they’ve actually specified. Kubernetes runs with a control plane, and on a number of nodes.
We install a piece of software called the kubelet on each node, which reports the state of the node back to the master. Here is how it works:

- The master controls the cluster.
- The worker nodes run pods.
- A pod holds a set of containers.
- Pods are bin-packed as efficiently as configuration and hardware allow.
- Controllers provide safeguards so that pods run according to specification (reconciliation loops).
- All components can be deployed in high-availability mode and spread across zones or data centers.

Kubernetes orchestrates containers across a fleet of machines, with support for:

- Automated deployment and replication of containers
- Online scale-in and scale-out of container clusters
- Load balancing over groups of containers
- Rolling upgrades of application containers
- Resiliency, with automated rescheduling of failed containers (i.e., self-healing of container instances)
- Controlled exposure of network ports to systems outside of the cluster

A few more things to know about Kubernetes:

- Instead of flying a plane, you program an autopilot: declare a desired state, and Kubernetes will make it true, and continue to keep it true.
- It was inspired by Google’s tools for running data centers efficiently.
- It has seen unprecedented community activity and is today one of the largest projects on GitHub. Google remains the top contributor.

The magic of Kubernetes starts happening when we don’t require a sysadmin to make the decisions. Instead, we enable a build and deployment pipeline. When a build succeeds, passes all tests, and is signed off, it can automatically be deployed to the cluster gradually, blue/green, or immediately.

Kubernetes the hard way

By far the single biggest obstacle to using Kubernetes (k8s) is learning how to install and manage your own cluster. Check out Kubernetes the Hard Way for a step-by-step guide to installing a k8s cluster. You have to think about tasks like:

- Choosing a cloud provider or bare metal
- Provisioning machines
- Picking an OS and container runtime
- Configuring networking (e.g., IP ranges for pods, SDNs, LBs)
- Setting up security (e.g., generating certs and configuring encryption)
- Starting up cluster services such as DNS, logging, and monitoring

Once you have all these pieces together, you can finally start to use k8s and deploy your first application. And you’re feeling great and happy, and k8s is awesome! But then, you have to roll out an update...