Let's migrate: why lifting and shifting is simply too easy to ignore

Enterprises across all verticals are choosing Google Cloud as their preferred partner for digital transformation. Taking a transformational approach to cloud adoption and building modern, cloud-native services brings the largest impact to an organization – in terms of business agility, return on investment, time to market and more. The cloud’s scale and flexibility enable an organization to build services that just wouldn’t have been possible in an on-premises data center. When our Professional Services teams engage with customers, we adopt a holistic approach to cloud migration, and generally examine the complete technology landscape in an organization before embarking on the cloud journey. We recommend that you focus your efforts on modernizing high-value workloads that create business-differentiating value, and our experience shows that this is often easier than you think. This approach results either in greenfield software development or in a “modernization factory”; we’ve described these outcomes in a previous blog post.

However, this sort of transformation, or even incremental modernization of workloads to take advantage of platform services like Google Kubernetes Engine and Cloud SQL, takes time and effort. That effort may not be justified for legacy workloads. We also often encounter customers who have a strong desire to modernize their applications but can’t, because of one or more of the following challenges:

- Scaling infrastructure on-premises can be hard, but you might not have the time or resources to modernize the application. Moving the applications to the cloud as a first step can give you flexibility and breathing room while you begin the modernization.
- Off-the-shelf applications can’t be rearchitected, so moving them to the cloud can allow you to reduce operational toil.
- You need less costly, more scalable backup and recovery. Transitioning backups from on-premises to the cloud is a common use case in all but the most heavily regulated industries, or for applications with the tightest RPOs/RTOs (recovery point objectives/recovery time objectives).

Whatever your reason for not modernizing workloads for the cloud, it might then seem an unnecessary hurdle to move these applications to the cloud as-is: surely this is just shifting from hardware you own to hardware you rent? In fact, this isn’t the case at all. There are many benefits to be gained in moving these legacy applications to the cloud:

- Using a migration factory approach to move applications as-is to the cloud can give you immediate financial benefits. In the absence of costly and time-consuming application changes, you can quickly realize savings on hardware and operations.
- The cloud offers easy access to specialized hardware, such as custom machine sizes for SAP workloads or GPUs for high-performance computing needs. This hardware is provided on demand and regularly upgraded, meaning you avoid costly purchases in your data centers.
- Legacy workloads can be managed separately from cloud-native workloads, using your existing tooling and operational processes. This means that security and compliance work in almost the same way you’re used to, giving you a simple stepping stone to modernization: start with what you have, and gradually adopt cloud-native tooling.

This ‘migration factory’ approach allows you to maximize the velocity of migrations, and gives you a ‘best of both worlds’ first step into the cloud.
You start with minimal change to your infrastructure, but can quickly benefit from Google Cloud capabilities that reduce cost and toil, allowing you to invest in the next step: modernizing your workloads. Let’s look at three categories of features in Google Cloud that bring you these benefits.

Active Assist

Google Cloud offers a series of features and tools, built on top of our deep AI capabilities, that work together to bring intelligence to your cloud environment. We call these services Active Assist. For example, you can automatically act on rightsizing recommendations to shut down or reduce the size of idle machines, disks, or even IP addresses to reduce costs. You will also receive recommendations to subscribe to committed use discounts for long-running resources. Alternatively, you can receive notifications and configure automated size increases and scale-up of VM groups for spikes in load, avoiding downtime or application performance issues. Similarly, you can configure auto-healing for failed instances based on health checks. (A short sketch of reading these recommendations programmatically appears after the next section.)

Meanwhile, Policy Analyzer highlights user and service account issues, showing outliers in access and allowing you to troubleshoot permissions. Likewise, IAM recommendations highlight unused or rarely used permissions that can be removed, with a simulator to preview the impact of any change. You’ll find these services and more across key Google Cloud services, and combined together in the Recommendation Hub.

Network intelligence

When hosting your workloads in Google Cloud, you share the same network infrastructure as Google’s own services, where we host billions of users of YouTube, Google Workspace and Search. This means you gain the benefits of global scale and proximity to your users; you also gain access to a series of tools that make a real difference to your legacy workloads. These network intelligence tools include the ability to visualize network traffic flows, network routing and latency across your Google Cloud resources, and your connectivity to on-premises infrastructure or elsewhere. You’re able to track topology changes and network health during the migration of workloads to Google Cloud. This is particularly relevant during migration, because it is simple to troubleshoot firewall issues or configuration that prevents your application components from talking to each other: connectivity tests allow you to diagnose issues and to preview the impact of pending configuration changes on network traffic before they’re made.

When planning a migration, you can extend your L2/L3 network into Google Cloud so you can seamlessly move virtual machines (VMs) without even changing IP addresses. This drastically reduces the testing burden, and with Migrate for Compute Engine, you can have VMs up and running in the cloud in minutes. We often find that on-premises networks adopt a perimeter security approach, with very little firewall control between machine instances. By moving machines to the cloud, you can benefit from network telemetry to understand traffic patterns: VPC Flow Logs can record network flows between VM instances, including those used as Kubernetes nodes, without adding latency or having any impact on the VMs themselves. Combined with IAM controls and instance tagging, this makes it easy to define firewall rules that segregate traffic and protect your applications. Meanwhile, Firewall Insights provides visibility into firewall usage, detecting configuration issues such as redundant rules and recommending updates to firewall rules to refine permissions.
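Returning to the Active Assist recommendations described above, here is a minimal sketch of reading them programmatically with the Recommender API via the google-cloud-recommender Python client. The project ID, zone and the choice of the machine-type (rightsizing) recommender are placeholder assumptions; adjust them for your environment and treat this as an illustration rather than a complete integration.

```python
# Minimal sketch: list VM rightsizing recommendations via the Recommender API.
# Assumes the google-cloud-recommender client library is installed
# (pip install google-cloud-recommender) and credentials are configured.
from google.cloud import recommender_v1

PROJECT_ID = "my-project"   # placeholder
ZONE = "us-central1-a"      # recommendations are listed per location/zone

client = recommender_v1.RecommenderClient()

# The machine-type recommender suggests resizing over- or under-provisioned VMs.
parent = (
    f"projects/{PROJECT_ID}/locations/{ZONE}"
    "/recommenders/google.compute.instance.MachineTypeRecommender"
)

for recommendation in client.list_recommendations(parent=parent):
    print(recommendation.name)
    print(f"  {recommendation.description}")
    print(f"  state: {recommendation.state_info.state.name}")
```

The same pattern applies to the other recommenders mentioned above, such as the idle-resource or IAM recommenders: swap the recommender ID in the parent path, and use the same client to mark recommendations as claimed or succeeded if you automate the response.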
VM Manager

Although large enterprises typically have asset management tooling and a process for patch management, these are often expensive tools from a multitude of vendors, designed to support a collection of operating systems and hardware platforms that has grown over time. Customers often describe to us the effort that maintaining their on-premises infrastructure requires, and we routinely discover VMs that haven’t been patched or upgraded in many years. To address this need, Google Cloud VM Manager is a suite of tools designed to automate the maintenance of large fleets of VMs hosted in Google Compute Engine. These tools include:

- Patch management: provides insights into the patch status of VM instances, both Windows and Linux, highlighting recommendations and automating the deployment of patches. You can create flexible patching schedules and observe patch status across your entire fleet. In combination with Google Cloud Monitoring, you’re able to troubleshoot any issues with patch management and detect and resolve them easily. (A brief sketch of triggering a patch job appears at the end of this post.)
- Configuration management: maintains consistent configuration across your VMs, complete with automated remediation features. You can deploy configuration or push software packages to machines using simple policies and recipes.
- Inventory management: collects operating system and software package information, and integrates with Cloud Asset Inventory to simplify the management of your complete cloud environment.

Based on our experience managing a fleet of Windows infrastructure within Google, we’ve also recently open-sourced our own Windows fleet management tooling, bringing a cloud-native approach to Windows imaging, Active Directory management and software package distribution and deployment.

Getting started

In combination, these features help customers moving applications from an on-premises data center to Google Cloud to significantly reduce the burden of infrastructure management, lower the cost of hosting infrastructure, and improve the security and reliability of their applications. As outlined earlier, we encourage this kind of migration as a first step towards broader transformation: through effort and cost reduction you’ll be able to take bolder steps towards that goal.

What’s the best way to get started on your migration journey? We recommend that you first document your long-term goals for cloud adoption and consider your current cloud maturity. We use the Google Cloud Adoption Framework to help determine whether your cloud migration needs to be tactical, strategic, or transformational, and to help you understand your future cloud operating model. Then, you should establish an initial landing zone ready to receive your apps running on VMs. Migrate for Compute Engine enables simple, frictionless, large-scale enterprise migrations of virtual machines to Google Compute Engine with minimal downtime and risk.

If you’re planning a large-scale migration, our Professional Services team can help you assess the benefits and build a migration plan, often at no cost. Reach out to your Google Cloud sales contact, fill out this quick form for more information, or sign up for a free discovery and assessment of your current IT landscape – a great way to get started!
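As an illustration of the patch management capability described above, here is a minimal sketch that starts a fleet-wide patch job in dry-run mode using the OS Config API, which backs VM Manager. It assumes the google-cloud-os-config Python client library; the project ID is a placeholder, and exact request fields may differ slightly between library versions, so treat this as a sketch rather than a definitive implementation.

```python
# Minimal sketch: kick off a dry-run patch job for every VM in a project
# using the OS Config (VM Manager) API.
# Assumes: pip install google-cloud-os-config, credentials configured, and
# the OS Config agent enabled on the target instances.
from google.cloud import osconfig_v1

PROJECT_ID = "my-project"  # placeholder

client = osconfig_v1.OsConfigServiceClient()

request = osconfig_v1.ExecutePatchJobRequest(
    parent=f"projects/{PROJECT_ID}",
    description="Dry-run patch job across the whole fleet",
    # Target every instance in the project; narrow this with zones,
    # labels or name prefixes for a real rollout.
    instance_filter=osconfig_v1.PatchInstanceFilter(all=True),
    dry_run=True,  # report what would be patched without changing anything
)

patch_job = client.execute_patch_job(request=request)
print(f"Started patch job {patch_job.name} (state: {patch_job.state.name})")
```

Dropping dry_run turns this into an actual patch run; recurring schedules are configured as patch deployments rather than one-off jobs.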
Source: Google Cloud Platform

Data protection in transit, in storage, and in use

In our first episode of the Cloud Security Podcast, we had the pleasure of talking to Nelly Porter, Group Product Manager for the Cloud Security team. In this interview, Anton, Tim, and Nelly examine a critical question about data security: how can we process extremely sensitive data in the cloud while also keeping it protected against insider access? It turns out this is easier than it sounds on Google Cloud.

Some customers using public cloud worry about their data in many different ways. They have all sorts of sensitive data, from healthcare records, to credit card numbers, to corporate secrets, and more. For some organizations, entrusting that data to a public cloud provider is seen as a risk. Others hold data that is extremely sensitive, or highly damaging if lost or stolen.

In the past, most companies would collect data, process it themselves, and do any transformation or aggregation on-premises. They knew who was using the data, how, and when. That made roles and responsibilities really clear. With the cloud, everything has changed. The storage and usage capabilities are much better, but it also moves some of the data management out of the company’s hands. Cloud security is a shared responsibility model: some of it is handled by the customer, some by the provider.

For example, let’s say you have gathered a bunch of customer behavior data, buying patterns and purchase history. You’ve got it all uploaded to Cloud Storage; it’s encrypted, and you can hold on to the keys (such as via Google Cloud EKM), so you are safe. This will work for many types of sensitive and regulated data. Right?

Next up, you start doing data analysis, maybe even training an AI model on your data. Now that you’re using the data, it’s no longer protected by the same encryption. You still get the advantage of reserved memory, but the data sits in memory unencrypted while it is being processed, which some clients consider unacceptable for certain use cases.

We solve this tricky problem with Confidential Computing, which lets you complete the cycle and keep the data protected in transit, in storage and in use. While it starts with CPUs, we’re also extending the service to include GPUs and accelerators, so your data enjoys protection wherever it goes. Confidential Computing becomes possible with the right CPU hardware, which allows encryption of data while it’s loaded and used. And because this is a hardware capability, nothing needs to change in your code to take advantage of it.

The alternative for most companies would be to handle and process such ultra-sensitive data on-premises only, which means missing out on the scale, functionality and reliability of public cloud infrastructure. With this improved cryptographic isolation, companies of all types can now use sensitive data across services and tools. The only downside is a slight increase in latency and cost.

Whether you’re handling highly regulated financial services data, sensitive pictures from your customers, or high-value intellectual property that needs protecting, check out Confidential Computing and hear more about how it works on this episode of the Cloud Security Podcast.
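To make the “nothing needs to change in your code” point concrete, here is a minimal sketch of creating a Confidential VM with the google-cloud-compute Python client: confidentiality is requested purely through instance configuration, and the workload inside the VM runs unmodified. The project, zone, machine type and image below are illustrative assumptions, and the exact client surface may vary between library versions.

```python
# Minimal sketch: create a Confidential VM (AMD SEV-based memory encryption).
# Assumes: pip install google-cloud-compute, credentials configured, and a
# Confidential VM-capable image and N2D machine type available in the zone.
from google.cloud import compute_v1

PROJECT_ID = "my-project"  # placeholder
ZONE = "us-central1-a"     # placeholder

instance = compute_v1.Instance()
instance.name = "confidential-demo"
# Confidential VMs run on AMD EPYC-based machine types such as N2D.
instance.machine_type = f"zones/{ZONE}/machineTypes/n2d-standard-2"

# This flag is what turns on memory encryption; application code is unchanged.
instance.confidential_instance_config = compute_v1.ConfidentialInstanceConfig(
    enable_confidential_compute=True
)
# Confidential VMs cannot live-migrate, so they must terminate on maintenance.
instance.scheduling = compute_v1.Scheduling(on_host_maintenance="TERMINATE")

boot_disk = compute_v1.AttachedDisk(
    boot=True,
    auto_delete=True,
    initialize_params=compute_v1.AttachedDiskInitializeParams(
        # Any Confidential VM-supported image works; Ubuntu 20.04 is an example.
        source_image="projects/ubuntu-os-cloud/global/images/family/ubuntu-2004-lts",
        disk_size_gb=10,
    ),
)
instance.disks = [boot_disk]
instance.network_interfaces = [
    compute_v1.NetworkInterface(network="global/networks/default")
]

client = compute_v1.InstancesClient()
operation = client.insert(project=PROJECT_ID, zone=ZONE, instance_resource=instance)
operation.result()  # wait for the create operation to finish
print(f"Created confidential VM {instance.name} in {ZONE}")
```

Everything running inside this VM behaves as it would on a regular instance; the memory encryption is handled by the CPU.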
Source: Google Cloud Platform

The new Google Cloud region in Melbourne is now open

We opened our Sydney cloud region in 2017 and, since then, we have continued to invest and expand across Australia and New Zealand to support the digital future of organizations of all sizes. In Australia, Google Cloud supports almost A$3.2 billion in annual gross benefits to businesses and consumers. This includes A$686 million to businesses using Google Workspace and Google Cloud Platform, another A$698 million to Google Cloud partners, and A$1.8 billion to consumers.1

For customers in Australia, New Zealand and across Asia Pacific, we’re excited to announce that our new Google Cloud region in Melbourne is now open. Designed to help businesses build highly available applications for their customers, the Melbourne region is our second Google Cloud region in Australia and the 11th to open in Asia Pacific. We’re celebrating the occasion with a digital event where the federal Minister for the Digital Economy, Jane Hume, and customers Australia Post, Trade Me, Bendigo and Adelaide Bank, the Australian Football League and Macquarie Bank will share their perspectives. Come join us!

A global network of regions

Melbourne joins the existing 26 Google Cloud regions connected via our high-performance network, helping customers better serve their users and customers around the globe. With this second region in Australia, customers benefit from improved business continuity planning, with the distributed, secure infrastructure needed to meet IT and business requirements for disaster recovery, all while maintaining data sovereignty in-country.

With this new region, Google Cloud customers operating in Australia and New Zealand will benefit from low latency and high performance for their cloud-based workloads and data. Designed for high availability, the region opens with three zones to protect against service disruptions, and offers a portfolio of key products, including Compute Engine, Google Kubernetes Engine, Cloud Bigtable, Cloud Spanner, and BigQuery. We also continue to invest in expanding connectivity across Australia and New Zealand by working with partners to establish subsea cables and new Dedicated Cloud Interconnect locations and points of presence in major cities including Sydney, Melbourne, Perth, Canberra, Brisbane and Auckland. Collectively, this will deliver geographically distributed and secure infrastructure to customers across Australia and New Zealand, which is especially important for those in regulated industries such as financial services and the public sector.

What customers and partners are saying

Navigating this past year has been a challenge for companies as they grapple with changing customer demands and greater economic uncertainty. Technology has played a critical role, and we’ve been fortunate to partner with and serve people, companies, and government institutions around the world to help them adapt. The Google Cloud region in Melbourne will help our customers adapt to new requirements, new opportunities and new ways of working.

“We moved to Google Cloud to improve the stability and resilience of our infrastructure and become more cloud-native as part of a digital transformation program that keeps the customer at the heart of our business. We welcome Google Cloud’s investment in ANZ and the opportunities the Google Cloud Melbourne region presents to improve Trade Me’s agility and performance.”
– Paolo Ragone, Chief Technology Officer, Trade Me

“We initially turned to Google Cloud to help us process parcels faster and gain deeper insights into our business and its processes. The relationship has continued to deliver benefits to our customers and our organization, and we welcome Google Cloud’s opening of the Melbourne region as presenting even more opportunities for businesses to innovate and generate efficiencies.” – Munro Farmer, Chief Information Officer, Australia Post

“We are well progressed with our multi-year strategy to grow and transform our organization to be Australia’s bank of choice. Google Cloud’s advanced data capabilities and renowned culture of innovation are strongly aligned to this strategy and will allow us to become even more innovative and agile in responding to our customers’ ever-changing needs. We were quick to run our workloads out of the Melbourne cloud region and we believe Google Cloud’s expanded investment in local infrastructure will further assist us on our business transformation journey.” – Andrew Cresp, Chief Information Officer, Bendigo and Adelaide Bank

“We have a clear vision when it comes to innovating to deliver world-class service to our customers, and our partnership with Google Cloud is core to that strategy. The company’s continued investments in local infrastructure and technology present new opportunities for us as we advance our transformation journey in this digital-first era.” – Chris Smith, Vice President, Digital Service, Optus

Our global ecosystem of channel partners has expanded by more than 400% in the last two years, and we look forward to continuing our close relationships with partners in Australia and New Zealand as we help customers modernize, innovate, scale and grow.

“Australian companies are increasingly realising the benefits of their cloud investments and are now looking to transform their organisations at scale. We are excited about the potential and new value that the Google Melbourne Cloud region will bring to our clients as we continue to work together on delivering intelligent and innovative solutions to Australian organisations.” – Tara Brady, CEO of Accenture Australia and New Zealand

“Google Cloud has always been there for its customers for the long haul and the opening of the new Melbourne Cloud region is great news. This increased resilience and scale will empower companies of all sizes to be bold in accelerating their digital transformation plans.” – Tony Nicol, CEO of Servian

“We’re excited about the launch of the Melbourne Cloud region. It will cater to the needs of industries we work closely with, including healthcare and financial services, and will further enhance how we jointly deliver on the compliance, privacy and security requirements of companies as they advance their digital transformation.” – Simon Poulton, CEO of Kasna

“The opening of the new Google Cloud region in Melbourne is fantastic news, as it enables DXC customers to access enhanced services for their mission-critical application and data solutions across two regions within Australia.
As our customers modernise their application estate, many are seeking dual-region cloud services, and DXC is excited to partner with Google Cloud to deliver these services to customers in Australia and New Zealand.” – Tim Fraser, Google Practice Lead ANZ at DXC Technology

Helping customers build their transformation clouds

Google Cloud is here to support businesses, helping them get smarter with data, deploy faster, connect more easily with people and customers around the globe, and protect everything that matters to their businesses. The cloud region in Melbourne offers new technology and tools that can be a catalyst for this change. Click here to learn more about all our Google Cloud locations.

1. AlphaBeta, The Economic Impact of Google Cloud to Australia, July 2021
Source: Google Cloud Platform

Best practices for dependency management

This article describes a set of best practices for managing your application’s dependencies, including vulnerability monitoring, artifact verification, and steps to reduce your dependency footprint and make it reproducible. The specifics of each practice may vary depending on your language ecosystem and the tooling you use, but the general principles apply.

Dependency management is only one aspect of creating a secure and reliable software supply chain. For information about other best practices, see the following resources:

- Best practices for building containers
- Shifting left on security
- Supply chain Levels for Software Artifacts (SLSA)
- DevOps capabilities from DevOps Research & Assessment

Version pinning

In short, version pinning means restricting the version of a dependency of your application to a very specific version, ideally a single version. Pinning versions for your dependencies has the side effect of freezing your application in time. While this is good practice for reproducibility, it has the downside of preventing you from receiving updates as the dependency makes new releases, whether for security fixes, bug fixes, or general improvements. This can be mitigated by applying automated dependency management tools to your source control repositories. These tools monitor your dependencies for new releases and update your requirements files to adopt those releases as necessary, often including changelog information or additional details.

Signature and hash verification

To ensure that a given artifact for a given release of a package is actually what you intend to install, there are a number of methods that allow you to verify the authenticity of the artifact, with varying levels of security. Hash verification allows you to compare the hash of a given artifact with a known hash provided by the artifact repository. Enabling hash verification ensures that your dependencies cannot be surreptitiously replaced by different files, either through a man-in-the-middle attack or a compromise of the artifact repository. This requires trusting that the hash you receive from the artifact repository at the time of verification (or at the time of first retrieval) is not itself compromised. (A minimal hash-verification sketch appears at the end of this article.)

Signature verification adds additional security to the verification process. Artifacts may be signed by the artifact repository, by the maintainers of the software, or both. New services like sigstore seek to make it easy for maintainers to sign software artifacts and for consumers to verify those signatures.

Lockfiles and compiled dependencies

Lockfiles are fully resolved requirements files, specifying exactly which version of a dependency should be installed for an application. Usually produced automatically by installation tools, lockfiles combine version pinning and signature or hash verification with a full dependency tree for your application. Full dependency trees are produced by ‘compiling’, or fully resolving, all dependencies that will be installed for your top-level dependencies. A full dependency tree means that all dependencies of your application, including all sub-dependencies, their dependencies, and so on down the stack, are included in your lockfile. It also means that only these dependencies can be installed, so builds can be considered more reproducible and consistent between installs.

Mixing private and public dependencies

Modern cloud-native applications often depend on both open-source, third-party code and closed-source, internal libraries.
The latter are especially useful when you need to share business logic across multiple applications, and when you want to reuse the same tooling to install both external and internal libraries, private repositories like Artifact Registry make this easy. However, when mixing private and public dependencies, be aware of the “dependency confusion” attack: by publishing projects with the same name as your internal project to open-source repositories, attackers may be able to take advantage of misconfigured installers to surreptitiously install their malicious libraries in place of your internal package. To avoid a dependency confusion attack, you can take a number of steps:

- Verify the signatures or hashes of your dependencies by including them in a lockfile
- Separate the installation of third-party dependencies and internal dependencies into two distinct steps
- Explicitly mirror the third-party dependencies you need into your private repository, either manually or with a pull-through proxy

Removing unused dependencies

Refactoring happens: sometimes a dependency you need one day is no longer necessary the next. Continuing to install dependencies along with your application when they’re no longer used increases your dependency footprint as well as the potential for you to be compromised by a vulnerability in those dependencies. A common practice is to get your application working locally, copy every dependency you installed during the development process into your application’s requirements file, and then deploy that. It’s guaranteed to work, but it’s also likely to introduce dependencies you don’t need in production. Generally, be cautious when adding new dependencies to your application: each one has the potential to introduce more code that you don’t have complete control over. Using tools that audit your requirements files to determine whether your dependencies are actually being used or imported allows you to integrate this check into your regular linting and testing pipeline.

Vulnerability scanning

How will you be notified if a vulnerability is identified in one of your dependencies? Chances are, you aren’t actively monitoring all vulnerability databases for the third-party software you depend on, and you may not even be able to reliably audit what third-party software you depend on at all. Vulnerability scanning allows you to automatically and consistently assess whether your dependencies are introducing vulnerabilities into your application. Vulnerability scanning tools consume lockfiles to determine exactly which artifacts you depend on, and notify you when new vulnerabilities surface, sometimes even with suggested upgrade paths.

Tools like Container Analysis can provide a wide array of vulnerability scanning for container images, as well as for language artifacts such as Java packages. When enabled, this feature identifies package vulnerabilities in your container images. Images are scanned when they are uploaded to Artifact Registry, and the data is continuously monitored for new vulnerabilities for up to 30 days after the image is pushed.
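To make the hash-verification idea concrete, here is a minimal sketch in Python that checks a downloaded artifact against a known SHA-256 digest before it is used. The file path and expected digest are placeholder assumptions; in practice the known-good hashes would come from your lockfile or hash-pinned requirements file.

```python
# Minimal sketch: verify a downloaded artifact against a pinned SHA-256 hash.
# The expected digest would normally come from a lockfile or a hash-pinned
# requirements file, not be hard-coded like this.
import hashlib
from pathlib import Path

ARTIFACT = Path("downloads/some-package-1.2.3.tar.gz")  # placeholder path
EXPECTED_SHA256 = "0123456789abcdef..."                 # placeholder digest

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large artifacts need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of(ARTIFACT)
if actual != EXPECTED_SHA256:
    raise RuntimeError(
        f"Hash mismatch for {ARTIFACT}: expected {EXPECTED_SHA256}, got {actual}"
    )
print(f"{ARTIFACT} verified OK")
```

Package managers implement the same check natively (for example, pip’s --require-hashes mode), so in most cases you only need to make sure the hashes are present in your lockfile or requirements file.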
Source: Google Cloud Platform