This article describes a set of best practices for managing your application's dependencies, including vulnerability monitoring, artifact verification, and steps to reduce your dependency footprint and make it reproducible. The specifics of each practice vary with your language ecosystem and the tooling you use, but the general principles apply across all of them.

Dependency management is only one aspect of creating a secure and reliable software supply chain. For information about other best practices, see the following resources:

- Best practices for building containers
- Shifting left on security
- Supply-chain Levels for Software Artifacts (SLSA)
- DevOps capabilities from DevOps Research & Assessment

Version pinning

In short, version pinning means restricting a dependency of your application to a very specific version, ideally a single version.

Pinning your dependencies has the side effect of freezing your application in time. While this is good practice for reproducibility, it has the downside of preventing you from receiving updates as the dependency makes new releases, whether for security fixes, bug fixes, or general improvements. This can be mitigated by applying automated dependency management tools to your source control repositories. These tools monitor your dependencies for new releases and update your requirements files to upgrade you to those releases as necessary, often including changelog information or additional details.

Signature and hash verification

To ensure that a given artifact for a given release of a package is actually what you intend to install, a number of methods let you verify the authenticity of the artifact, with varying levels of security.

Hash verification allows you to compare the hash of a given artifact with a known hash provided by the artifact repository. Enabling hash verification ensures that your dependencies cannot be surreptitiously replaced by different files, whether through a man-in-the-middle attack or a compromise of the artifact repository. It does require trusting that the hash you receive from the artifact repository at the time of verification (or at the time of first retrieval) is not itself compromised.

Signature verification adds further security to the process. Artifacts may be signed by the artifact repository, by the maintainers of the software, or both. New services like sigstore seek to make it easy for maintainers to sign software artifacts and for consumers to verify those signatures.

Lockfiles and compiled dependencies

Lockfiles are fully resolved requirements files that specify exactly which version of each dependency should be installed for an application. Usually produced automatically by installation tools, lockfiles combine version pinning and signature or hash verification with a full dependency tree for your application.

Full dependency trees are produced by 'compiling', or fully resolving, everything that will be installed for your top-level dependencies. A full dependency tree means that all dependencies of your application, including all sub-dependencies, their dependencies, and so on down the stack, are included in your lockfile. It also means that only these dependencies can be installed, so builds are more reproducible and consistent across installs.
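To make the hash-verification step concrete, here is a minimal Python sketch of the kind of check a hash-verifying installer performs before installing an artifact: compute the artifact's SHA-256 digest and compare it with the digest pinned in your lockfile. The function name, file path, and digest are placeholders, and in practice you would rely on your installer's built-in verification (for example, pip's hash-checking mode) rather than rolling your own.

```python
import hashlib
import sys


def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Compare an artifact's SHA-256 digest against a pinned value from a lockfile."""
    digest = hashlib.sha256()
    with open(path, "rb") as artifact:
        # Read in chunks so large artifacts don't need to fit in memory.
        for chunk in iter(lambda: artifact.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256.lower()


if __name__ == "__main__":
    # Usage: python verify.py <artifact-path> <expected-sha256>
    artifact_path, expected = sys.argv[1], sys.argv[2]
    if not verify_artifact(artifact_path, expected):
        sys.exit(f"Hash mismatch for {artifact_path}; refusing to install")
    print(f"{artifact_path} matches its pinned hash")
```

A lockfile-driven install simply repeats this kind of check for every artifact in the full dependency tree and refuses to proceed if any digest does not match.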
Mixing private and public dependencies

Modern cloud-native applications often depend on both open-source, third-party code and closed-source, internal libraries. The latter can be especially useful when you need to share business logic across multiple applications, and private repositories like Artifact Registry make it easy to reuse the same tooling to install both external and internal libraries.

However, when mixing private and public dependencies, be aware of the "dependency confusion" attack: by publishing projects with the same name as your internal project to open-source repositories, attackers may be able to take advantage of misconfigured installers to surreptitiously install their malicious libraries in place of your internal package.

To avoid a dependency confusion attack, you can take a number of steps:

- Verify the signatures or hashes of your dependencies by including them in a lockfile
- Separate the installation of third-party dependencies and internal dependencies into two distinct steps
- Explicitly mirror the third-party dependencies you need into your private repository, either manually or with a pull-through proxy

Removing unused dependencies

Refactoring happens: sometimes a dependency you need one day is no longer necessary the next. Continuing to install dependencies along with your application when they're no longer used increases your dependency footprint as well as the risk of being compromised by a vulnerability in one of them.

A common practice is to get your application working locally, copy every dependency you installed during development into your application's requirements file, and then deploy that. It's guaranteed to work, but it's also likely to ship dependencies you don't need in production.

Generally, be cautious when adding new dependencies to your application: each one has the potential to introduce more code that you don't have complete control over. Tools that audit your requirements files and determine whether your dependencies are actually used or imported let you integrate this check into your regular linting and testing pipeline.

Vulnerability scanning

How will you be notified when a vulnerability is identified in one of your dependencies? Chances are, you aren't actively monitoring all the vulnerability databases that cover the third-party software you depend on, and you may not even be able to reliably audit what third-party software you depend on at all.

Vulnerability scanning allows you to automatically and consistently assess whether your dependencies are introducing vulnerabilities into your application. Vulnerability scanning tools consume lockfiles to determine exactly which artifacts you depend on, and notify you when new vulnerabilities surface, sometimes even with suggested upgrade paths.

Tools like Container Analysis can provide a wide array of vulnerability scanning for container images as well as language artifacts such as Java packages. When enabled, this feature identifies package vulnerabilities in your container images. Images are scanned when they are uploaded to Artifact Registry, and the data is continuously monitored to find new vulnerabilities for up to 30 days after the image is pushed.
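As a rough sketch of how a scanner maps a pinned dependency to known vulnerabilities, the following Python example queries the OSV.dev v1 query API for a single package and version. The endpoint, payload shape, and the example package are assumptions based on OSV's public documentation; a real pipeline would iterate over every entry in your lockfile, or use a dedicated scanner or managed service, rather than querying packages one by one.

```python
import json
import urllib.request

# Assumes the OSV.dev v1 query endpoint; check https://osv.dev for the
# current API before depending on this in a build pipeline.
OSV_QUERY_URL = "https://api.osv.dev/v1/query"


def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return the vulnerabilities OSV knows about for one pinned dependency."""
    payload = json.dumps(
        {"version": version, "package": {"name": name, "ecosystem": ecosystem}}
    ).encode("utf-8")
    request = urllib.request.Request(
        OSV_QUERY_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response).get("vulns", [])


if __name__ == "__main__":
    # Hypothetical example: check one pinned entry from a lockfile.
    for vuln in known_vulnerabilities("requests", "2.19.1"):
        print(vuln["id"], vuln.get("summary", ""))
```

Running a check like this on every build, or enabling a managed equivalent such as Container Analysis scanning in Artifact Registry, answers the "how will you be notified?" question continuously rather than ad hoc.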
Related article: Defining SLOs for services with dependencies—CRE life lessons. Learn how to define and manage SLOs for services with dependencies, each of which may have its own SLOs.

Source: Google Cloud Platform