With the recent release of Cloud Run, it's now even easier to deploy serverless applications on Google Cloud Platform (GCP) that are automatically provisioned, scaled up, and scaled down. But in a serverless world, being able to ensure your service meets the twelve factors is paramount. The Twelve-Factor App describes a methodology that, when followed, should make it frictionless for you to scale, port, and maintain web-based software as a service. The more of the factors your environment satisfies, the better.

So, on a scale of 1 to 12, just how twelve-factor compatible is Cloud Run? Let's take the factors, one by one.

The Twelve Factors

I. CODEBASE
One codebase tracked in revision control, many deploys

Each service you intend to deploy on Cloud Run should live in its own repository, whatever your choice of source control software. When you want to deploy your service, you build the container image, then deploy it. For building your container image, you can use a third-party build system, or Cloud Build, GCP's own build system. You can even supercharge your deployment story by integrating Build Triggers, so that any time you, say, merge to master, your service builds, pushes, and deploys to production.

You can also deploy an existing container image, as long as it listens on the port defined by the PORT environment variable, or find one of the many images sporting a shiny Deploy on Cloud Run button.

II. DEPENDENCIES
Explicitly declare and isolate dependencies

Since Cloud Run is a bring-your-own-container environment, you can declare whatever you want inside that container, and the container encapsulates the entire environment. Nothing escapes, so two containers won't conflict with each other. Dependencies are declared and installed when you build the container image; any configuration they need can be supplied through environment variables, keeping your service stateless.

It is important to note that there are some limitations on what you can put into a Cloud Run container, due to the environment sandboxing, and on which ports can be used (which we'll cover later, in Section VII).

III. CONFIG
Store config in the environment

Yes, Cloud Run supports storing configuration in the environment by default. And part of it is mandatory: you must listen for requests on the port specified by PORT, otherwise your service will fail to start. To be truly stateless, your code goes in your container, and configuration is decoupled by way of environment variables. These can be declared when you create the service, under Optional Settings. Don't worry if you miss this setting when you declare your service: you can always edit it again by clicking "+ Deploy New Revision" when viewing your service, or by using the --update-env-vars flag in gcloud beta run deploy.

Each revision you deploy is not editable, which means revisions are reproducible, as the configuration is frozen; to make changes you must deploy a new revision. For bonus points, consider using berglas, which leverages Cloud KMS and Cloud Storage to secure your environment variables. It works out of the box with Cloud Run (and the repo even comes with examples in multiple languages).
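To make the PORT and environment-variable requirements concrete, here is a minimal sketch of a service that takes all of its configuration from the environment. It uses only Python's standard library; the GREETING variable is a hypothetical stand-in for whatever settings your own service needs.

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

# Configuration comes from the environment, never from the code:
# PORT is injected by Cloud Run; GREETING is a hypothetical example setting.
PORT = int(os.environ.get("PORT", "8080"))
GREETING = os.environ.get("GREETING", "Hello from Cloud Run")


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = GREETING.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Bind to all interfaces on the port Cloud Run provides.
    HTTPServer(("0.0.0.0", PORT), Handler).serve_forever()
```

Because the greeting lives in the environment rather than in the code, it can later be changed with --update-env-vars (producing a new revision) without rebuilding the image.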
IV. BACKING SERVICES
Treat backing services as attached resources

Much like you would connect to any external database in a containerized environment, you can connect to a plethora of different hosts in the GCP universe. And since your service cannot rely on internal state persisting, any state it needs must live in a backing service.

V. BUILD, RELEASE, RUN
Strictly separate build and run stages

Having separate build and run stages is how you deploy in Cloud Run land! If you set up continuous deployment back in Section I, then you've already automated that step. If you haven't, building a new version of your Cloud Run service is as easy as building your container image with Cloud Build:

gcloud builds submit --tag gcr.io/YOUR_PROJECT/YOUR_IMAGE .

and then deploying the built container image:

gcloud beta run deploy YOUR_SERVICE --image gcr.io/YOUR_PROJECT/YOUR_IMAGE

Cloud Run creates a new revision of the service, ensures the container starts, and then re-routes traffic to this new revision for you. If for any reason your container image encounters an error, the service stays live on the old revision, and no downtime occurs. You can also set up continuous deployment by configuring Cloud Run automations using Cloud Build triggers, further streamlining your build, release, and run process.

VI. PROCESSES
Execute the app as one or more stateless processes

Each Cloud Run service runs its own container, and each container should have one process. If you need multiple concurrent processes, separate them out into different services, and use a stateful backing service (Section IV) to communicate between them.

VII. PORT BINDING
Export services via port binding

Cloud Run follows modern architecture best practices: each service must expose itself on a port number, specified by the PORT environment variable. This is the fundamental design of Cloud Run: any container you want, as long as it listens on port 8080. Cloud Run does support outbound gRPC and WebSockets, but does not currently support these protocols inbound.

VIII. CONCURRENCY
Scale out via the process model

Concurrency is a first-class factor in Cloud Run. You declare the maximum number of concurrent requests each container instance can receive. If the incoming concurrent request count exceeds this number, Cloud Run automatically scales out by adding more container instances to handle all incoming requests.

IX. DISPOSABILITY
Maximize robustness with fast startup and graceful shutdown

Since Cloud Run handles scaling for you, it's in your best interest to ensure your services are as efficient as they can be. The faster they start up, the more seamless scaling will be. There are a number of tips around how to write effective services, so be sure to consider the size of your containers, the time they take to start up, and how gracefully they handle errors without terminating.
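As a rough illustration of graceful shutdown, here is a minimal sketch assuming the standard container behaviour of receiving SIGTERM shortly before an instance is terminated; flush_pending_work is a hypothetical placeholder for whatever cleanup your service actually needs.

```python
import signal
import sys
import time


def flush_pending_work():
    # Hypothetical placeholder: flush logs, close database connections,
    # and finish or hand off any in-flight work before the instance goes away.
    print("flushing pending work before shutdown", flush=True)


def handle_sigterm(signum, frame):
    # Container platforms such as Cloud Run typically send SIGTERM before
    # terminating an instance; exit promptly once cleanup is done.
    flush_pending_work()
    sys.exit(0)


signal.signal(signal.SIGTERM, handle_sigterm)

if __name__ == "__main__":
    # Stand-in for the service's real request loop / web server.
    while True:
        time.sleep(1)
```

Registering a handler like this alongside your web server lets an instance finish cleanly when Cloud Run scales it down.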
X. DEV/PROD PARITY
Keep development, staging, and production as similar as possible

A container-based development workflow means that your local machine can be the development environment, and Cloud Run can be your production environment! Even if you're developing on a non-Linux machine, a local Docker container should behave the same way as the same container running anywhere else. It's always a good idea to test your container locally when developing; it makes for a more efficient, iterative development loop. To get the same port-binding behaviour as Cloud Run in production, make sure you run with a port flag:

PORT=8080 && docker run -p 8080:${PORT} -e PORT=${PORT} gcr.io/[PROJECT_ID]/[IMAGE]

When testing locally, consider whether you're using any external GCP services, and make sure you point Docker at the appropriate authentication credentials. Once you've confirmed your service is sound, you can deploy the same container to a staging environment, and after confirming it's working as intended there, to a production environment.

A GCP project can host many services, so it's recommended that your staging and production environments (or green and blue, or however you wish to name your isolated environments) be separate projects. This also ensures isolation between databases across environments.

XI. LOGS
Treat logs as event streams

Cloud Run uses Stackdriver Logging out of the box. The "Logs" tab on your Cloud Run service view shows you what's going on under the covers, including log aggregation across all dynamically created instances. Stackdriver Logging automatically captures stdout and stderr, and there may also be a native Logging client for your preferred programming language. And since logs are captured in Stackdriver Logging, you can use its tooling to work with them further; for example, exporting them to BigQuery.

XII. ADMIN PROCESSES
Run admin/management tasks as one-off processes

Administration tasks are outside the scope of Cloud Run itself. If you need to do any project configuration, database administration, or other management changes, you can perform these tasks using the GCP Console, the gcloud CLI, or Cloud Shell.

A near-perfect score, as a matter of fact(or)

With the exception of one factor that is outside its scope, Cloud Run maps near-perfectly onto the Twelve-Factor methodology, which means it will map well to scalable, manageable infrastructure for your next serverless deployment. To learn more about Cloud Run, check out this quickstart.
Source: Google Cloud Platform