Cost optimization strategies for cloud-native application development

Today, we’ll explore some strategies that you can leverage on Azure to optimize your cloud-native application development process using Azure Kubernetes Service (AKS) and managed databases, such as Azure Cosmos DB and Azure Database for PostgreSQL.

Optimize compute resources with Azure Kubernetes Service

AKS makes it simple to deploy a managed Kubernetes cluster in Azure. AKS reduces the complexity and operational overhead of managing Kubernetes by offloading much of that responsibility to Azure. As a managed Kubernetes service, Azure handles critical tasks like health monitoring and maintenance for you.

When you’re using AKS to deploy your container workloads, there are a few strategies to save costs and optimize the way you run development and testing environments.

Create multiple user node pools and enable scale to zero

In AKS, nodes of the same configuration are grouped together into node pools. To support applications that have different compute or storage demands, you can create additional user node pools. User node pools serve the primary purpose of hosting your application pods. For example, you can use these additional user node pools to provide GPUs for compute-intensive applications or access to high-performance SSD storage.

When you have multiple node pools, which run on virtual machine scale sets, you can configure the cluster autoscaler with a minimum and maximum number of nodes for each pool, and you can also manually scale a user node pool down to zero when it isn't needed, for example, outside of working hours.
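
As a rough sketch of what that can look like with the Azure CLI (the resource group, cluster, and pool names below are placeholders, and scale-to-zero for user node pools depends on your AKS and CLI versions), you might let the cluster autoscaler shrink a user node pool to zero, or scale an idle pool down by hand:

    # Add a user node pool and let the cluster autoscaler scale it between 0 and 3 nodes.
    az aks nodepool add \
      --resource-group myResourceGroup \
      --cluster-name myAKSCluster \
      --name devpool \
      --node-count 1 \
      --enable-cluster-autoscaler \
      --min-count 0 \
      --max-count 3

    # Alternatively, scale a pool down to zero outside of working hours
    # (manual scaling requires the cluster autoscaler to be disabled on that pool).
    az aks nodepool update \
      --resource-group myResourceGroup \
      --cluster-name myAKSCluster \
      --name devpool \
      --disable-cluster-autoscaler
    az aks nodepool scale \
      --resource-group myResourceGroup \
      --cluster-name myAKSCluster \
      --name devpool \
      --node-count 0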

For more information, learn how to manage node pools in AKS.

Spot node pools with cluster autoscaler

A spot node pool in AKS is a node pool backed by a virtual machine scale set running spot virtual machines. Using spot VMs allows you to take advantage of unused capacity in Azure at significant cost savings. Spot instances are great for workloads that can handle interruptions like batch processing jobs and developer and test environments.

When you create a spot node pool, you can define the maximum price you want to pay per hour and enable the cluster autoscaler, which is recommended for spot node pools. Based on the workloads running in your cluster, the cluster autoscaler scales the number of nodes in the node pool up and down. For spot node pools, the cluster autoscaler will scale the number of nodes back up after an eviction if additional nodes are still needed.
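
As an illustrative sketch with the Azure CLI (placeholder resource names; a value of -1 for the maximum price means you pay up to the current on-demand rate), adding a spot node pool with the autoscaler enabled might look like this:

    # Add a spot node pool backed by spot virtual machines, with the cluster autoscaler enabled.
    az aks nodepool add \
      --resource-group myResourceGroup \
      --cluster-name myAKSCluster \
      --name spotpool \
      --priority Spot \
      --eviction-policy Delete \
      --spot-max-price -1 \
      --enable-cluster-autoscaler \
      --min-count 1 \
      --max-count 3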

Follow the documentation for more details and guidance on how to add a spot node pool to an AKS cluster.

Enforce Kubernetes resource quotas using Azure Policy

Apply Kubernetes resource quotas at the namespace level and monitor resource usage to adjust quotas as needed. This provides a way to reserve and limit resources across a development team or project. Resource quotas are defined on a namespace and can limit compute resources, such as CPU, memory, and GPUs; storage resources, such as the total number of volumes or the amount of disk space for a given storage class; and object counts, such as the maximum number of secrets, services, or jobs that can be created.
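
For illustration, a namespace-level quota for a development team might look like the following manifest (the namespace name and limits are examples, not recommendations):

    # dev-quota.yaml
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: dev-team-quota
      namespace: dev-team
    spec:
      hard:
        requests.cpu: "4"
        requests.memory: 8Gi
        limits.cpu: "8"
        limits.memory: 16Gi
        persistentvolumeclaims: "5"
        secrets: "20"

You would apply it with kubectl apply -f dev-quota.yaml and then monitor consumption with kubectl describe resourcequota dev-team-quota --namespace dev-team, adjusting the limits as the team's needs change.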

Azure Policy integrates with AKS through built-in policies to apply at-scale enforcements and safeguards on your cluster in a centralized, consistent manner. When you enable the Azure Policy add-on, it checks with Azure Policy for assignments to the AKS cluster, downloads and caches the policy details, runs a full scan, and enforces the policies.

Follow the documentation to enable the Azure Policy add-on on your cluster and apply the Ensure CPU and memory resource limits policy, which ensures that CPU and memory resource limits are defined on containers in an Azure Kubernetes Service cluster.
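
Enabling the add-on is a single CLI call, for example (placeholder names):

    # Enable the Azure Policy add-on on an existing AKS cluster.
    az aks enable-addons \
      --addons azure-policy \
      --resource-group myResourceGroup \
      --name myAKSCluster

Once the add-on is enabled, you assign the built-in policy to the scope that contains your cluster, for example from the Azure portal or with az policy assignment create.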

Optimize the data tier with Azure Cosmos DB

Azure Cosmos DB is Microsoft's fast NoSQL database with open APIs for any scale. A fully managed service, Azure Cosmos DB offers guaranteed speed and performance with service-level agreements (SLAs) for single-digit millisecond latency and 99.999 percent availability, along with instant and elastic scalability worldwide. With the click of a button, Azure Cosmos DB lets you replicate your data across any Azure region worldwide and use a variety of open-source APIs, including MongoDB, Cassandra, and Gremlin.

When you’re using Azure Cosmos DB as part of your development and testing environment, there are a few ways you can save costs. With Azure Cosmos DB, you pay for the throughput you provision (Request Units per second, RU/s) and the storage you consume (GB).

Use the Azure Cosmos DB free tier

Azure Cosmos DB free tier makes it easy to get started, develop, and test your applications, or even run small production workloads for free. When free tier is enabled on an account, you get the first 400 RU/s of throughput and 5 GB of storage at no cost. You can also create a shared throughput database with 25 containers that share 400 RU/s at the database level, all covered by free tier (limited to five shared throughput databases per free tier account). Free tier lasts for the lifetime of the account and comes with all the benefits and features of a regular Azure Cosmos DB account, including unlimited storage and throughput (RU/s), SLAs, high availability, turnkey global distribution in all Azure regions, and more.
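
As a quick sketch with the Azure CLI (the account and resource group names are placeholders), free tier is an opt-in flag at account creation time:

    # Create an Azure Cosmos DB account with free tier enabled.
    az cosmosdb create \
      --resource-group myResourceGroup \
      --name my-dev-cosmos-account \
      --enable-free-tier true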

Try Azure Cosmos DB for free.

Autoscale provisioned throughput with Azure Cosmos DB

Provisioned throughput can automatically scale up or down in response to application patterns. Once you set a maximum throughput, Azure Cosmos DB containers and databases automatically and instantly scale provisioned throughput based on application needs.

Autoscale removes the requirement for capacity planning and management while maintaining SLAs. For that reason, it is ideally suited to highly variable and unpredictable workloads with peaks in activity. It is also a good fit when you're deploying a new application and are unsure how much provisioned throughput you need. For development and test databases, Azure Cosmos DB containers will scale down to a pre-set minimum (starting at 400 RU/s, or 10 percent of the maximum) when not in use. Autoscale can also be paired with the free tier.
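
As a sketch (placeholder names, and the database is assumed to already exist), you enable autoscale by specifying a maximum throughput rather than a fixed value:

    # Create a container that autoscales between 10 percent of the maximum and 4,000 RU/s.
    az cosmosdb sql container create \
      --resource-group myResourceGroup \
      --account-name my-dev-cosmos-account \
      --database-name AppDatabase \
      --name Orders \
      --partition-key-path /customerId \
      --max-throughput 4000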

Follow the documentation for more details on the scenarios and how to use Azure Cosmos DB autoscale.

Share throughput at the database level

In a shared throughput database, all containers inside the database share the provisioned throughput (RU/s) of the database. For example, if you provision a database with 400 RU/s and have four containers, all four containers will share the 400 RU/s. In a development or testing environment, where each container may be accessed less frequently and thus require lower than the minimum of 400 RU/s, putting containers in a shared throughput database can help optimize cost.

For example, suppose your development or test account has four containers. If you create four containers with dedicated throughput (a minimum of 400 RU/s each), your total will be 1,600 RU/s. In contrast, if you create a shared throughput database (minimum 400 RU/s) and put your containers there, your total will be just 400 RU/s. In general, shared throughput databases are great for scenarios where you don't need guaranteed throughput on any individual container.
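
A minimal sketch of that setup with the Azure CLI (placeholder names) creates the database with shared throughput and then adds containers without any dedicated throughput of their own:

    # Provision 400 RU/s at the database level.
    az cosmosdb sql database create \
      --resource-group myResourceGroup \
      --account-name my-dev-cosmos-account \
      --name SharedDevDatabase \
      --throughput 400

    # Containers created without a throughput value share the database's 400 RU/s.
    az cosmosdb sql container create \
      --resource-group myResourceGroup \
      --account-name my-dev-cosmos-account \
      --database-name SharedDevDatabase \
      --name Customers \
      --partition-key-path /customerId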

Follow the documentation to create a shared throughput database that can be used for development and testing environments.

Optimize the data tier with Azure Database for PostgreSQL

Azure Database for PostgreSQL is a fully managed service providing enterprise-grade features for community edition PostgreSQL. With the continued growth of open-source technologies, PostgreSQL has seen increasing adoption by users who want the consistency, performance, security, and durability of a managed service while staying on open-source PostgreSQL. With developer-focused experiences and new features optimized for cost, Azure Database for PostgreSQL lets developers focus on their applications while Azure takes care of database management.

Reserved capacity pricing—now on Azure Database for PostgreSQL

Manage the cost of running your fully managed PostgreSQL database on Azure with reserved capacity, now available on Azure Database for PostgreSQL. Save up to 60 percent compared to regular pay-as-you-go pricing.

Check out pricing on Azure Database for PostgreSQL to learn more.

High performance scale-out on PostgreSQL

Leverage high-performance horizontal scale-out of your single-node PostgreSQL database through Hyperscale (Citus). Save time by running transactions and analytics in one database while avoiding the cost and effort of manual sharding.
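
To give a feel for what that looks like in practice, once a Hyperscale (Citus) server group exists you distribute a table across worker nodes with a single function call; the connection string and the events table below are hypothetical:

    # Shard the events table across worker nodes by device_id (run against the coordinator node).
    psql "host=mydemo-c.postgres.database.azure.com port=5432 dbname=citus user=citus sslmode=require" \
      -c "SELECT create_distributed_table('events', 'device_id');"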

Get started with Hyperscale on Azure Database for PostgreSQL today.

Stay compatible with open source PostgreSQL

By leveraging Azure Database for PostgreSQL, you can continue enjoying the many innovations, versions, and tools of community edition PostgreSQL without a major re-architecture of your application. Azure Database for PostgreSQL is extension-friendly, so you can keep using the PostgreSQL extensions your scenarios depend on while enterprise-grade features like Intelligent Performance, Query Performance Insight, and Advanced Threat Protection remain at your fingertips.
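
As a small example (hypothetical server and user names, and assuming the extension is on the service's supported extension list), enabling an extension works the same way it does on community PostgreSQL:

    # Enable the PostGIS extension on an Azure Database for PostgreSQL server.
    psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user=myadmin@mydemoserver sslmode=require" \
      -c "CREATE EXTENSION IF NOT EXISTS postgis;"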

Check out the product documentation on Azure Database for PostgreSQL to learn more.
Source: Azure
