Introducing a New Way to Access All Your Sites

If you’ve been looking for a simple way to access and quickly navigate between your multiple WordPress.com websites, we have an exciting announcement for you today. We’ve launched a new dashboard to help you manage all your WordPress.com and Jetpack-connected websites.

You can access this new Sites page at WordPress.com/Sites.

From here, you can locate a site and jump into its dashboard, launch a site to the public, or view your site’s Hosting Configuration to grab its SFTP details.

A Simple, Centralized Starting Point

Once you’re managing more than a few sites, it can be difficult to keep track of where everything is. The Sites page organizes all of your websites in one place.

For public sites, you’ll see a preview of each site’s homepage, making it easier for you to find the site you’re looking for.

Use the dropdown filter to find the “Private” or “Coming Soon” sites you’re currently working on. Our “Coming Soon” feature gives you a safe space to build and edit your site until you’re ready to launch it to the world.

Switch to the “List View” to navigate all of your sites in a more compact presentation.

Switch back to “Grid View” to see larger previews for all of your sites. This display mode is saved for the next time you come back to the page.

Build Your Next Site on WordPress.com

This is the first version of the Sites page, and we plan to continue improving it in the future. It’s also the first in a series of new tools for those building multiple sites. Our goal is to make WordPress.com an enjoyable, indispensable part of your workflow.

What else would you like to see in the Sites page? How could we make WordPress.com an even more powerful place to build a website? Feel free to leave a comment or submit your ideas in our short feature request form.

Visit Your New Sites Dashboard

Source: RedHat Stack

Spanner on a modern columnar storage engine

Google was born in the cloud. At Google, we have been running massive infrastructure that powers critical internal and external-facing services for more than two decades. Our investment in this infrastructure is constant, ranging from user-visible features to invisible internals that make it more efficient, reliable, and secure, and we update and improve it continuously. With billions of users served around the globe, availability and reliability are at the core of how we operate and update our infrastructure.

Spanner is Google’s massively scalable, replicated, and strongly consistent database management service. With hundreds of thousands of databases running in our production instance, Spanner serves over 2 billion requests at peak and has over 6 exabytes of data under management, the “source of truth” for many mission-critical services, including AdWords, Search, and Cloud Spanner customers. Customer workloads are diverse and stretch the system in different ways. Although Spanner receives constant binary releases, a fundamental change such as swapping out the underlying storage engine is a challenging undertaking. In this post, we talk about our journey migrating Spanner to a new columnar storage engine. We discuss the challenges a massive-scale migration faced and how we accomplished this effort in roughly two to three years with all of the critical services running on top uninterrupted.

The Storage Engine

The storage engine is where a database turns its data into actual bytes and stores them in underlying file systems. In a Spanner deployment, a database is hosted in one or more instance configurations, which are physical collections of resources. The instance configurations and databases comprise one or more zones, or replicas, that are served by a number of Spanner servers. The storage engine in the server encodes the data and stores it in Colossus, the underlying large-scale distributed file system.

Spanner originally used a Bigtable-like storage engine based on the SSTable (Sorted String Table) format. This format has proven incredibly robust through years of large-scale deployment, in Bigtable and in Spanner itself. The SSTable format is optimized for schemaless NoSQL data consisting primarily of large strings. While it is a perfect match for Bigtable, it is not the best fit for Spanner; in particular, traversing individual columns is inefficient.

Ressi is the new low-level, column-oriented storage format for Spanner. It is designed from the ground up for handling SQL queries over large-scale, distributed databases with both OLTP and OLAP workloads, including maintaining and improving the performance of read and write queries over key-value data. Ressi’s optimizations range from block-level data layout to file-level organization of active and inactive data, along with existence filters that save storage I/O. This data organization improves storage usage and helps large scan queries. Deploying Ressi with very large-scale services such as Gmail on Spanner has shown performance improvements along multiple dimensions, such as CPU and storage I/O.
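To make the row-versus-column difference concrete, here is a deliberately simplified Python sketch. The layouts below are illustrative toys, not Spanner’s actual SSTable or Ressi encodings.

```python
# Purely illustrative: contrasting row-oriented and column-oriented layouts.
# This is NOT Spanner's actual SSTable or Ressi format.

rows = [
    ("user1", "alice@example.com", 29),
    ("user2", "bob@example.com", 34),
    ("user3", "carol@example.com", 41),
]

# Row-oriented layout: reading a single column still walks every row,
# which is why column traversal is inefficient in such a format.
ages_from_rows = [row[2] for row in rows]

# Column-oriented layout: each column is stored contiguously, so scanning
# one column touches only that column's data.
columns = {
    "id":    ["user1", "user2", "user3"],
    "email": ["alice@example.com", "bob@example.com", "carol@example.com"],
    "age":   [29, 34, 41],
}
ages_from_columns = columns["age"]  # no other columns are read

assert ages_from_rows == ages_from_columns
```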
The Challenges of Storage Engine Migration

Improvements and updates to Spanner are constant, and we are adept at safely operating and evolving our system in a dynamic environment. However, a storage engine migration changes the foundation of a database system and presents distinct challenges, especially at a massive deployment scale. In general, in a production OLTP database system, a storage engine migration needs to be done without interruption to the hosted databases, without degrading latency or throughput, and without compromising data integrity. There have been past attempts at, and success stories of, live database storage engine migration. However, successful attempts at the scale of Spanner, with multiple exabytes of data, are rare. The mission-critical nature of the services and the massive scale place very high requirements on how the migration is handled.

Reliability, Availability & Data Integrity

The topmost requirement of the migration was maintaining service reliability, availability, and data integrity throughout. The challenges were paramount and unique to Spanner’s massive deployment scale:

- Spanner database workloads are diverse and interact with the underlying system in different ways. Successful migration of one database does not guarantee successful migration of another.
- Massive data migration inherently creates unusual churn in the underlying system. This can trigger latent, unanticipated behavior and cause production outages.
- We operate in a dynamic environment, with constant ambient changes from customers and from new Spanner feature development. As a result, migration risk did not decrease monotonically over time.

Performance & Cost

Another challenge of migrating to a new storage engine is achieving good performance while reducing cost. Performance regressions can arise during the migration from the underlying churn, or after the migration when certain aspects of a workload interact poorly with the new storage engine. This can cause issues such as increased latency and rejected requests.

Performance regressions may also manifest as increased storage usage in some databases, due to variances in database compressibility. This increases internal resource consumption and cost. What’s more, if additional storage is not available, it can lead to production outages. Although the new columnar storage engine improves both performance and data compression in general, at Spanner’s deployment scale we had to watch out for the outliers.

Complexity and Supportability

The existence of dual formats not only requires more engineering effort to support, but also increases system complexity and performance variance across zones. An obvious way to mitigate this risk is to achieve high migration velocity, and in particular to shorten the period during which dual formats coexist in the same databases.

However, databases on Spanner span several orders of magnitude in size, so the time required to migrate each database varies by a large degree. Scheduling databases for migration cannot be one-size-fits-all. The migration effort had to account for the transition period in which dual formats exist while still achieving the highest velocity safely and reliably.
A Systematic, Principled Approach toward Migration Reliability

We introduced a systematic approach based on a set of reliability principles we defined. Using these principles, our automation framework automatically evaluated migration candidates (i.e., instance configurations and/or databases), selecting conforming candidates for migration and flagging violations. The flagged candidates were examined individually, and violations were resolved before those candidates became eligible for migration. This effectively reduced toil and increased velocity without sacrificing production safety.

The Reliability Principles & Automation Architecture

The reliability principles were the cornerstones of how we conducted the migration. They covered multiple aspects: evaluating the health and suitability of migration candidates, managing customer exposure to production changes, handling performance regressions and data integrity, and mitigating risks in a dynamic environment with constant changes, such as new releases and feature launches within and outside of Spanner.

Based on the reliability principles, we built an automation framework. Various stats and metrics were collected; together they formed a modeled view of the state of the Spanner universe, continuously updated to accurately reflect its current state.

In this architecture, the reliability principles became filters: a migration candidate could pass through and be selected by the migration scheduler only if it satisfied the requirements. Migration scheduling was done in weekly waves to enable gradual rollout.

As previously mentioned, migration candidates that did not satisfy the reliability principles were not ignored. They were flagged for attention and resolved in one of two ways: override and migrate with caution, or resolve the underlying blocking issue and then migrate.

Migration Scheduling & Weekly Rollout

Migration scheduling was the core component in managing migration risk, preventing performance regressions, and ensuring data integrity. Because of the diverse customer workloads and the wide spectrum of deployment sizes, we adopted fine-grained migration scheduling. The scheduling algorithm treated customer deployments as failure domains and properly staged and spaced the migration of customer instance configurations. Together with the rollout automation, this enabled an efficient migration journey while keeping risk under control.

Under this framework, the migration proceeded progressively along the following dimensions: among the multiple instance configurations of the same customer deployment; among the multiple zones of the same instance configuration; and among the migration candidates in the weekly rollout wave.

Customer Deployment-aware Scheduling

Progressive rollout within a customer’s deployment required us to recognize the customer deployment as a failure domain. We used a heuristic that indicates deployment ownership and usage. In Spanner’s case, this is also a close approximation of workload categorization, as the multiple instances are typically regional instances of the same service. The categorization produced equivalence classes of deployment instances, where each class is a collection of instance configurations from the same customer and with the same workload.

The weekly wave scheduler selected migration candidates (i.e., replicas/zones in an instance configuration) from each equivalence class. Candidates from multiple equivalence classes could be chosen independently because their workloads were isolated, so blocking issues in one equivalence class did not prevent progress in other classes.
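As an illustration of this filter-and-schedule idea only, here is a minimal Python sketch. The principle checks, thresholds, class names, and data structures are all invented for the example; they are not Spanner’s actual automation.

```python
# Hypothetical sketch of principles-as-filters wave scheduling.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str                  # a zone/replica of an instance configuration
    equivalence_class: str     # same customer + same workload
    healthy: bool              # stand-in for health/suitability signals
    projected_storage_growth: float  # estimated post-migration size ratio

def passes_reliability_principles(c: Candidate) -> bool:
    # Each principle acts as a filter; a candidate must satisfy all of them.
    # The 1.2 storage-growth threshold is invented for illustration.
    return c.healthy and c.projected_storage_growth < 1.2

def schedule_weekly_wave(candidates: list[Candidate]) -> list[Candidate]:
    """Select at most one conforming candidate per equivalence class, so
    classes progress independently and a blocked class stalls nothing else."""
    wave, classes_in_wave = [], set()
    for c in candidates:
        if c.equivalence_class in classes_in_wave:
            continue  # stage and space migrations within a failure domain
        if passes_reliability_principles(c):
            wave.append(c)
            classes_in_wave.add(c.equivalence_class)
        # Non-conforming candidates are flagged for review, not silently dropped.
    return wave

candidates = [
    Candidate("svc-a/us-east/zone-1", "svc-a", True, 1.05),
    Candidate("svc-a/us-west/zone-1", "svc-a", True, 1.01),
    Candidate("svc-b/eu-west/zone-2", "svc-b", True, 1.40),  # fails storage check
]
print([c.name for c in schedule_weekly_wave(candidates)])
# -> ['svc-a/us-east/zone-1']
```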
Progressive Rollout of Weekly Waves

To mitigate new issues arising from new releases and from changes on both the customer and Spanner sides, the weekly waves themselves were rolled out progressively, allowing issues to surface without causing widespread impact while still accelerating overall migration velocity.

Managing Reliability, Availability & Performance

Under the mechanisms described above, customer deployments were carefully moved through a series of state changes, preventing performance degradation and loss of availability or data integrity.

At the start, an instance configuration of a customer was chosen and an initial zone/replica (henceforth the “first zone”) was migrated. This avoided potential global production impact on the customer while revealing issues if the workload interacted poorly with the new storage engine. Following the first-zone migration, data integrity was checked by comparing the migrated zone with other zones using Spanner’s built-in integrity check. If this check failed or a performance regression occurred after the migration, the instance was restored to its previous state.

We pre-estimated the post-migration storage size, and the reliability principles blocked instances with excessive storage increases from migrating. As a result, we saw few unexpected storage compression regressions after migration. Regardless, resource usage and system health were closely monitored by our monitoring infrastructure, and if an unexpected regression occurred, the instance was restored to the desired state by migrating the zone back to the SSTable format.

Only when everything was healthy did the migration of a customer deployment proceed, progressively migrating more instances and/or zones and accelerating as risk was further reduced.

Project Management & Driving Metrics

A massive migration effort requires effective project management and the identification of key metrics to drive progress. We drove a few key metrics, including (but not limited to) the following, illustrated in the sketch below:

- The coverage metric tracked the number and percentage of Spanner instances running the new storage engine. This was the highest-priority metric: as the name indicates, it covered the interaction of different workloads with the new storage engine, allowing early discovery of underlying issues.
- The majority metric tracked the number and percentage of Spanner instances with the majority of their zones running the new storage engine. This allowed us to catch anomalies at tipping points in a quorum-based system like Spanner.
- The completion metric tracked the number and percentage of Spanner instances completely running the new storage engine. Achieving 100% on this metric was our ultimate goal.

The metrics were maintained as time series, allowing us to examine trends and shift gears as we approached the later stages of the effort.
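As a toy illustration of how zone-level state could roll up into these three metrics, consider the following Python sketch; the data model is invented and is not Spanner’s real telemetry.

```python
# Toy rollup of the coverage / majority / completion metrics.
# Zone states per instance are invented for illustration.

instances = {
    "inst-1": ["ressi", "ressi", "ressi"],        # fully migrated
    "inst-2": ["ressi", "ressi", "sstable"],      # majority of zones migrated
    "inst-3": ["ressi", "sstable", "sstable"],    # covered by at least one zone
    "inst-4": ["sstable", "sstable", "sstable"],  # not started
}

total = len(instances)
coverage   = sum(any(z == "ressi" for z in zones) for zones in instances.values())
majority   = sum(sum(z == "ressi" for z in zones) * 2 > len(zones)
                 for zones in instances.values())
completion = sum(all(z == "ressi" for z in zones) for zones in instances.values())

for name, value in [("coverage", coverage), ("majority", majority),
                    ("completion", completion)]:
    print(f"{name}: {value}/{total} ({100.0 * value / total:.0f}%)")
# coverage: 3/4 (75%), majority: 2/4 (50%), completion: 1/4 (25%)
```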
Summary

Performing a massive-scale migration is an effort that encompasses strategic design, building automation, designing processes, and shifting execution gears as the effort progresses. With a systematic and principled approach, we achieved a massive-scale migration involving over 6 exabytes of data under management and over 2 billion QPS at peak in Spanner, within a short amount of time and with service availability, reliability, and integrity uncompromised.

Many of Google’s critical services depend on Spanner and have already seen significant improvements from this migration. Furthermore, the new storage engine provides a platform for many future innovations. The best is yet to come.
Source: Google Cloud Platform

Cloud Wisdom Weekly: 5 ways to reduce costs with containers

“Cloud Wisdom Weekly: for tech companies and startups” is a new blog series we’re running this fall to answer common questions our tech and startup customers ask us about how to build apps faster, smarter, and cheaper. In this installment, Google Cloud Product Manager Rachel Tsao explores how to save on compute costs with modern container platforms.

Many tech companies and startups are built to operate under a certain degree of pressure and to manage costs and resources efficiently. These pressures have only increased with inflation, geopolitical shifts, and supply chain concerns, creating urgency for companies to find ways to preserve capital while increasing flexibility. The right approach to containers can be crucial to navigating these challenges.

In the last few years, development teams have shifted from virtual machines (VMs) to containers, drawn to the latter because they are faster, more lightweight, and easier to manage and automate. Containers also consume fewer resources than VMs by leveraging shared operating systems. Perhaps most importantly, containers enable portability, letting developers put an application and all its dependencies into a single package that can run almost anywhere.

Containers are central to an organization’s agility, and in our conversations with customers about why they choose Google Cloud, we frequently hear that services like Google Kubernetes Engine (GKE) and Cloud Run help tech companies and startups not only go to market quickly but also save money. In this article, we’ll explore five ways to reduce compute costs with containers quickly and easily.

5 ways to control compute costs with containers

Whether your company is an established player that is modernizing its business or a startup building its first product, managed container products can help you reduce costs, optimize development, and innovate. The following tips will help you evaluate the core features you should expect from container services, and they include specific advice for GKE and Cloud Run.

1. Identify opportunities to reduce cluster administration

Most companies want to dedicate resources to innovation, not infrastructure curation. If your team has existing Kubernetes knowledge or runs workloads that need particular machine types or graphics processing units (GPUs), you may be able to simplify provisioning with GKE Autopilot. GKE Autopilot provisions and manages the cluster’s underlying infrastructure, while you pay only for the workload, not for 24/7 access to the underlying node-pool compute VMs. In this way, it can reduce cluster administration while saving you money and giving you hardened security best practices by default.

2. Consider serverless to maximize developer productivity

Serverless platforms continue the theme of empowering your technical talent to focus on the most impactful work. Such platforms can promote productivity by abstracting away aspects of infrastructure creation, letting developers work on projects that drive the business while the platform provider oversees hardware and scalability, aspects of security, and more.

For a broad range of workloads that don’t need particular machine types or GPUs, going serverless with Cloud Run is a great option for building applications, APIs, internal services, and even real-time data pipelines. Analyst research indicates that Cloud Run customers achieve faster deployments and spend less time monitoring services, reinvesting that productivity so they can do more with fewer resources.

Designed with high scalability in mind and an emphasis on the portability of containers, Cloud Run also supports a wide range of stateless workloads, including jobs that run to completion. Moreover, it lets you maximize the skills of your existing team, as it requires no cluster management, Kubernetes skillset, or prior infrastructure experience. Additionally, Cloud Run leverages the Knative spec and uses a container image as its deployment artifact, enabling an easy migration to GKE if your workload needs change.

With Cloud Run, gone are the days of infrastructure overprovisioning. The platform scales down to zero automatically, meaning your services always have the capacity to meet demand but do not incur costs when there is no traffic.
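To see what scale-to-zero can mean for a spiky, low-traffic service, here is a rough back-of-the-envelope sketch in Python. The hourly rates and traffic profile are invented for the example; real Cloud Run billing is based on vCPU, memory, and requests.

```python
# Hypothetical cost comparison: always-on capacity vs. scale-to-zero.
# All rates below are made up for illustration, not real Google Cloud prices.

hours_per_month = 730
active_hours = 40        # hours/month the service actually handles traffic
always_on_rate = 0.10    # assumed $/hour for an always-provisioned instance
serving_rate = 0.12      # assumed $/hour-equivalent billed only while serving

always_on_cost = hours_per_month * always_on_rate
scale_to_zero_cost = active_hours * serving_rate

print(f"always-on:     ${always_on_cost:.2f}/month")      # $73.00
print(f"scale-to-zero: ${scale_to_zero_cost:.2f}/month")  # $4.80
```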
3. Save with committed use discounts

Committed use discounts provide discounted pricing in exchange for committing to a minimum level of usage in a region for a specified term. If you can reliably predict your resource needs, for instance, you can get a 17% discount for Cloud Run (for either one year or three years), and either a 20% discount (for one year) or a 45% discount (for three years) on GKE Autopilot.

4. Leverage cost management features

Minimum and maximum instance settings are useful for ensuring your services are ready to receive requests without causing cost overruns. For Google Cloud customers, cost management best practices include building your containers with Cloud Build, which offers pay-for-use pricing and can be more cost efficient than a steady-state build farm.

Relatedly, if you choose serverless containers with Cloud Run, you can set minimum instances to avoid the lag (i.e., the cold start) when a new container instance starts up from zero. Minimum instances are billed at one-tenth of the general Cloud Run cost. Likewise, if you are testing and want to keep costs from spiraling, you can set a maximum number of instances to ensure your containers do not scale beyond a certain threshold. These settings can be turned off at any time, resulting in no costs when your service is not processing traffic. For better oversight of costs, you can also view built-in billing reports and set budget alerts in Cloud Billing.

5. Match workload needs to pricing models

GKE Autopilot is great for running highly reliable workloads thanks to its Pod-level SLA. But if you have workloads that do not need a high level of reliability (e.g., fault-tolerant batch workloads or dev/test clusters), you can leverage spot pricing to receive a discount of 60% to 91% compared to regularly priced Pods. Spot Pods run on spare Google Cloud compute capacity as long as resources are available. GKE will evict your Spot Pods with a grace period of 25 seconds during times of high resource demand, but you can automatically redeploy as soon as capacity is available again. This can result in significant savings for workloads that are a fit.
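Using the discount figures quoted in tips 3 and 5, a quick Python sketch shows the scale of the potential savings; the $10,000/month baseline is hypothetical.

```python
# Back-of-the-envelope savings from the percentages quoted in this post.
# The on-demand baseline spend is an assumption for illustration.

baseline = 10_000  # assumed $/month of on-demand compute spend

cloud_run_cud   = baseline * (1 - 0.17)  # 17% committed use discount
autopilot_cud_1 = baseline * (1 - 0.20)  # 20% for a one-year commitment
autopilot_cud_3 = baseline * (1 - 0.45)  # 45% for a three-year commitment
spot_low        = baseline * (1 - 0.60)  # Spot Pods, 60% discount
spot_high       = baseline * (1 - 0.91)  # Spot Pods, 91% discount

print(f"Cloud Run CUD:       ${cloud_run_cud:,.0f}/month")
print(f"Autopilot CUD (1y):  ${autopilot_cud_1:,.0f}/month")
print(f"Autopilot CUD (3y):  ${autopilot_cud_3:,.0f}/month")
print(f"Spot Pods (60-91%):  ${spot_low:,.0f} down to ${spot_high:,.0f}/month")
```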
Innovation requires balance

Put into practice, these tips can help your business get the most out of containers while controlling management and resource costs. That said, while managing cloud costs is important, the relationship between “cloud” and “cost” is often complex. If you adopt cloud computing with saving money as the only goal, you may soon run into other challenges. Cloud services can save your business money in many ways, but they can also help you get the most value for your money. This balance between cost efficiency and absolute cost is important to keep in mind so that, even in challenging economic landscapes, your tech company or startup can continue growing and innovating.

Beyond cost savings, many tech companies and startups seek improved business agility: the ability to deploy new products and features frequently and with high quality. With deployment best practices built into GKE Autopilot and Cloud Run, you can transform the way your team operates while maximizing productivity with every new deployment.

You can learn whether your existing workloads are appropriate for containers with this fit assessment and these guides for migrating to containers. For new workloads, you can leverage these guides for GKE Autopilot and Cloud Run. And for more tips on cost optimization, check out our Architecture Framework for compute, containers, and serverless. If you want to learn more about how Google Cloud can help your startup, visit our page to learn more about and apply for the Google for Startups Cloud Program, and sign up for our communications for a look at our community activities, digital events, special offers, and more.
Source: Google Cloud Platform