CodeBuild supports publicly viewable build results

Owners of AWS CodeBuild projects can now make build logs and artifacts publicly viewable by people who are not signed in to the AWS console. This simplifies collaboration between CodeBuild project owners and open-source contributors, since project owners no longer have to manage AWS account access for every contributor.
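One way to enable this is via the AWS CLI; the following is a sketch based on the announcement (verify the exact command and flags against current AWS CLI documentation; the ARNs are placeholders):

```shell
# Make a project's build results publicly readable; the role grants
# CodeBuild read access to the project's logs and artifacts.
aws codebuild update-project-visibility \
    --project-arn arn:aws:codebuild:us-east-1:123456789012:project/my-oss-project \
    --project-visibility PUBLIC_READ \
    --resource-access-role arn:aws:iam::123456789012:role/CodeBuildPublicReadRole
```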
Source: aws.amazon.com

Understanding Cloud SQL Maintenance: why is it needed?

Since I joined the Cloud SQL team, customers have asked me one question about our service more than any other: "What happens during Cloud SQL maintenance?" It's a fair question: I'd want to know too if something was going to impact my database's availability!

In this blog series, I'll take you through the ins and outs of Cloud SQL maintenance. In Part 1, I'll share how maintenance and other system updates make database operations a whole lot simpler for our users. In Part 2, I'll take you step by step through the maintenance process and offer a behind-the-scenes look at the engineering that has gone into minimizing database downtime. In Part 3, I'll finish with an overview of how users use Cloud SQL maintenance settings and design their applications to optimize their scheduled maintenance experience. Let's get started!

What comprises a Cloud SQL instance?

We first need to cover the system components that comprise a Cloud SQL instance. Each Cloud SQL instance is powered by a virtual machine (VM) running on a host Google Cloud server. Each VM operates the database engine, such as MySQL, PostgreSQL, or SQL Server, as well as service agents that provide supporting services like logging and monitoring. For users of our high availability option, we set up a standby VM in another zone of the same region with a configuration identical to the primary VM. Database data is stored on a scalable, durable network storage device called a persistent disk that attaches to the VM. Finally, a static IP address sits in front of each VM, which ensures that the IP address an application connects to persists throughout the lifetime of the Cloud SQL instance, including through maintenance or automatic failover.

What are the database updates that happen on a Cloud SQL instance?

Over the life of a Cloud SQL instance, there are two types of updates: updates that users perform, called configuration updates, and updates that Cloud SQL performs, called system updates.

As a database's usage grows and new workloads are added, users may want to update their database configuration accordingly. These configuration updates include increasing compute resources, modifying a database flag, and enabling high availability. Although Cloud SQL makes these updates possible with the click of a button, configuration updates can require downtime. When thinking holistically about application availability, users need to plan ahead for these configuration updates.

Keeping the database instance up and running requires operational effort beyond configuration updates. Servers and disks need to be replaced and upgraded. Operating systems need to be patched as new vulnerabilities are discovered. Database engines need to be upgraded as the database software provider releases new features and fixes issues. Normally, a database administrator would need to perform each of these updates regularly to ensure their system stays reliable, protected, and up to date. Cloud SQL takes care of these system updates on behalf of our users, so that they can spend fewer cycles managing their database and more cycles developing great applications. In fact, managed system updates attract many users to our managed service.

How does maintenance fit into system updates?

In general, Cloud SQL system updates are divided into three categories: hardware updates, online updates, and maintenance.

Hardware updates improve the underlying physical infrastructure. These include swapping out a defective machine host or replacing an old disk. Google Cloud performs hardware updates without interruption to a user's application. For example, when updating a database server, Google Cloud uses live migration, an advanced technology that reliably migrates a VM from the original host to a new one while the VM stays running.

Online updates enhance the software of the supporting service agents that sit adjacent to the database engine. These updates are performed while the database is up and running, serving traffic. Online updates do not cause downtime for a user's application.

Maintenance updates the operating system and the database engine. Since these updates require that the instance be restarted, they incur some downtime. For this reason, Cloud SQL allows users to schedule maintenance to occur at the time that is least disruptive to their application.

As you can see, Cloud SQL performs most system updates without any application impact. We take care to schedule maintenance only when we need to update a part of the system that cannot be updated without interrupting the service. To moderate application impact, we bundle critical updates together into maintenance events that are scheduled once every few months. We've gone further and designed the maintenance workflow to complete quickly so that our users' applications can get back up and running; we'll discuss this in Part 2. To make maintenance more manageable, we equip users with settings such as maintenance windows and deny periods, which we will cover in more detail in Part 3.

If you're interested in learning more about how maintenance fits together with all of the other benefits of Cloud SQL, read our blog about the value of managed database services. Stay tuned for Part 2, where we will talk more specifically about how long maintenance lasts, what kinds of updates come with maintenance, and how Cloud SQL conducts maintenance to ensure minimal impact to our users' instances.
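As a preview of those settings, maintenance windows and deny periods can be configured with gcloud. A sketch, with an assumed instance name (check the Cloud SQL documentation for the current flag names):

```shell
# Prefer maintenance on Sundays at 23:00 UTC.
gcloud sql instances patch my-instance \
    --maintenance-window-day=SUN \
    --maintenance-window-hour=23

# Deny maintenance during a seasonal freeze.
gcloud sql instances patch my-instance \
    --deny-maintenance-period-start-date=2021-11-15 \
    --deny-maintenance-period-end-date=2022-01-05 \
    --deny-maintenance-period-time=00:00:00
```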
Source: Google Cloud Platform

What is Memorystore?

Many of today's applications, ranging from gaming and cybersecurity to social media, require processing data at sub-millisecond latency to deliver real-time experiences. To meet demands for low latency at increased scale and reduced cost, you need an in-memory datastore; Redis and Memcached are among the most popular. Memorystore is Google Cloud's fully managed in-memory data store service for Redis and Memcached. Like any other Google Cloud service, it is fast, scalable, highly available, and secure. It automates the complex tasks of provisioning, replication, failover, and patching, so you can spend more time on other activities. It comes with a 99.9% SLA and integrates seamlessly with your apps within Google Cloud.

Memorystore is used for different types of in-memory caches and transient stores, and Memorystore for Redis is also used as a highly available key-value store. This serves multiple use cases, including web content caches, session stores, distributed locks, stream processing, recommendations, capacity caches, gaming leaderboards, fraud/threat detection, personalization, and ad tech.

What are your application's availability needs?

Memorystore for Redis offers Basic and Standard Tiers. The Basic Tier is best suited for applications that use Redis as a cache and can withstand a cold restart and full data flush. Standard Tier instances provide high availability using replication and automatic failover. Memorystore for Memcached instances are provisioned on a node basis, with vCPUs and memory per node, which means you can select them based on your specific application requirements.

Features and capabilities

Secure: Memorystore is protected from the internet using VPC networks and private IP, and comes with IAM integration to protect your data. Memorystore for Redis also offers instance-level AUTH and in-transit encryption.
It is also compliant with major certifications (e.g., HIPAA, FedRAMP, and SOC 2).

Observability: You can monitor your instance and set up custom alerts with Cloud Monitoring. You can also integrate with OpenCensus to get more insights into client-side metrics.

Scalable: Start with the lowest tier and smallest size, then grow your instance as needed. Memorystore provides automated scaling using APIs, and optimized node placement across zones for redundancy. Memorystore for Memcached can support clusters as large as 5 TB, enabling millions of QPS at very low latency.

Highly available: Memorystore for Redis instances are replicated across two zones and provide a 99.9% availability SLA. Instances are monitored constantly, and with automatic failover, applications experience minimal disruption.

Migrate with no code changes: Memorystore is compliant with open source Redis and Memcached, which makes it easy to switch your applications over with no code changes.

Backups: Memorystore for Redis offers an import/export feature to migrate Redis instances to Google Cloud using RDB snapshots.

Use cases

Memorystore is great for use cases that require fast, real-time processing of data. Simple caching, gaming leaderboards, and real-time analytics are just a few examples.

Caching: Caches are an integral part of modern application architectures. Memorystore is used in caching use cases such as session management, frequently accessed queries, scripts, and pages.

Gaming: With data structures like the Sorted Set, Memorystore makes it easy to maintain a sorted list of scores for a leaderboard while guaranteeing uniqueness of elements. The Redis hash makes it fast and easy to store and access player profiles.

Stream processing: Whether processing a Twitter feed or a stream of data from IoT devices, Memorystore is a perfect fit for streaming solutions combined with Dataflow and Pub/Sub.

Conclusion

If your application needs to provide low latency to guarantee a great user experience, check out Memorystore.
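To make the caching use case concrete, here is a minimal cache-aside sketch in Python. A plain dict stands in for the Redis client; against a real Memorystore instance you would use a Redis client library with equivalent get/setex calls.

```python
import time

class FakeRedis:
    """Stand-in for a Redis client: get/set with a TTL, backed by a dict."""
    def __init__(self):
        self._data = {}

    def get(self, key):
        value, expires_at = self._data.get(key, (None, 0.0))
        if value is not None and time.monotonic() < expires_at:
            return value
        return None

    def setex(self, key, ttl_seconds, value):
        self._data[key] = (value, time.monotonic() + ttl_seconds)

cache = FakeRedis()
db_reads = 0

def load_user_from_db(user_id):
    # Pretend this is an expensive database query.
    global db_reads
    db_reads += 1
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    # Cache-aside: try the cache first, fall back to the database,
    # then populate the cache for subsequent reads.
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached
    user = load_user_from_db(user_id)
    cache.setex(key, 300, user)  # cache for 5 minutes
    return user

first = get_user(42)
second = get_user(42)   # served from the cache; no extra DB read
print(db_reads)         # → 1
```

The same pattern applies to query results, rendered pages, or session data: the cache absorbs repeated reads while the TTL bounds staleness.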
For a more in-depth look into Memorystore, check out the documentation. For more #GCPSketchnote, follow the GitHub repo. For similar cloud content, follow me on Twitter @pvergadia and keep an eye on thecloudgirl.dev.
Source: Google Cloud Platform

Access free training and learn how to automate hyperparameter tuning to find the best model

In today's post, we'll walk through how to easily create optimal machine learning models with BigQuery ML's recently launched automated hyperparameter tuning. You can also register for our free training on August 19 to gain more experience with hyperparameter tuning and get your questions answered by Google experts. Can't attend the training live? You can watch it on demand after August 19.

Without this feature, users have to tune hyperparameters manually by running multiple training jobs and comparing the results, and the effort may not even pay off without knowing which candidate values are worth trying. With a single extra line of SQL, users can tune a model and have BigQuery ML automatically find the optimal hyperparameters. This lets data scientists spend less time manually iterating on hyperparameters and more time unlocking insights from data. The feature is powered behind the scenes by Vertex Vizier, which was created by Google Research and is commonly used for hyperparameter tuning at Google.

BigQuery ML hyperparameter tuning helps data practitioners by:
- Optimizing model performance with one extra line of code to automatically tune hyperparameters, as well as customizing the search space
- Reducing manual time spent trying out different hyperparameters
- Leveraging transfer learning from past hyperparameter-tuned models to improve convergence of new models

How do you create a model using hyperparameter tuning?

You can follow along by first bringing the relevant data to your BigQuery project.
We'll be using the first 100K rows of data from New York taxi trips, part of the BigQuery public datasets, to predict the tip amount based on various trip features. First create a dataset, bqml_tutorial, in the US multi-region location, then run the training statements.

Without hyperparameter tuning, a model uses the default hyperparameters, which may well not be ideal. The responsibility falls on data scientists to train multiple models with different hyperparameters and compare evaluation metrics across all of them. This can be a time-consuming process, and it can become difficult to manage all the models. As a baseline, you can train a linear regression model using the default hyperparameters to try to predict the tip amount.

With hyperparameter tuning (triggered by specifying NUM_TRIALS), BigQuery ML will automatically try to optimize the relevant hyperparameters across a user-specified number of trials. The hyperparameters it tunes for each model type are listed in the documentation. With NUM_TRIALS=20, starting from the default hyperparameters, BigQuery ML trains one model after another while intelligently varying the hyperparameter values, in this case l1_reg and l2_reg.

Before training begins, the dataset is split into three parts: training, evaluation, and test. The trial hyperparameter suggestions are calculated based on the evaluation data metrics. At the end of each trial's training, the test set is used to evaluate the trial and record its metrics in the model. Using an unseen test set ensures the objectivity of the test metric reported at the end of tuning. The dataset is split three ways by default when hyperparameter tuning is enabled; the user can choose to split the data in other ways, as described in the documentation.

We also set max_parallel_trials=2 in order to accelerate the tuning process.
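The post's original SQL was not preserved here; the two models can be sketched roughly as follows (the table and column names, such as bqml_tutorial.taxi_trips_100k and tip_amount, are assumptions for illustration, not the post's exact code):

```sql
-- Baseline linear regression with default hyperparameters.
CREATE OR REPLACE MODEL bqml_tutorial.taxi_tip_model
OPTIONS (model_type = 'linear_reg',
         input_label_cols = ['tip_amount']) AS
SELECT tip_amount, fare_amount, trip_distance, passenger_count
FROM bqml_tutorial.taxi_trips_100k;

-- The same model with automated hyperparameter tuning:
-- NUM_TRIALS is the one extra line that triggers it.
CREATE OR REPLACE MODEL bqml_tutorial.taxi_tip_model_hp
OPTIONS (model_type = 'linear_reg',
         input_label_cols = ['tip_amount'],
         num_trials = 20,
         max_parallel_trials = 2) AS
SELECT tip_amount, fare_amount, trip_distance, passenger_count
FROM bqml_tutorial.taxi_trips_100k;
```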
With 2 parallel trials running at any time, the whole tuning should take roughly as long as 10 serial training jobs instead of 20.

Inspecting the trials

How do you inspect the exact hyperparameters used at each trial? You can use ML.TRIAL_INFO to inspect each of the trials when training a model with hyperparameter tuning. Tip: you can use ML.TRIAL_INFO even while your models are still training.

ML.TRIAL_INFO returns one trial per row, with the exact hyperparameter values used in that trial. In this run, the 14th trial is the optimal one, as indicated by the is_optimal column. Trial 14 is optimal because its hparam_tuning_evaluation_metrics.r2_score (the R2 score on the evaluation set) is the highest. With hyperparameter tuning, the R2 score improved impressively, from 0.448 to 0.593!

Note that this model's hyperparameters were tuned just by setting num_trials and max_parallel_trials; BigQuery ML searches through the default hyperparameters and default search spaces described in the documentation. When the default search spaces are used, the first trial (TRIAL_ID=1) always uses the default values for each of the model type's default hyperparameters, here LINEAR_REG. This helps ensure that the overall performance of the tuned model is no worse than a non-hyperparameter-tuned model.

Evaluating your model

How well does each trial perform on the test set? You can use ML.EVALUATE, which returns a row for every trial along with the corresponding evaluation metrics. In its output, the columns "R squared" and "R squared (Eval)" correspond to the evaluation metrics for the test and evaluation sets, respectively. For more details, see the data split documentation.

Making predictions with your hyperparameter-tuned model

How does BigQuery ML select which trial to use to make predictions?
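The inspection and evaluation queries described above can be sketched as follows (the model name bqml_tutorial.taxi_tip_model_hp is a placeholder):

```sql
-- One row per trial, including the hyperparameter values used
-- and an is_optimal flag.
SELECT *
FROM ML.TRIAL_INFO(MODEL bqml_tutorial.taxi_tip_model_hp);

-- One row per trial with its evaluation metrics.
SELECT *
FROM ML.EVALUATE(MODEL bqml_tutorial.taxi_tip_model_hp);
```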
ML.PREDICT will use the optimal trial by default, and it also returns which trial_id was used to make the prediction. You can also specify which trial to use by following the instructions in the documentation.

Customizing the search space

There may be times when you want to select certain hyperparameters to optimize, or change the default search space for a hyperparameter. To find the default range for each hyperparameter, you can explore the Hyperparameters and Objectives section of the documentation. For LINEAR_REG, for example, you can see the feasible range for each hyperparameter and, using the documentation as a reference, create your own customized CREATE MODEL statement.

Transfer learning from previous runs

If this isn't enough, hyperparameter tuning in BigQuery ML with Vertex Vizier running behind the scenes means you also get the added benefit of transfer learning between the models you train, as described in the documentation.

How many trials do I need to tune a model?

The rule of thumb is at least 10 times the number of hyperparameters, assuming no parallel trials. For example, LINEAR_REG tunes 2 hyperparameters by default, so we recommend NUM_TRIALS=20.

Pricing

The cost of hyperparameter tuning training is the sum of the costs of all executed trials: if you train a model with 20 trials, the bill equals the total cost across all 20 trials. The pricing of each trial is consistent with the existing BigQuery ML pricing model. Please be aware that the costs are likely to be much higher than training one model at a time.

Exporting hyperparameter-tuned models out of BigQuery ML

If you're looking to use your hyperparameter-tuned model outside of BigQuery, you can export it to Google Cloud Storage and then, for example, host it in a Vertex AI Endpoint for online predictions.

Summary

With automated hyperparameter tuning in BigQuery ML, it's as simple as adding one extra line of code (NUM_TRIALS) to easily improve model performance!
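A rough sketch of prediction and of a customized search space (model, table, and column names are placeholders; HPARAM_RANGE and HPARAM_CANDIDATES are the BigQuery ML keywords for continuous and discrete search spaces):

```sql
-- Predict with the tuned model; the optimal trial is used by default.
SELECT *
FROM ML.PREDICT(MODEL bqml_tutorial.taxi_tip_model_hp,
                (SELECT fare_amount, trip_distance, passenger_count
                 FROM bqml_tutorial.taxi_trips_100k
                 LIMIT 10));

-- Customize which hyperparameters are tuned and over what ranges.
CREATE OR REPLACE MODEL bqml_tutorial.taxi_tip_model_custom
OPTIONS (model_type = 'linear_reg',
         input_label_cols = ['tip_amount'],
         num_trials = 20,
         l1_reg = HPARAM_RANGE(0, 10),
         l2_reg = HPARAM_CANDIDATES([0, 0.1, 1, 10])) AS
SELECT tip_amount, fare_amount, trip_distance, passenger_count
FROM bqml_tutorial.taxi_trips_100k;
```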
Ready to get more experience with hyperparameter tuning, or have questions you'd like to ask? Sign up here for our no-cost August 19 training.
Source: Google Cloud Platform

Unlocking Application Modernization with Microservices and APIs

If you build apps and services that your customers consume, two things are certain: you're exposing APIs in some form or other, and your apps are made up of multiple functions working together to deliver products and services. As you scale up and grow, your enterprise architecture can benefit from a sound strategy for both API management and service management, both of which impact your customer and developer experience. In this article, we'll explore how these two technologies fit into your application modernization strategy, including how we're seeing our customers use Anthos Service Mesh and Apigee API Management together.

How APIs, microservices, and a service mesh are related

APIs accelerate your modernization journey by unlocking legacy data and applications and allowing them to be consumed by new cloud services. As a result, organizations can launch new mobile, web, and voice experiences for customers. The API layer acts as a buffer between legacy services and front-end systems, and keeps the front-end systems up and running by routing requests as the legacy services are migrated or transformed into modern architectures. In addition, an API management platform like Apigee manages the lifecycle of those APIs with design, publishing, analytics, and governance capabilities.

Once microservices architectures become prevalent in an organization, technical complexity increases and organizations find a need for deeper, more granular visibility into their applications and services. This is where a service mesh comes into play. A service mesh is not only an architecture that empowers managed, observable, and secure communication across an organization's services, but also the tool that enables it. Anthos Service Mesh lets organizations build platform-scale microservices with requirements around standardized security, policies, and controls, and it provides teams with in-depth telemetry, consistent monitoring, and policies for properly setting and adhering to SLOs.

How API management and a service mesh complement one another

Many organizations ask themselves, "Do I really need both an API management platform and a service mesh? How do I manage them together?" The answer to the first question is yes. These two technologies focus on different aspects of the technology stack and are complementary. A service mesh modernizes your application networking stack by standardizing how you deal with network security, observability, and traffic management. An API management layer focuses on managing the lifecycle of APIs, including publishing, governance, and usage analytics.

Most organizations draw a logical boundary at business units or technology groups. Sharing microservices outside that boundary, with other business units or with partners, is where Apigee plays a significant role. With Apigee, you can drive and manage the consumption of those services through developer portals, API usage monitoring, authentication, and more.

Google Cloud offers Anthos Service Mesh for service management and Apigee for API management. These two products work together to provide IT teams with a seamless experience throughout the application modernization journey. The Apigee Adapter for Envoy enables organizations that use Anthos Service Mesh to reap the benefits of Apigee by enforcing API management policies within a service mesh.

Accelerate your application modernization journey

Though the journey to application modernization doesn't always follow a clear-cut path, by adopting API management and a service mesh as part of that journey, your organization can be better equipped to respond rapidly to changing markets, securely and at scale. Wherever you are on your application modernization journey, Google Cloud can help. To learn more about how service management and API management can be part of it, read this whitepaper.
Source: Google Cloud Platform