Leapp Upgrade using Red Hat Satellite 6

In this post, we'll cover how to use the Leapp upgrade utility, a feature that aims to give you both stability and the ability to upgrade with minimal downtime. We'll enable the Leapp upgrade feature on Red Hat Satellite 6 to upgrade from RHEL 7 to RHEL 8.
Source: CloudForms

How to update Red Hat Enterprise Linux via minor releases and Extended Update Support

Customers often tell me that they need to stay on a specific RHEL minor release in order to maintain a supported configuration for a third-party application, such as SAP. The same is typically true for large, business-critical applications that don't tolerate frequent downtime for updates.

This article explains the mechanisms available in Red Hat Enterprise Linux (RHEL) to help make this possible.
Source: CloudForms

Six new features in Dialogflow CX

Today, we are excited to announce the public preview of six new features in Dialogflow CX that make it the best virtual agent platform for enterprises. With these features, our customers will be able to improve the end-user conversational experience and better manage security and deployment. These launches also include console improvements and built-in support to make the bot-building experience more efficient. Here's a selection of what's new:

Streaming partial response
Private network access to webhook targets
Search in console
System functions support
Continuous tests and deployment
Change diffs in change history

Streaming Partial Response

Until now, Dialogflow CX could only send ordered responses to the end user once the agent's turn was over (which includes webhook execution). For some customers, webhooks can take 10-15 seconds to execute, during which time the bot is silent and the end user is left waiting (and probably wondering!).

With the Streaming Partial Response feature, if a webhook is likely to run for a long time, customers can now add a static response to the fulfillment and enable partial response when using streaming APIs. This way, the Dialogflow Streaming API sends partial bot responses to the user while the webhook is executing, improving the perceived latency in such scenarios. Here's a sample conversation without partial response:

Now, a sample conversation with partial response:

To enable partial response, use the toggle button under the 'Fulfillment' section in the Dialogflow CX UI as shown below:

Private Network Access to Webhook Targets

Dialogflow now integrates with Service Directory private network access, so it can connect to webhook targets inside our customers' VPC networks. This keeps the traffic within the Google Cloud network and enforces IAM and VPC Service Controls for enterprise security.
Learn more about setting this up in our Dialogflow CX documentation.

Search Everywhere

Wondering how to navigate through hundreds of intents and thousands of training phrases? Dialogflow CX's new global search functionality enables users to search, filter, and access resources like pages, intents, training phrases, entity types, webhooks, route groups, and more. You can easily find relevant resources using the 'resource type' filter while searching. Results can be narrowed further by clicking the 'Search Options' icon, which lets users specify criteria for filtering search results. For instance, users can search for intents containing a certain training phrase, routes referencing a certain intent, and more.

Continuous Tests and Deployment

Building an enterprise-grade bot often involves intensive QA (quality assurance) and deployment processes, which can mean a lot of manual work. The CI/CD (continuous integration/continuous deployment) feature in Dialogflow CX gives bot developers new tools to manage their bot release cycles. Using this feature, you can automatically run a set of test cases configured for an environment to verify the intended behavior of the agent in that environment. You can also use the continuous deployment setting to run verification tests before deploying a flow version to the environment, preventing unintended behaviors and minimizing errors.

System Functions

Dialogflow now supports system functions that can execute common arithmetic, string-manipulation, conditional, and date/time operations. This saves users the extra effort of writing code for these operations, improving efficiency. It is also a step toward centralizing the bot's business logic, reducing dependency on webhooks. Customers can apply these inline system functions in their agent's conditions and fulfillments (such as text responses, conditional responses, and parameter presets) to generate dynamic values during conversations.
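For intuition only, the two functions used in the example below can be sketched as rough Python analogues. This is not the Dialogflow CX expression syntax (system functions run inside fulfillment expressions, not in user code), just an illustration of what COUNT and TO_TEXT do:

```python
# Rough Python analogues of two Dialogflow CX system functions.
# In a real agent these run inside fulfillment expressions, not in Python.

def count(values):
    """Analogue of COUNT: number of items in a list parameter."""
    return len(values)

def to_text(value):
    """Analogue of TO_TEXT: convert a value to text for a response."""
    return str(value)

# A session parameter holding a customer's list of items:
items = ["milk", "eggs", "bread"]
print("You have " + to_text(count(items)) + " items in your list.")
```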
Here's an example: the agent responds with a dynamically calculated count of the number of items in a customer's list. The dynamic calculation is made possible simply by using system functions to count the number of parameters (COUNT) in the session and convert that number to text (TO_TEXT) for display.

Current functions list:

Arithmetic operations: ADD, MINUS, RAND
String manipulation: LOWER, SUBSTITUTE, SPLIT, JOIN, CONCATENATE, MID, LEN, TO_TEXT, CONTAIN
Conditional: IF
Date/time: NOW, FORMAT_DATE
Other: COUNT

Change Diffs

Dialogflow CX makes it easy for teams to work together on a single agent, and this new enhancement to the Change history feature, which logs changes made to the agent, makes collaboration easier still. Users can now click each entry in the change history table to view the before and after of each resource and see exactly what changed. Sample screenshot:

Conclusion

With these new features in Dialogflow CX, it is easier for enterprises to build and manage large, complex agents. Streaming partial responses improve the end-user experience when connecting to external systems via webhooks. The search functionality and system functions make the bot-building process more efficient. The continuous tests and deployment features make it easier to manage the CI/CD pipeline. And with the Service Directory integration for private network access in webhooks, enterprises can better secure their back-end code.
Finally, change diffs along with change history make change management easier when working with large teams. Stay tuned for more features that will enhance the collaboration experience in Dialogflow CX, visual builder improvements, and features to help our power users build and manage bots more efficiently.
Source: Google Cloud Platform

Save messages, money, and time with Pub/Sub topic retention

Starting today, there is a simpler, more useful way to save and replay messages published to Pub/Sub: topic retention. Previously, you needed to individually configure and pay for message retention in each subscription. Now, when you enable topic retention, all messages sent to the topic within the chosen retention window are accessible to all the topic's subscriptions, without increasing your storage costs when you add subscriptions. Additionally, messages are retained and available for replay even if no subscriptions are attached to the topic at the time the messages are published.

Topic retention extends Pub/Sub's existing seek functionality: message replay is no longer constrained to the subscription's acknowledged messages. You can initialize new subscriptions with data retained by the topic, and any subscription can replay previously published messages. This makes it safer than ever to update stream-processing code without fear of data-processing errors, or to deploy new AI models and services built on a history of messages.

Topic retention explained

With topic retention, the topic is responsible for storing messages, independently of subscription retention settings. The topic owner has full control over the topic retention duration and pays the full cost of message storage by the topic. As a subscription owner, you can still configure subscription retention policies to meet your individual needs.

Topic-retained messages are available even when the subscription is not configured to retain messages.

Initializing data for new use cases

As organizations become more mature at using streaming data, they often want to apply new use cases to existing data streams that they've published to Pub/Sub topics.
With topic retention, you can access the history of this data stream for new use cases by creating a new subscription and seeking back to a desired point in time.

Using the gcloud CLI

Two commands, a subscription create followed by a seek, initialize a new subscription and replay data from two days in the past. Retained messages are available within a minute after the seek operation is performed.

Choosing the retention option that's right for you

Pub/Sub lets you choose between several different retention policies for your messages; here's an overview of how we recommend you use each type.

Topic retention lets you pay just once for all attached subscriptions, regardless of when they were created, to replay all messages published within the retention window. We recommend topic retention where it is desirable for the topic owner to manage shared storage.

Subscription retention allows subscription owners, in a multi-tenant configuration, to guarantee their retention needs independently of the retention settings configured by the topic owner.

Snapshots are best used to capture the state of a subscription at the time of an important event, e.g. an update to subscriber code when reading from the subscription.

Transitioning from subscription retention to topic retention

You can configure topic retention when creating a new topic or updating an existing topic via the Cloud Console or the gcloud CLI. In the CLI, the command looks like:

gcloud alpha pubsub topics update myTopic --message-retention-duration 7d

If you are migrating to topic retention from subscription storage, subscription storage can be safely disabled after 7 days.

What's next

Pub/Sub topic retention makes reprocessing data with Pub/Sub simpler and more useful.
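The create-then-seek workflow described earlier can be sketched with the gcloud CLI. The topic and subscription names below are placeholders; the Python just assembles the two commands and the RFC 3339 timestamp needed by the seek command's --time flag:

```python
from datetime import datetime, timedelta, timezone

topic, subscription = "myTopic", "mySubscription"  # placeholder names

# RFC 3339 timestamp two days in the past, for --time on the seek command.
seek_time = (datetime.now(timezone.utc) - timedelta(days=2)).strftime(
    "%Y-%m-%dT%H:%M:%SZ"
)

create_cmd = f"gcloud pubsub subscriptions create {subscription} --topic={topic}"
seek_cmd = f"gcloud pubsub subscriptions seek {subscription} --time={seek_time}"
print(create_cmd)
print(seek_cmd)
```

Running the first command attaches a new subscription to the topic; the second replays the topic-retained messages from the chosen point in time into it.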
To get started, you can read more about the feature, visit the pricing documentation, or simply enable topic retention on a topic using the Cloud Console or the gcloud CLI.
Source: Google Cloud Platform

Celebrating Women’s Equality Day with Google Cloud

Editor's note: August 26th is Women's Equality Day, a day celebrated in the United States to commemorate the 1920 adoption of the Nineteenth Amendment to the United States Constitution, which prohibits the states and the federal government from denying the right to vote on the basis of sex. This post celebrates this moment in U.S. history and all women globally.

On this day, 101 years ago, the Nineteenth Amendment to the United States Constitution was adopted. This monumental amendment granted women the right to vote and protects U.S. citizens from being denied the right to vote on the basis of sex. Today, the Google Cloud community celebrates and remembers those who have fought and continue to fight for the voting rights of all people. While this day is officially celebrated in the U.S., we're taking this time to celebrate all women globally. Women's Equality Day is an important moment to recognize the powerful women and movements that have paved the way for us while continuing to work to break barriers for the next generation. As members of the Google Cloud team, we are passionate about seeking opportunities for women and underrepresented communities to have a voice, and we strive to reflect the diversity of our users and fellow Googlers in all that we do.

We asked members of the Google Cloud team, "What does Women's Equality Day mean to you?" The responses were hopeful and passionate, calling for a celebration of the progress made while recognizing that there is still work to be done. The respondents highlighted areas where they are committed to doing the work, now and in the future, to make sure that equality is a reality for all women. We are inspired by the words of our peers and are honored to share them with you today.
Alison Wagonfeld, VP Marketing, Google Cloud, thanks all the women before her who fought for our rights and pledges to women around the world that she will fight every day for global equality for women.

Eva Tsai, Director, Marketing Strategy & Operations, Google Cloud, wisely proclaims that we do not have to choose between greatness and diversity. Without diversity, there's no greatness.

Allison Romano, Director, Digital Experience, Google Cloud, calls for fixing the wage gap to make sure women have pay equality and are treated, valued, and paid equally.

Taylor Sterling, Director of Customer Marketing, Google Workspace, Google Cloud, is proud to be part of a company that shines a light on all opportunities to expand knowledge.

Cynthia Hester, Director, Customer Programs, Google Cloud, is expanding the narrative of this day beyond women getting the right to vote and is committed to making sure that equality is a reality for all women.

Teena Piccione, Managing Director, US, TMEG Industry, asks us to join her as we celebrate today and continue to shatter that glass ceiling, hack that glass ceiling, and make a difference for the generations to come.

Patricia Hadden, Growth Marketing, Google Workspace, celebrates the women in her life who are symbols of strength, from those who are incredibly well known to our mothers and grandmothers.

Kristi Berg, Director, Enterprise Customer Demand, reminds us that there are still barriers that keep women from living up to their potential and that the empowerment of girls is key to social, economic, and political stability.

Jeana Jorgensen, Senior Director, Product Marketing, Google Cloud, reminds us of the power of encouragement and urges us to use our words for good and to build up our fellow women.

Kady Dundas, Director of Product Marketing, Google Workspace, Google Cloud, is reflecting on the "journey of our trans sisters…and celebrating with them this year".

Visit Google's Diversity, Equity & Inclusion site to learn more about Google's commitment to making diversity, equity, and inclusion part of everything we do. There you can read our 2021 Annual Diversity Report and check out Google's approach to diversity, equity, and inclusion.
Source: Google Cloud Platform

BigQuery Admin reference guide: Monitoring

Last week, we shared information on BigQuery APIs and how to use them, along with another blog on workload management best practices. This blog focuses on effectively monitoring BigQuery usage and related metrics to operationalize the workload management we have discussed so far:

Monitoring options for BigQuery resources
BigQuery monitoring best practices
Visualization options for decision making
Tips on key monitoring metrics

Monitoring options for BigQuery

Analyzing and monitoring BigQuery usage is critical for overall cost optimization and performance reporting. BigQuery provides a native admin panel with overview metrics for monitoring. BigQuery is also well integrated with existing GCP services such as Cloud Logging, for detailed logs of individual events, and Cloud Monitoring, for dashboards, analytics, reporting, and alerting on BigQuery usage and events.

BigQuery Admin Panel

BigQuery natively provides an admin panel with overview metrics. This feature is currently in preview and only available to flat-rate customers within the admin project. It is useful for organization administrators to analyze and monitor slot usage and overall performance at the organization, folder, and project levels. The admin panel provides real-time data for historical analysis and is recommended for capacity planning at the organization level. However, it only provides metrics for query jobs, and history is only available for up to 14 days.

Cloud Monitoring

Users can create custom monitoring dashboards for their projects using Cloud Monitoring. This provides high-level monitoring metrics, plus options for alerting on key metrics and automated report exports. A subset of metrics is particularly relevant to BigQuery, including slots allocated, total slots available, slots available by job, etc. Cloud Monitoring also has a limit of 375 projects that can be monitored per workspace (as of August 2021); this limit can be increased upon request.
Finally, there is limited information about reservations in this view and no side-by-side information about current reservations and assignments.

Audit logs

Google Cloud audit logs provide information about admin activities, system changes, data access, and data updates to meet security and compliance needs. The BigQuery data-activity logs provide the following key fields:

query – the BigQuery SQL executed
startTime – time when the job started
endTime – time when the job ended
totalProcessedBytes – total bytes processed for a job
totalBilledBytes – processed bytes, adjusted by the job's CPU usage
totalSlotMs – the total slot time consumed by the query job
referencedFields – the columns of the underlying table that were accessed

Users can set up an aggregated log sink at the organization, folder, or project level to collect all BigQuery-related logs. Other filters:

Logs from the Data Transfer Service: protoPayload.serviceName=bigquerydatatransfer.googleapis.com
Logs from the BigQuery Reservations API: protoPayload.serviceName=bigqueryreservation.googleapis.com

INFORMATION_SCHEMA views

BigQuery provides a set of INFORMATION_SCHEMA views, secured for different roles, for quick access to BigQuery job statistics and related metadata. These views (also known as system tables) are partitioned and clustered for faster extraction of metadata and are updated in real time. With the right set of permissions and access level, a user can monitor and review job information at the user, project, folder, and organization level.
These views allow users to:

Create customized dashboards by connecting to any BI tool
Quickly aggregate data across many dimensions such as user, project, reservation, etc.
Drill down into jobs to analyze total cost and time spent per stage
See a holistic view of the entire organization

For example, a query over these views can return the top 2 jobs in the project, with details on job ID, user, and bytes processed by each job.

Data Studio

Leverage these easy-to-set-up public Data Studio dashboards for monitoring slot and reservation usage, query troubleshooting, load slot estimations, error reporting, etc. Check out this blog for more details on performance troubleshooting using Data Studio.

Looker

The Looker marketplace provides a BigQuery Performance Monitoring Block for monitoring BigQuery usage. Check out this blog for more details on performance monitoring using Looker.

Monitoring best practices

Key metrics to monitor

Typical questions administrators or workload owners want to answer are:

What is my slot utilization for a given project?
How much data scanning and processing takes place during a given day or hour?
How many users are running jobs concurrently?
How are performance and throughput changing over time?
How can I appropriately perform cost analysis for showback and chargeback?

One of the most demanding analyses is understanding how many slots are right for a given workload, i.e. do we need more or fewer slots as workload patterns change? Below is a list of key metrics and trends to observe for better decision making on BigQuery resources.

Monitor slot usage and performance trends (week over week, month over month).
Correlate trends with any workload pattern changes, for example:

Are more users being onboarded within the same slot allocation?
Are new workloads being enabled with the same slot allocation?

You may want to allocate more slots if you see:

Concurrency – consistently increasing
Throughput – consistently decreasing
Slot utilization – consistently increasing, or staying above 90%

If slot utilization has spikes, are they on a regular frequency? In this case, you may want to leverage flex slots for predictable spikes, or consider whether some non-critical workloads can be time-shifted.

For a given set of jobs with the same priority, e.g. for a specific group of queries or users, watch for:

Avg. wait time – consistently increasing
Avg. query run time – consistently increasing

Concurrency and throughput

Concurrency is the number of queries that can run in parallel with the desired level of performance, for a set of fixed resources. In contrast, throughput is the number of completed queries for a given time duration and a fixed set of resources. In the blog on BigQuery workload management best practices, we discussed in detail how BigQuery leverages dynamic slot allocation at each step of query processing. The chart above reflects the slot replenishment process with respect to concurrency and throughput. More complex queries may require more slots, leaving fewer slots available for other queries. If there is a requirement for a certain level of concurrency and a minimum run time, increased slot capacity may be required. In contrast, simple, smaller queries give you faster replenishment of slots, and hence higher throughput to start with for a given workload. Learn more about BigQuery's fair scheduling and query processing in detail.

Slot utilization rate

The slot utilization rate is the ratio of slots used to total available slot capacity for a given period of time. This reveals opportunities for workload optimization, so you may want to dig into the utilization rate of available slots over a period.
If you see that, on average, a low percentage of available slots is used during a certain hour, you may add more scheduled jobs within that hour to further utilize your available capacity. On the other hand, a high utilization rate means that you should either move some scheduled workloads to different hours or purchase more slots.

For example, given a 500-slot reservation (capacity), a query over the system views can sum total_slot_ms over a period of time. Say we have the following results:

sum(total_slot_ms) for a given second is 453,708 ms
sum(total_slot_ms) for a given hour is 1,350,964,880 ms
sum(total_slot_ms) for a given day is 27,110,589,760 ms

The slot utilization rate can then be calculated using the following formula:

Slot utilization = sum(total_slot_ms) / slot capacity available in ms

By second: 453,708 / (500 * 1000) = 0.9074 => 90.74%
By hour: 1,350,964,880 / (500 * 1000 * 60 * 60) = 0.7505 => 75.05%
By day: 27,110,589,760 / (500 * 1000 * 60 * 60 * 24) = 0.6276 => 62.76%

Another common metric for understanding slot usage patterns is the average slot time consumed over a period, for a specific job or for workloads tied to a specific reservation.

Average slot usage over a period (highly relevant for workloads with consistent usage):
Metric: SUM(total_slot_ms) / {time duration in milliseconds} => custom duration
Daily average usage: SUM(total_slot_ms) / (1000 * 60 * 60 * 24) => for a given day
Average slot usage for an individual job: job-level statistics

Average slot utilization over a specific time period is useful for monitoring trends and understanding how slot usage patterns are changing, or whether there is a notable change in a workload. You can find more details about trends in the 'Take action' section below.
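The utilization arithmetic above can be checked with a short script. The 500-slot reservation and the total_slot_ms figures come from the example; everything else is plain arithmetic:

```python
# Check the slot-utilization arithmetic for the 500-slot reservation example.
SLOT_CAPACITY = 500  # slots in the reservation

def slot_utilization(total_slot_ms: int, duration_seconds: int) -> float:
    """Ratio of slot-ms consumed to slot-ms available in the window."""
    capacity_ms = SLOT_CAPACITY * 1000 * duration_seconds
    return total_slot_ms / capacity_ms

print(round(slot_utilization(453_708, 1) * 100, 2))              # 90.74
print(round(slot_utilization(1_350_964_880, 3600) * 100, 2))     # 75.05
print(round(slot_utilization(27_110_589_760, 86_400) * 100, 2))  # 62.76
```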
Average slot usage for an individual job is useful for understanding query run-time estimates, identifying outlier queries, and estimating slot capacity during capacity planning.

Chargeback

As more users and projects are onboarded to BigQuery, it is important for administrators not only to monitor and alert on resource utilization, but also to help users and groups efficiently manage cost and performance. Many organizations require individual project owners to be responsible for resource management and optimization. Hence, it is important to provide project-level reporting that summarizes costs and resources for the decision makers.

Below is an example of a reference architecture that enables comprehensive reporting, leveraging audit logs, INFORMATION_SCHEMA, and billing data. The architecture highlights persona-based reporting for admins and for individual users or groups, using authorized-view-based access to datasets within a monitoring project:

Export audit log data to BigQuery, scoped to the specific resources you need (in this example, BigQuery). You can also export aggregated data at the organization level.
The INFORMATION_SCHEMA provides BigQuery metadata and job execution details for the last six months. You may want to persist relevant information for your reporting into a BigQuery dataset.
Export billing data to BigQuery for cost analysis and spend optimization.
Within BigQuery, leverage security settings such as authorized views to separate data access by project, or by persona for admins vs. users.
Build analysis and reporting dashboards with visual tools such as Looker over the BigQuery dataset(s) created for monitoring.
In the chart above, examples of dashboards include:

Key KPIs for admins, such as usage or spend trends
Data governance and access reports
Showback/chargeback by project
Job-level statistics
User dashboards with relevant metrics such as query stats, data access stats, and job performance

Billing monitoring

To operationalize showback or chargeback reporting, cost metrics are important to monitor and include in your reporting application. BigQuery billing is associated with the project as the accounting entity. Google Cloud billing reports help you understand trends, project your resource costs, and answer questions such as:

What is my BigQuery project cost this month?
What is the cost trend for a resource with a specific label?
What is my forecasted future cost for a BigQuery project, based on historical trends?

You can refer to these examples to get started with billing reports and understand which metrics to monitor. Additionally, you can export billing and audit metrics to a BigQuery dataset for comprehensive analysis alongside resource monitoring. As a best practice, monitoring trends is important for optimizing spend on cloud resources. This article provides a visualization option with Looker to monitor trends.
You can take advantage of the readily available Looker block for spend analytics and the block for audit data visualization, and deploy them for your projects today.

When to use

The following tables provide guidance on choosing the right monitoring tool based on feature requirements and use cases. The following features can be considered when choosing a mechanism for BigQuery monitoring:

Integration with BigQuery INFORMATION_SCHEMA – leverage the data from INFORMATION_SCHEMA for monitoring
Integration with other data sources – join this data with other sources such as business metadata, budgets stored in Google Sheets, etc.
Monitoring at the org level – monitor all the organization's projects together
Data/filter-based alerts – alert on specific filters or data selections in the dashboard, for example, alerts for a chart filtered by a specific project or reservation
User-based alerts – alert for a specific user
On-demand report exports – export the report as PDF, CSV, etc.

Notes on the table:
1. The BigQuery Admin Panel uses INFORMATION_SCHEMA under the hood.
2. Cloud Monitoring provides only limited integration, as it surfaces only high-level metrics.
3. You can monitor up to 375 projects at a time in a single Cloud Monitoring workspace.

BigQuery monitoring is important across different use cases and personas in the organization.

Personas

Administrators – primarily concerned with secure operations and the health of the GCP fleet of resources; for example, SREs
Platform operators – often run the platform that serves internal customers; for example, data platform leads
Data owners / users – develop and operate applications and manage systems that generate source data; mostly concerned with their specific workloads; for example, developers

The following table provides guidance on the right tool for your specific requirements.

Take action

To get started quickly with monitoring on BigQuery, you can leverage the publicly available Data Studio dashboard and related GitHub resources.
Looker also provides the BigQuery Performance Monitoring Block for monitoring BigQuery usage. To quickly deploy billing monitoring with GCP, see the reference blog and related GitHub resources.

The key to successful monitoring is enabling proactive alerts, for example, alerting when the reservation slot utilization rate crosses a predetermined threshold. It is also important to enable individual users and teams in the organization to monitor their workloads using a self-service analytics framework or dashboard. This allows users to monitor trends for forecasting resource needs and to troubleshoot overall performance.

Below are additional examples of monitoring dashboards and metrics:

Organization admin reporting (proactive monitoring):
Alert based on thresholds, such as a 90% slot utilization rate
Regular reviews of consuming projects
Monitor for seasonal peaks
Review job metadata from INFORMATION_SCHEMA for large queries, using the total_bytes_processed and total_slot_ms metrics
Develop data slice-and-dice strategies in the dashboard for appropriate chargeback
Leverage audit logs for data governance and access reporting

Specific data owner reporting (self-service capabilities):
Monitor for large queries executed in the last X hours
Troubleshoot job performance using concurrency, slots used, time spent per job stage, etc.
Develop error reports and alert on critical job failures

Understand and leverage INFORMATION_SCHEMA for real-time reports and alerts. Review more examples of job stats and a technical deep dive into INFORMATION_SCHEMA in this blog.
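The 90% utilization alert mentioned above can be sketched as a simple threshold check. The threshold is illustrative, and the slot-ms figures reuse the earlier 500-slot reservation example:

```python
# Minimal sketch of a proactive alert rule: flag a reservation whose
# slot utilization crosses a threshold. The 90% figure is illustrative.
UTILIZATION_THRESHOLD = 0.90

def should_alert(total_slot_ms: int, reserved_slots: int, window_seconds: int) -> bool:
    """True when utilization in the window meets or exceeds the threshold."""
    utilization = total_slot_ms / (reserved_slots * 1000 * window_seconds)
    return utilization >= UTILIZATION_THRESHOLD

# 453,708 slot-ms in one second against a 500-slot reservation -> ~90.7%
print(should_alert(453_708, 500, 1))           # True
print(should_alert(1_350_964_880, 500, 3600))  # False (~75%)
```

In practice, the slot-ms input would come from the INFORMATION_SCHEMA job views or Cloud Monitoring metrics described earlier, and the alert itself would be delivered through your alerting tool of choice.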
Source: Google Cloud Platform

Amazon ElastiCache for Redis now supports auto scaling

Amazon ElastiCache for Redis now supports auto scaling to automatically adjust capacity and maintain steady, predictable performance at the lowest possible cost. You can scale your cluster horizontally by automatically adding or removing shards or replica nodes. ElastiCache for Redis uses AWS Application Auto Scaling to manage scaling and Amazon CloudWatch metrics to determine when it is time to scale out or in.
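As a sketch of what registering such a scaling target looks like with AWS Application Auto Scaling: the replication group name and capacity bounds below are hypothetical, and with boto3 the dictionary would be passed to the register_scalable_target call:

```python
import json

# Hypothetical target: scale the number of shards (node groups) of a
# Redis replication group between 1 and 5. Name and bounds are examples.
scalable_target = {
    "ServiceNamespace": "elasticache",
    "ResourceId": "replication-group/my-redis-cluster",
    "ScalableDimension": "elasticache:replication-group:NodeGroups",
    "MinCapacity": 1,
    "MaxCapacity": 5,
}

# With boto3 this would be registered as:
#   boto3.client("application-autoscaling").register_scalable_target(**scalable_target)
print(json.dumps(scalable_target, indent=2))
```

A scaling policy tied to a CloudWatch metric would then drive the actual scale-out and scale-in decisions within these bounds.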
Source: aws.amazon.com

AWS IoT Core now supports retained MQTT messages

Retained messages are a standard MQTT feature that gives you a simple way to store the latest important message on a topic for future subscribers. With AWS IoT Core, you can now use retained messages to easily deliver configuration information or important updates to devices without knowing exactly when they come online.
Source: aws.amazon.com

IAM Access Analyzer helps you generate IAM policies based on access activity found in your organization trail

In April 2021, IAM Access Analyzer added policy generation to help you create fine-grained policies based on the AWS CloudTrail activity stored in your account. Now we are extending policy generation so you can create policies based on the access activity stored in a designated account. For example, you can use AWS Organizations to define a uniform event-logging strategy for your organization and store all CloudTrail logs in your management account to streamline governance activities. IAM Access Analyzer helps you by reviewing the access activity stored in your designated account and generating a fine-grained IAM policy in your member accounts. This makes it easy to create policies with only the permissions required for your workloads.
Source: aws.amazon.com