Congrats to Cory Snider, Outstanding Employee Award Winner

At Mirantis, our strength and success come from the talent and hard work of our employees, and we believe in recognizing and rewarding excellence among our staff. Recently, our HR team asked managers across the company to nominate candidates for our Outstanding Employee awards who not only embody Mirantis’ core values but also produce outstanding … Continued
Source: Mirantis

Getting Started with Mirantis Container Runtime on Windows Server

If you’re a Windows Server container developer who uses Docker Swarm, relies on security features like FIPS 140-2 support, or simply runs in production, you may find yourself needing to migrate to Mirantis Container Runtime. Sound like you? We’ve got you covered. In this walkthrough, we’ll show you how to get started with Mirantis Container … Continued
Source: Mirantis

Announcing Apigee Integration: An API-first approach for connecting data and applications

Enterprises across the globe are struggling to innovate because their data and applications are siloed, disconnected, and not easily accessible. Today, Google Cloud is announcing the general availability of Apigee Integration, a solution that helps enterprises easily connect their existing data and applications and surface them as easily accessible APIs that can power new experiences. Google Cloud’s Apigee is an industry-leading, full-lifecycle API management platform that gives businesses control over and visibility into the APIs that connect applications and data across the enterprise and across clouds. With the launch of Apigee Integration, Google Cloud brings together the best of API management and integration in a unified platform built on cloud-native architecture principles, allowing enterprise IT teams to scale their operations, improve developer productivity, and increase speed to market.

Data and applications are enablers of digital experiences. However, for many enterprises across the world, data and applications are siloed, buried inside various on-premises and cloud servers, and cannot be easily accessed by internal developers or partners. This challenge slows down digital transformation efforts by extending development timelines from weeks to months. Integration and API management solutions address this challenge by enabling developers to seamlessly connect their data and applications and surface them as easily consumable APIs.

According to Holger Mueller, Vice President and Principal Analyst at Constellation Research, “Organizations across the world have had to fast track their digital transformation efforts to meet the ever-expanding list of customer demands with the key focus on achieving business growth. A successful integration and API strategy is a crucial component of a successful digital strategy. Companies who are able to build the infrastructure that connects data and applications, and makes them accessible via APIs to internal and external developers are more likely to lead their industries in innovation and growth. Looking ahead, it’s critical that companies can address important connectivity challenges by integrating fragmented data and applications, and surfacing them as managed APIs.”

Our approach to integration leads with the digital experiences that customers, frontline employees, and partners need, and then delivers the customization of data and services required to create that impact. This “API-first” approach means that APIs are the end products that address a set of specific business requirements; the design and development of an API therefore comes before the configuration of back-end data and infrastructure. With this launch, Apigee can now enable enterprise IT teams to accelerate the speed of innovation by reducing the risk associated with data connectivity challenges. Apigee Integration will be generally available to Apigee customers starting October 6th and will have the following capabilities:

- A unified solution that lets developers not only connect their existing applications, but also build and manage APIs within the same interface.
- A set of pre-built connectors to Salesforce, Cloud SQL (MySQL, PostgreSQL), Cloud Pub/Sub, and BigQuery. Connectors for additional third-party applications and databases are coming soon.
- Advanced integration patterns that enable use cases such as looping, parallel execution, data mapping, conditional routing, manual approvals, and event-based triggers.
ATB Financial is one of the many Apigee customers who have leveraged the API-first integration approach to power their digital transformation efforts. According to the company’s Vice President of Tech Strategy & Architecture, Innes Holman: “Connecting our enterprise applications to solutions for our employees, partners, and clients requires coordination of many factors including security and data compatibility. With Apigee Integration and API Management, we are planning to facilitate our API integration approach by connecting, securing, and managing the multitude of data & applications required to support digital experiences at ATB Financial.” To learn more, please visit this page.

We also continue to add rich capabilities that make it easier for enterprise developers and architects to use API management alongside other technologies and processes. We are announcing the following updates to Apigee:

- Software Development Lifecycle Tools: Apigee is adding capabilities that give developers more flexibility to create, modify, test, and deploy Apigee APIs to production using their existing SDLC tools and processes. This includes an extension to the Google Cloud Code plugin for VS Code, integration with Git-based repos, and a CLI for archive bundling and deployment. Click here to learn more.
- Native Policies for Conversational AI Integration: To enable faster deployment of conversational AI solutions, Apigee now has out-of-the-box policies to connect with Dialogflow. These capabilities are generally available and allow users to parse Dialogflow requests, set responses, and validate parameters captured by Dialogflow. To learn more about these capabilities, watch this video.
- GraphQL Support: To power more data-centric use cases, we are also announcing native Apigee support for GraphQL APIs. Developers can now extend all REST API management capabilities, including productizing APIs, limiting API traffic, publishing to portals, protecting against bot attacks, monitoring, and monetization, to GraphQL APIs. Click here to learn more.

Want to learn more? Join us at Next ‘21 to hear from our product leaders and customers on how to leverage Apigee for your next digital transformation initiative.

Related article: The time for digital excellence is here—Introducing Apigee X
Source: Google Cloud Platform

Improve your security posture with new Overly Permissive Firewall Rule Insights

Are you a network security engineer managing large shared VPCs with many projects and applications deployed, struggling to clean up the hundreds of firewall rules that have accumulated over time in the VPC firewall rule set? Are you a network admin who set up open firewall rules to accelerate a cloud migration, but now struggles to close them down without causing outages? Are you a security admin trying to get a realistic assessment of the quality of your firewall rule configuration, and to evaluate and improve your security posture? If the answer to any of these questions is “yes”, you’ve come to the right place!

Firewall Insights and what’s new

In a previous blog post, we introduced Firewall Insights, a tool that provides visibility into firewall rule usage metrics and automatic analysis of firewall rule misconfigurations. Today we would like to introduce a new module within Firewall Insights called “Overly Permissive Firewall Rule Insights”.

Overly permissive firewall rules have been a major issue for many of our customers, both during cloud migration and in the subsequent operational phase. In the past, some customers have attempted to address this pain point by writing their own scripts or manually reviewing large volumes of firewall rules, without much success. With Overly Permissive Firewall Rule Insights, customers can now rely on GCP to automatically analyze massive amounts of firewall logs and generate easy-to-understand insights and recommendations that help them optimize their firewall configurations and improve their network security posture.

Overly Permissive Firewall Rule Insights

The types of insights and recommendations that can be generated through the overly permissive firewall rule analysis include:

- Unused firewall rules
- Unused firewall rule attributes, such as IP ranges, port ranges, tags, and service accounts
- Open IP and port ranges that are unnecessarily wide

In addition, using machine learning algorithms, the Firewall Insights engine can look for similar firewall rules in the same organization and use their historical usage data to predict future usage for unused rules and attributes, giving users an additional data point for making better decisions during firewall rule optimization. Now let’s take a look at how you can generate these insights for your projects.

Enable and configure Overly Permissive Firewall Rule Insights

First, you will need to enable the “Overly Permissive Rule Insights” module on the Firewall Insights Configuration page. Once enabled, the system will start scanning the firewall logs for the project during the observation window and generate insight updates on a daily basis. The default observation window for this analysis is six weeks, but you can adjust it to your traffic pattern in the “Observation Period” configuration tab.
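Besides the console, the generated insights are also exposed programmatically through the Recommender API. The following is a minimal sketch, assuming the google-cloud-recommender Python client and the google.compute.firewall.Insight insight type; the project ID is a placeholder:

```python
# Minimal sketch: list firewall insights via the Recommender API Python
# client. Assumes the google-cloud-recommender package and the
# google.compute.firewall.Insight insight type; adjust the project ID.
from google.cloud import recommender_v1

PROJECT_ID = "my-project"  # hypothetical project ID

client = recommender_v1.RecommenderClient()
parent = (
    f"projects/{PROJECT_ID}/locations/global/"
    "insightTypes/google.compute.firewall.Insight"
)

for insight in client.list_insights(parent=parent):
    # Each insight describes, for example, an unused rule or an overly
    # permissive IP/port range, along with its supporting observations.
    print(insight.insight_subtype, insight.state_info.state)
    print(insight.description)
```

From here, each insight can be routed into your own ticketing or review workflow.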
Discover unused allow rules and attributes to clean up

If you are like most network and security admins working with complex cloud networks, you have probably accumulated a set of firewall rules that you know are not optimally configured, but don’t know where to start cleaning them up. With Overly Permissive Firewall Rule Insights, you can rely on GCP to help give you the answer. Once you enable this module and firewall logging for the target project, the system will analyze all network logs to reveal the traffic pattern going through the firewall rules.

Firewall Rule Insights will automatically generate a list of allow rules that have had no hits, as well as specific IPs, ports, or tags configured in an allow rule that have had no hits, so you can focus your cleanup investigation on this group of rules and attributes. Meanwhile, the system will also look at similarly configured firewall rules in your organization and their hit patterns to predict whether the unused rules and attributes are likely to be hit in the near future, so you can use this information as a reference when deciding whether it is safe to remove a rule or attribute from your firewall configuration.

Get recommendations on how to minimize permitted IP and port ranges

Sometimes, when you are in a hurry to get application connectivity established, you may open an overly wide IP or port range on your firewall, thinking you will close it down later, but never really do it properly. This is a common problem that many network and security admins run into, typically during cloud migration. If this is an issue you are struggling with, Overly Permissive Firewall Rule Insights now gives you a solution: you can rely on GCP to automatically scan the firewall logs for a VPC network, analyze its firewall rules and the patterns of traffic coming in and out of the network, identify overly permissive IP and port ranges in the allow rules, and recommend how to replace those wide ranges with smaller ones that exclude the portions not needed for legitimate traffic.

To ensure this function works properly and makes accurate recommendations, you will need to enable firewall logging for all rules you are looking to optimize, because the engine relies on firewall logs as its data source for the analysis. The insights are updated daily, based on incremental analysis of the new log entries processed that day. For more information on the Firewall Insights product, please refer to our public documentation.

Related article: Take control of your firewall rules with Firewall Insights
Source: Google Cloud Platform

Cloud CISO Perspectives: September 2021

We’re busy getting ready for Google Cloud Next ‘21, where we’re excited to talk about the latest updates to our security portfolio and new ways we’re committing to help all of our customers build securely with our cloud. Here are a few sessions you don’t want to miss, with Google Cloud security experts and customers covering top-of-mind areas in today’s cybersecurity landscape:

- The path to invisible security
- Secure supply chain best practices & toolshare
- Ransomware and cyber resilience
- Operate with zero trust using BeyondCorp Enterprise
- Trust the cloud more by trusting it less: Ubiquitous data encryption

In this month’s post, I’ll recap the latest from Google Cloud security and industry highlights for global compliance efforts and healthcare organizations.

Thoughts from around the industry

Supporting federal Zero Trust strategies in the U.S.: Google Cloud recently submitted our recommendations for the Office of Management and Budget (OMB) guidance document on Moving the U.S. Government Towards Zero Trust Cybersecurity Principles and on NIST’s Zero Trust Starting Guide. We strongly support the U.S. Government’s efforts to embrace zero trust principles and architecture as part of its mandate to improve the cybersecurity of federal agencies under the Biden Administration’s Executive Order on Cybersecurity. We believe that successfully modernizing the government’s approach to security requires a migration to zero trust architecture and embracing the security benefits offered by modern, cloud-based infrastructure. This is especially true following the recent SolarWinds and Hafnium attacks, which demonstrated that, even with best efforts and intentions, credentials will periodically fall into the wrong hands. This demands a new model of security that recognizes that implicit trust in any component of a complex, interconnected system can create significant security risks. To learn more about our holistic zero trust implementation at Google and the products customers can adopt on their own zero trust journey, visit: A unified and proven Zero Trust system with BeyondCorp and BeyondProd; BeyondProd: A new approach to cloud-native security; BeyondCorp: A New Approach to Enterprise Security.

Sovereignty in the cloud: The ability to achieve greater levels of digital sovereignty has been a growing requirement from cloud computing customers around the world. In our previously published materials, we’ve characterized digital sovereignty requirements into three distinct pillars: data sovereignty, operational sovereignty, and software sovereignty. These requirements are not mutually exclusive; each requires different technical solutions, and each comes with its own set of tradeoffs that customers need to consider. What also comes through clearly is that customers want solutions that meet their sovereignty requirements without compromising on functionality or innovation. We’ve been working diligently to provide such solutions, with capabilities built into our public cloud platform and, per our recent announcement, sovereign cloud solutions powered by Google Cloud and offered through trusted partners.
Compliance update across Asia-Pacific: In the APAC region, there have been some key regulatory updates over the course of the last year, including IRAP (Information Security Registered Assessors Program), a framework for assessing the implementation and effectiveness of an organization’s security controls against the Australian government’s security requirements, and RBIA (Risk Based Internal Audit), an internal audit methodology that provides assurance to a board of directors on the effectiveness of how risks are managed. We’ve posted updates to guidance and resources that help support our customers’ regulatory and compliance requirements as part of our compliance offerings, which include compliance mappings geared toward assisting regulated entities with their regulatory notification and outsourcing requirements.

Open Source Technology Improvement Fund: We recently pledged to provide $100 million to support third-party foundations that manage open source security priorities and help fix vulnerabilities. As part of this commitment, we are excited to announce our support of the Open Source Technology Improvement Fund (OSTIF) to improve the security of eight open-source projects, including Git, Laravel, Jackson-core & Jackson-databind, and others.

President’s Council of Advisors on Science and Technology (PCAST): Some personal news I am excited to share this month: I’m honored to have been appointed by President Biden to the President’s Council of Advisors on Science and Technology. It’s a role I take on with great responsibility alongside my fellow members, and I look forward to sharing more about what we can help the nation achieve in important areas like cybersecurity. I’m also very proud to be joining the most diverse PCAST in history.

Must-read and must-listen security stories and podcasts

We’ve been recapping the media and podcast hits from Google security leaders and industry voices. Keep reading to catch up on the latest security highlights in the news this month.

Security for the Telecom Transformation: I sat down with fellow CISOs from major telecommunications providers to discuss the future of security for the industry’s transformation. We covered topics like IT modernization with the cloud, zero trust, and best practices for detection and response.

WSJ CIO Network Summit: Last week, Google’s Heather Adkins participated in a fireside chat with WSJ Deputy Editor Kim Nash. Their conversation covered a broad range of timely cybersecurity topics: opportunities and challenges for CIOs under the Biden Cybersecurity EO, such as IT modernization; the definition of zero trust as a security philosophy rather than a specific set of tools, based on our lessons learned at Google; and best practices for how CIOs and CISOs can work together to enhance security and achieve business objectives in tandem by adopting modern technologies like the cloud. Read more in this article for highlights from their insightful and timely interview.

Washington Post Live – Securing Cyberspace: Google Cloud’s Jeanette Manfra appeared on Washington Post Live to discuss the growing need for heightened cybersecurity across industries to prevent future cyberattacks, the role of the Cybersecurity and Infrastructure Security Agency (CISA) in facilitating conversations between industries, and how to deepen the partnerships between the public and private sectors to benefit our collective security.
Not Your Bug, But Still Your Problem: Why You Must Secure Your Software Supply Chain: Google Cloud VP of Infrastructure and Google Fellow Eric Brewer and I sat down with Censys.io CTO Derek Abdine for a recent webinar to discuss how organizations can better understand their software supply chain risks and stay in control of their assets and of what software is deployed both inside and outside the network.

Debunking Zero Trust in WIRED: Alongside Google’s Sr. Director of Information Security Heather Adkins and Google Cloud’s Director of Risk and Compliance Jeanette Manfra, we help break down the true meaning of zero trust in today’s security landscape: the term is not a magic set of products, but a philosophy that organizations need to adopt across their business when it comes to security architectures.

Google Cloud Security Podcast: Our team continues to collaborate with voices from across the industry in our podcast. This month, episodes unpacked topics like malware hunting with VirusTotal, cloud attack surface management with Censys.io CTO Derek Abdine, and cloud certification best practices and tips with The Certs Guy!

Google Cloud Security Highlights

Updated data processing terms to reflect new EU Standard Contractual Clauses: For years, Google Cloud customers who are subject to European data protection laws have relied on our Standard Contractual Clauses (SCCs) to legitimize overseas data transfers when using our services. In response to the new EU SCCs approved by the European Commission in June, we just updated our data processing terms for Google Cloud Platform and Google Workspace. For customers, this approach offers clear and transparent support for their compliance with applicable European data protection laws. Along with this update, we published a new paper that outlines the European legal rules for data transfers and explains our approach to implementing the SCCs, so that customers can better understand what our updated terms mean for them and their privacy compliance.

Toronto region launch: We announced our latest cloud region in Toronto, Canada. Toronto joins 27 existing Google Cloud regions connected via our high-performance network, helping customers better serve their users and customers throughout the globe. In combination with our Montreal region, customers now benefit from improved business continuity planning with the distributed, secure infrastructure needed to meet IT and business requirements for disaster recovery, while maintaining data sovereignty. As part of this expansion, we also announced the preview availability of Assured Workloads for Canada, a capability that allows customers to secure and configure sensitive workloads in accordance with specific regulatory or policy requirements.

Protecting healthcare data with Cloud DLP: Our solutions team recently released a detailed guide for getting started with Cloud DLP to protect sensitive healthcare and patient data. Cloud DLP helps customers inspect and mask this sensitive data with techniques like redaction, bucketing, date-shifting, and tokenization, which help strike the balance between risk and utility. The guide outlines the steps customers can take to create a secure foundation for protecting patient data.
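To make those masking techniques concrete, here is a minimal sketch of de-identification with the Cloud DLP Python client. The project ID and sample record are hypothetical, and the guide above covers production-grade configurations:

```python
# Minimal sketch: de-identify a (made-up) patient record with Cloud DLP,
# replacing detected person names with the info type placeholder.
from google.cloud import dlp_v2

PROJECT = "my-project"  # hypothetical project ID
client = dlp_v2.DlpServiceClient()

response = client.deidentify_content(
    request={
        "parent": f"projects/{PROJECT}",
        # Inspect for person names; real configs would add more info types.
        "inspect_config": {"info_types": [{"name": "PERSON_NAME"}]},
        "deidentify_config": {
            "info_type_transformations": {
                "transformations": [{
                    "primitive_transformation": {
                        "replace_with_info_type_config": {}
                    }
                }]
            }
        },
        "item": {"value": "Patient Jane Doe admitted on 2021-09-30."},
    }
)
print(response.item.value)  # e.g. "Patient [PERSON_NAME] admitted on ..."
```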
Network Forensics and Telemetry blueprint: To detect threat actors in cloud infrastructure, network monitoring can provide agentless detection insight where endpoint logs cannot. In the cloud, with powerful technologies like Google Cloud’s Packet Mirroring service, capturing network traffic across your infrastructure is simpler and more streamlined. Our new Network Forensics and Telemetry blueprint allows customers to easily deploy capabilities for network monitoring and forensics via Terraform, to aid visibility through Chronicle or any SIEM. We also published a helpful companion blog comparing the range of analytics options available on Google Cloud for network threat detection.

Backup and disaster recovery in the cloud: The ability to recover quickly from security incidents is a fundamental capability for every security program, and cloud technology offers a wide range of options to help make organizations more resilient. Recent posts from our teams provide up-to-date views of workload-specific and enterprise-wide Google Cloud backup and DR options.

That wraps up another month of my cybersecurity thoughts and highlights. If you’d like to have this Cloud CISO Perspectives post delivered every month to your inbox, click here to sign up. And remember to register for the Google Cloud Next ‘21 conference, happening October 12-14 virtually.

Related article: Cloud CISO Perspectives: August 2021
Source: Google Cloud Platform

Do more with less: Introducing Cloud SQL Cost optimization recommendations with Active Assist

With Cloud SQL, teams spend less time on database operations and maintenance and more time on innovation and digital transformation efforts. This increased bandwidth for strategic work can sometimes lead to significant growth in database fleet size, which in turn can introduce operational complexity when it comes to managing cost. If your financial operations team flags that database instances are exceeding their budget, it can take a substantial amount of toil, expertise, and time to identify waste across a large number of projects. And given the mission-critical nature of your databases, it can be difficult to make changes with confidence while optimizing costs.

We are therefore excited to introduce Cloud SQL cost insights and recommendations, powered by Active Assist, to address these challenges while minimizing the effort required to keep costs optimized. These new recommenders will help you detect and right-size over-provisioned Cloud SQL instances, detect idle instances, and optimize your Cloud SQL billing. Cloud SQL recommendations use advanced analytics and machine learning to identify, with a high degree of confidence, the over-provisioned and idle instances in your fleet, as well as those that may be able to take advantage of committed use discounts. This feature is available for Cloud SQL for MySQL, PostgreSQL, and SQL Server via the Recommender API and Recommendation Hub today, which makes it easy for you to integrate it with your company’s existing workflow management and communication tools, or to export results to a BigQuery table for custom analysis.

Renault Group, a French multinational automobile manufacturer and one of our early customers for Cloud SQL recommendations, is already a fan:

“When we first ran Google’s early prototype, we were really impressed with its accuracy, given that we know how challenging it can be to analyze and interpret activity on database instances. After thoroughly testing this feature on 140 pilot projects, we ended up realizing that almost 20% of our Cloud SQL instances were idle and took appropriate actions. Not only did these recommendations help us reduce waste, but they also saved us significant effort in the writing and maintaining of custom scripts. We are looking to bring this in as part of our organization-wide optimization dashboard.”
Stéphane Gamondes, Cloud Office Product Leader, Renault Group

What are the main sources of waste in cloud databases?

Based on our Cloud SQL analysis and customer feedback, we identified the three most common reasons for exceeding budget:

- Over-provisioned resources. When developers err on the safe side and provision unnecessarily large instances, it can lead to unnecessary spending. It’s also common for database administrators who are used to provisioning larger instances on-premises, where it can be non-trivial to quickly increase instance size, to carry this practice over into the cloud environment, where it’s not as critical thanks to the cloud’s elasticity.
- Idle resources. Cloud SQL makes it extremely easy for developers to create new instances to build a prototype or run a dev/test environment. As a result, it’s not uncommon to see idle instances left running in non-production environments.
- Discounts not leveraged. While workloads with predictable resource needs can benefit from committed use discounts, we see that many customers don’t always utilize those discounts, partially due to the complexity associated with figuring them out at scale.

Let’s take a peek at these new Cloud SQL cost recommendations.

[Image: Recommendation Hub example summary card]

Rightsize overallocated instances

One of the key challenges associated with detecting and remediating overallocated instances is defining what it means for a database instance to be too large for a given workload. Active Assist uses machine learning and Google’s fleet-level Cloud SQL telemetry data to identify instances that have low peak utilization for CPU and/or memory, ensuring that they can be rightsized with minimal risk and still have enough capacity to handle their peak workloads after they are right-sized. To make it easier for you to act on each of these rightsizing recommendations, the feature also provides an at-a-glance view of your instance usage over the past 30 days.

[Image: example rightsize overallocated instances recommender]

Stop idle instances

Idle or abandoned resources, ranging from entire projects to individual Cloud SQL instances that tend to be forgotten about, are known to be among the largest contributors to waste in cloud spending. One of the challenges in detecting and remediating such instances is distinguishing Cloud SQL instances that have a low level of activity by design from those that are truly idle but still show some activity, for example due to health monitoring and maintenance. This feature uses machine learning to estimate activity across all the Cloud SQL instances managed by Google and identify, with a high degree of precision, the instances that are likely to be idle.

Leverage long-term commitment discounts

Cloud SQL committed use discounts give you a 25% discount off of on-demand pricing for a one-year commitment and a 52% discount for a three-year commitment. Figuring out the most optimal committed use discounts can be easier said than done, as it requires a thorough analysis of each workload’s usage patterns to establish the stable usage baseline and estimate the impact of the billing model changes. Active Assist detects Cloud SQL workloads with predictable resource needs and recommends purchasing committed use discounts. Unlike the sizing and idle instance recommendations, committed use discount recommendations for Cloud SQL are only available in private preview today (please use this form if you are interested in early access). The committed use recommendations let you choose between optimizing to cover your stable usage and maximizing savings.

Getting started with Cloud SQL cost optimization recommendations

Head over to Recommendation Hub to see if there are already some Cloud SQL cost optimization recommendations available on your project.
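If you prefer to pull these recommendations programmatically, a minimal sketch against the Recommender API might look like the following. It assumes the documented idle and overprovisioned recommender IDs for Cloud SQL; the project ID and region are placeholders:

```python
# Minimal sketch: list Cloud SQL cost recommendations via the Recommender
# API Python client. Assumes the google-cloud-recommender package; Cloud SQL
# recommenders are regional, so the location is the instance region.
from google.cloud import recommender_v1

PROJECT_ID = "my-project"   # hypothetical project ID
LOCATION = "us-central1"    # hypothetical region

client = recommender_v1.RecommenderClient()
for recommender_id in (
    "google.cloudsql.instance.IdleRecommender",
    "google.cloudsql.instance.OverprovisionedRecommender",
):
    parent = (
        f"projects/{PROJECT_ID}/locations/{LOCATION}/"
        f"recommenders/{recommender_id}"
    )
    for rec in client.list_recommendations(parent=parent):
        # Each recommendation carries a description and a projected cost
        # impact that you can route into your own workflow tools.
        print(rec.description, rec.primary_impact.cost_projection.cost)
```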
You can also automatically export all recommendations from your organization to BigQuery and then investigate them with Data Studio or Looker, or use Connected Sheets, which lets you use Google Sheets to interact with the data stored in BigQuery without having to write queries. As with any other recommender, you can choose to opt out of data processing at any time by disabling the appropriate data groups in the Transparency & control tab under Privacy & Security settings.

We hope that you can leverage Cloud SQL cost recommendations to optimize your database fleet and reduce cost, and we can’t wait to hear your feedback and thoughts about this feature! Please feel free to reach us at active-assist-feedback@google.com, and we also invite you to sign up for our Active Assist Trusted Tester Group if you would like early access to the newest features as they are developed.

Related article: Database observability for developers: introducing Cloud SQL Insights
Source: Google Cloud Platform

What is Cloud Load Balancing?

Let’s say your new application has been a hit. Usage is growing across the world, and you now need to figure out how to scale, optimize, and secure the app while keeping your costs down and your users happy. That’s where Cloud Load Balancing comes in.

What is Cloud Load Balancing?

Cloud Load Balancing is a fully distributed load balancing solution that balances user traffic (HTTP(S), HTTP/2 with gRPC, TCP/SSL, UDP, and QUIC) across multiple backends to avoid congestion, reduce latency, increase security, and reduce costs. It is built on the same frontend-serving infrastructure that powers Google, supporting more than one million queries per second with consistently high performance and low latency.

- Software-defined network (SDN): Cloud Load Balancing is not an instance- or device-based solution, which means you won’t be locked into physical infrastructure or face high-availability, scale, and management challenges.
- Single global anycast IP and autoscaling: Cloud Load Balancing front-ends all your backend instances in regions around the world. It provides cross-region load balancing, including automatic multi-region failover, which gradually moves traffic in fractions if backends become unhealthy, and it scales automatically if more resources are needed.

How does Cloud Load Balancing work?

External load balancing

Consider the following scenario. You have a user, Shen, in California. You deploy your frontend instances in that region and configure a load-balancing virtual IP (VIP). When your user base expands to another region, all you need to do is create instances in the additional region; there is no change to the VIP or the DNS server settings. As your app goes global, the same pattern follows: Maya, from India, is routed to the instance closest to her in India. If the instances in India are overloaded and autoscaling to handle the load, Maya is seamlessly redirected to other instances in the meantime and routed back to India once its instances have scaled sufficiently. This is an example of external load balancing at Layer 7.

Internal load balancing

In any three-tier app, behind the frontend you have the middleware and the data sources to interact with in order to fulfill a user request. That’s where you need Layer 4 internal load balancing between the frontend and the other internal tiers. Layer 4 internal load balancing is for TCP/UDP traffic behind an RFC 1918 VIP, where the client IP is preserved. You get automatic health checks, and there is no middle proxy; it uses the SDN control and data plane for load balancing.

How to use global HTTP(S) load balancing

For global HTTP(S) load balancing, the global anycast VIP (IPv4 or IPv6) is associated with a forwarding rule, which directs traffic to a target proxy. The target proxy terminates the client session; for HTTPS, this is where you deploy your certificates, define the backend host, and define the path rules. The URL map provides Layer 7 routing and directs the client request to the appropriate backend service. The backend services can be managed instance groups (MIGs) for compute instances, or network endpoint groups (NEGs) for your containerized workloads. This is also where service instance capacity and health are determined. Cloud CDN can be enabled to cache content for improved performance, and you can set up firewall rules to control traffic to and from your backends.
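To make the chain concrete, here is a minimal sketch that wires up the pieces in order (backend service, URL map, target proxy, forwarding rule) with the google-cloud-compute Python client. The project, instance group, and health check names are placeholders, and a production setup would add certificates, Cloud CDN, and firewall rules as described above:

```python
# Minimal sketch: assemble the global HTTP load balancing chain
# (backend service -> URL map -> target proxy -> forwarding rule).
# Assumes a pre-existing MIG and global health check (placeholder URLs);
# insert() returns an operation, and .result() waits for it in recent
# versions of google-cloud-compute.
from google.cloud import compute_v1

PROJECT = "my-project"  # hypothetical project ID
MIG = f"projects/{PROJECT}/zones/us-central1-a/instanceGroups/web-mig"
HEALTH_CHECK = f"projects/{PROJECT}/global/healthChecks/http-basic-check"

backend = compute_v1.BackendService(
    name="web-backend",
    load_balancing_scheme="EXTERNAL",
    protocol="HTTP",
    health_checks=[HEALTH_CHECK],
    backends=[compute_v1.Backend(group=MIG)],
)
compute_v1.BackendServicesClient().insert(
    project=PROJECT, backend_service_resource=backend
).result()

url_map = compute_v1.UrlMap(
    name="web-map",  # Layer 7 routing; default route only in this sketch
    default_service=f"projects/{PROJECT}/global/backendServices/web-backend",
)
compute_v1.UrlMapsClient().insert(
    project=PROJECT, url_map_resource=url_map
).result()

proxy = compute_v1.TargetHttpProxy(
    name="web-proxy",
    url_map=f"projects/{PROJECT}/global/urlMaps/web-map",
)
compute_v1.TargetHttpProxiesClient().insert(
    project=PROJECT, target_http_proxy_resource=proxy
).result()

rule = compute_v1.ForwardingRule(
    name="web-rule",  # this is where the global anycast VIP lives
    target=f"projects/{PROJECT}/global/targetHttpProxies/web-proxy",
    port_range="80",
    load_balancing_scheme="EXTERNAL",
)
compute_v1.GlobalForwardingRulesClient().insert(
    project=PROJECT, forwarding_rule_resource=rule
).result()
```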
The internal load balancing setup works the same way, except that the forwarding rule points directly to a backend service. The forwarding rule has the virtual IP address, the protocol, and up to five ports.

How to secure your application with Cloud Load Balancing

As a best practice, run SSL everywhere. With HTTPS and SSL proxy load balancing, you can use managed certificates, for which Google takes care of provisioning and managing the SSL certificate lifecycle. Cloud Load Balancing supports multiple SSL certificates, enabling you to serve multiple domains using the same load balancer IP address and port. It absorbs and dissipates Layer 3 and Layer 4 volumetric attacks across Google’s global load balancing infrastructure. Additionally, with Cloud Armor, you can protect against Layer 3 to Layer 7 application-level attacks. By using Identity-Aware Proxy and firewalls, you can authenticate and authorize access to backend services.

How to choose the right load balancing option

When deciding which load balancing option is right for your use case, consider factors such as internal vs. external, global vs. regional, and the type of traffic (HTTPS, TLS, or UDP). If you are looking to reduce latency, improve performance, enhance security, and lower costs for your backend systems, check out Cloud Load Balancing. It is easy to deploy in just a few clicks: simply set up the frontend and backends associated with the global VIP and you are good to go! For a more in-depth look at the service, check out the documentation. For more #GCPSketchnote, follow the GitHub repo. For similar cloud content, follow me on Twitter @pvergadia and keep an eye on thecloudgirl.dev.

Related article: Global HTTP(S) Load Balancing and CDN now support serverless compute
Source: Google Cloud Platform

Amazon EC2 Fleet instant mode now supports targeted Amazon EC2 On-Demand Capacity Reservations

Starting today, you can use EC2 Fleet with targeted On-Demand Capacity Reservations. On-Demand Capacity Reservations let you reserve compute capacity for your Amazon EC2 instances in a specific Availability Zone for any duration. With targeted Capacity Reservations, instances must explicitly target the Capacity Reservation in order to run in the reserved capacity. Previously, there was no way to use targeted Capacity Reservations when launching an EC2 Fleet.
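For illustration, a minimal boto3 sketch of an instant-mode EC2 Fleet that runs in a targeted Capacity Reservation might look like this; the launch template name, AMI, and reservation ID are hypothetical placeholders:

```python
# Minimal sketch: launch an instant-mode EC2 Fleet whose launch template
# targets a specific On-Demand Capacity Reservation.
import boto3

ec2 = boto3.client("ec2")

# Launch template whose data targets the (placeholder) reservation.
ec2.create_launch_template(
    LaunchTemplateName="cr-targeted",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",   # placeholder AMI
        "InstanceType": "m5.large",
        "CapacityReservationSpecification": {
            "CapacityReservationTarget": {
                "CapacityReservationId": "cr-0123456789abcdef0"  # placeholder
            }
        },
    },
)

# Instant-mode fleets launch synchronously and return the instances.
response = ec2.create_fleet(
    Type="instant",
    LaunchTemplateConfigs=[{
        "LaunchTemplateSpecification": {
            "LaunchTemplateName": "cr-targeted",
            "Version": "$Latest",
        }
    }],
    TargetCapacitySpecification={
        "TotalTargetCapacity": 2,
        "DefaultTargetCapacityType": "on-demand",
    },
)
print(response.get("Instances", []))
```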
Source: aws.amazon.com

Amazon Lex announces utterance statistics for bots built with the Lex V2 console and API

Amazon Lex is a service for building conversational interfaces for voice and text in any application. Starting today, Amazon Lex provides utterance statistics through the Amazon Lex V2 console and API. You can now use utterance statistics to optimize bots built with the Lex V2 console and APIs, further improving the conversational experience for your users. With this launch, you can view and analyze information about the utterances processed by your bot. This information can be used to improve your bot’s performance by adding new utterances to existing intents, and can help you discover new intents the bot could serve. Utterance statistics also let you compare the performance of multiple versions of a bot.
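As an illustration, a minimal boto3 sketch for retrieving utterance statistics through the Lex V2 API (the ListAggregatedUtterances operation) might look like this; the bot, alias, and locale IDs are placeholders:

```python
# Minimal sketch: pull utterance statistics for a Lex V2 bot and surface
# the most frequently missed utterances (candidates for new intents or
# additional training phrases).
import boto3

lex = boto3.client("lexv2-models")

resp = lex.list_aggregated_utterances(
    botId="BOT1234567",       # placeholder bot ID
    botAliasId="TSTALIASID",  # placeholder alias ID
    localeId="en_US",
    aggregationDuration={
        "relativeAggregationDuration": {
            "timeDimension": "Weeks",
            "timeValue": 1,
        }
    },
    sortBy={"attribute": "MissedCount", "order": "Descending"},
)

for summary in resp["aggregatedUtterancesSummaries"]:
    print(summary["utterance"], summary["hitCount"], summary["missedCount"])
```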
Source: aws.amazon.com

Amazon Lex is now available in the Asia Pacific (Seoul) and Africa (Cape Town) Regions

Starting today, Amazon Lex is available in the Asia Pacific (Seoul) and Africa (Cape Town) Regions. Amazon Lex is a service for building conversational interfaces for voice and text in any application. Amazon Lex combines advanced deep learning capabilities in automatic speech recognition (ASR) for converting speech to text with natural language understanding (NLU) for recognizing the intent of the text, enabling you to build applications with engaging user experiences and lifelike interactions. With Amazon Lex, you can easily build sophisticated natural-language conversational bots (“chatbots”), virtual agents, and IVR systems.
Source: aws.amazon.com