Calling All Creators: Showcase Your Art with a Discounted .art Domain and a New Website Theme

This month, we have an exciting announcement that should appeal to all creators, whether you work in traditional artistic mediums or experiment with newer forms like digital, crypto, and VR art, or NFTs.

All new .art domain registrations are on sale through September 30, 2022. You can now secure a great domain name that reflects your artistic identity and strengthens your creative brand for just $6 USD for the first year, which is more than 50% off.

We have also recently launched several brand new themes to showcase your art and give your site a fresh look. 

Get started with .art today

Your Website as a Creative Hub

As an artist, an important part of expressing yourself is being able to control the way your work and brand are presented online. Having your own site is a great way to create and sustain your brand, retain control over your content, and present it in the way you want it to be seen.

While social media will always remain a great tool to reach your audience and get quick and direct feedback, your own website should serve as your creative hub. It should function as a sort of digital business card and can also be your online store.

.art Domains: A Strong Digital Identity for Creators

A custom domain name at WordPress.com offers you the opportunity to use your own name or any name that describes your artistic identity for your website’s address to build or enhance your online presence. Choosing .art defines you as an artist before anyone even visits your website. 

Having your own site with a domain name that reflects your artistic identity also means you don’t need to worry about aligning all your media platform profile names and handles since your site can serve as a one-stop shop with links to all of your social media profiles.

Choosing a Theme to Showcase Your Work

After you find a domain, you need a great website to show your work to the world. Check out some of our newest themes designed for artists, including Heiwa, Appleton, and Pendant.

Appleton theme for WordPress.com.

Heiwa is a great choice for a clean and elegant theme with sophisticated typography. If displaying a portfolio is what you’re looking for, check out Appleton. And last but not least, Pendant offers a dark background, large hero image, and serif headings to create a contemporary look.

Get Your .art Domain Today!

Head over to WordPress.com and get your .art domain today for just $6 USD for the first year: 

Get started with .art today

Source: RedHat Stack

Thank you Partners for three years of growth and winning together

Congratulations to our fast-growing ecosystem of global partners on three years of commitment to Partner Advantage, underscored by great collaboration, high energy, innovative ideas, and transformative impact. Together we’ve leveraged our program to drive growth and customer satisfaction. Year to date in 2022, the number of trained experts at our partner organizations (developers, technical staff, certifications, and solutions) has grown more than 140% year over year. This has translated into thousands of happy customers, many of whose stories are available to read in our Partner Directory. Each of you continues to inspire our shared customers and all of us at Google Cloud. And we are only getting started!

We are hard at work making sure every aspect of your business with Google Cloud runs smoothly, is easy to navigate, and is profitable. So what’s in store for 2023? Here’s a sneak peek: expect to see more activity and focus around our Differentiation Journey as a vehicle for driving your growth and success. This includes encouraging partners to offer more in the area of high-value and repeatable services, where the opportunity is large and growing fast. You can learn more about the global economic impact our partners are having in this blog post. You’ll also see Partner Advantage focusing more on solutions and customer transformation, all of which will include corresponding incentives, new benefits, and more features.

Thank you again for your commitment and hard work. It’s been a fantastic three years of amazing opportunity and growth. Not a partner yet? Start your journey today! The best is yet to come!

-Nina Harding
Source: Google Cloud Platform

Google + Mandiant: Transforming Security Operations and Incident Response

Over the past two decades, Google has innovated to build some of the largest and most secure computing systems in the world. This scale requires us to deliver pioneering approaches to cloud security, which we pass on to our Google Cloud customers. We are committed to solving hard security problems like only Google can, at the tip of the spear of innovation and threat intelligence.

Today we’re excited to share the next step in this journey with the completion of our acquisition of Mandiant, a leader in dynamic cyber defense, threat intelligence, and incident response services. Mandiant shares our cybersecurity vision and will join Google Cloud to help organizations improve their threat, incident, and exposure management.

Combining Google Cloud’s existing security portfolio with Mandiant’s leading cyber threat intelligence will allow us to deliver a security operations suite to help enterprises globally stay protected at every stage of the security lifecycle. With the scale of Google’s data processing, novel analytics approaches with AI and machine learning, and a focus on eliminating entire classes of threats, Google Cloud and Mandiant will help organizations reinvent security to meet the requirements of our rapidly changing world.

We will retain the Mandiant brand and continue Mandiant’s mission to make every organization secure from cyber threats and confident in their readiness.

Context and threat intelligence from the frontlines

Our goal is to democratize security operations with access to the best threat intelligence and built-in threat detections and responses. Ultimately, we hope to shift the industry to a more proactive approach focused on modernizing security operations workflows, personnel, and underlying technologies to achieve an autonomic state, where threat management functions can scale as customers’ needs change and as threats evolve.

Today, Google Cloud security customers use our cloud infrastructure to ingest, analyze, and retain all their security telemetry across multicloud and on-premises environments. By leveraging our sub-second search across petabytes of information, combined with security orchestration, automation, and response capabilities, our customers can spend more time defending their organizations. The addition of Mandiant Threat Intelligence, compiled by a team of security and intelligence experts spread across 22 countries who serve customers located in 80 countries, will give security practitioners greater visibility and expertise from the frontlines. Mandiant’s experience detecting and responding to sophisticated cyber threat actors will offer Google Cloud customers actionable insights into the threats that matter to their businesses right now. We will continue to share groundbreaking Mandiant threat research to help support organizations, even those that don’t run on Google Cloud.

Advancing shared fate for security operations

Google Cloud operates in a shared fate model, taking an active stake in the security posture of our customers. For security operations, that means helping organizations find and validate potential security issues before they become an incident. Detecting, investigating, and responding to threats is only part of better cyber risk management. It’s also crucial to understand what an organization looks like from an attacker’s perspective and whether an organization’s cybersecurity controls are as effective as expected. By adding Mandiant’s attack surface management capabilities to Google Cloud’s portfolio, organizations will be able to continually monitor assets for exposures, enabling intelligence and red teams to move security programs from reactive to proactive by understanding what’s vulnerable, misconfigured, and exposed. Once an organization’s attack surface is understood, validating existing security controls is critical. With Mandiant Security Validation, organizations will be able to continuously validate and measure the effectiveness of their cybersecurity controls across cloud and on-premises environments.

Transforming security operations and incident response

Security leaders and their teams often lack the resources and expertise required to keep pace with today’s ever-changing threats. Organizations already harness Google’s security tools, expert advice, and rich partner ecosystem to evolve their security programs. Google’s Autonomic Security Operations also serves as a prescriptive solution to guide our customers through this modernization journey. With the addition of Mandiant to the Google Cloud family, we can now offer proven global expertise in comprehensive incident response, strategic readiness, and technical assurance to help organizations mitigate threats and reduce business risk before, during, and after an incident.

In addition, Google Cloud’s security operations suite will continue to provide a central point of intelligence, analysis, and operations across on-premises environments, Google Cloud, and other cloud providers. Google Cloud is also deeply committed to supporting our technology and solution partners, and this acquisition will enable system integrators, resellers, and managed security service providers to offer broader solutions to customers.

Comments on the news

“The power of stronger partnerships across the cybersecurity ecosystem is critical to driving value for clients and protecting industries around the globe. The combination of Google Cloud and Mandiant and their commitment to multicloud will further support increased collaboration, driving innovation across the cybersecurity industry and augmenting threat research capabilities. We look forward to working with them on this mission.” – Paolo Dal Cin, Global Lead, Accenture Security

“Google’s acquisition of Mandiant, a leader in security advisory, consulting, and incident response services, will allow Google Cloud to deliver an end-to-end security operations suite with even greater capabilities and services to support customers in their security transformation across cloud and on-premises environments.” – Craig Robinson, Research VP, Security Services, IDC

“Bringing together Mandiant and Google Cloud, two long-time cybersecurity leaders, will advance how companies identify and defend against threats. We look forward to the impact of this acquisition, both for the security industry and the protection of our customers.” – Andy Schworer, Director, Cyber Defense Engineering, Uber

We welcome Mandiant to the Google Cloud team, and together we look forward to helping security teams achieve so much more in defense of their organizations. You can read our release and Kevin Mandia’s blog for more on this exciting news.
Source: Google Cloud Platform

How Google scales ad personalization with Bigtable

Cloud Bigtable is a popular and widely used key-value database available on Google Cloud. The service provides scale elasticity, cost efficiency, excellent performance characteristics, and a 99.999% availability SLA. This has led to massive adoption, with thousands of customers trusting Bigtable to run a variety of their mission-critical workloads.

Bigtable has been in continuous production use at Google for more than 15 years. It processes more than 5 billion requests per second at peak and has more than 10 exabytes of data under management, making it one of the largest semi-structured data storage services at Google. One of the key use cases for Bigtable at Google is ad personalization, and this post describes the central role that Bigtable plays in it.

Ad personalization

Ad personalization aims to improve user experience by presenting topical and relevant ad content. For example, I often watch bread-making videos on YouTube. If ad personalization is enabled in my ad settings, my viewing history could indicate to YouTube that I’m interested in baking as a topic and would potentially be interested in ad content related to baking products.

Ad personalization requires large-scale data processing in near real time, with strict controls for user data handling and retention. System availability needs to be high, and serving latencies need to be low, because of the narrow window within which decisions must be made about what ad content to retrieve and serve. Sub-optimal serving decisions (e.g., falling back to generic ad content) could impact user experience, and ad economics requires infrastructure costs to be kept as low as possible.

Google’s ad personalization platform provides frameworks to develop and deploy machine learning models for relevance and ranking of ad content. The platform supports both real-time and batch personalization.
The platform is built on Bigtable, allowing Google products to access data sources for ad personalization in a secure manner that is both privacy and policy compliant, all while honoring users’ decisions about what data they want to provide to Google. The output from personalization pipelines, such as advertising profiles, is stored back in Bigtable for further consumption. The ad serving stack retrieves these advertising profiles to drive the next set of ad serving decisions.

Some of the storage requirements of the personalization platform include:

Very high throughput access for batch and near real-time personalization
Low latency (<20 ms at p99) lookups for reads on the critical path for ad serving
Fast (on the order of seconds) incremental updates of advertising models, in order to reduce personalization delay

Bigtable

Bigtable’s versatility in supporting both low-cost, high-throughput access to data for offline personalization and consistent low-latency access for online data serving makes it an excellent fit for ads workloads. Personalization at Google scale requires a very large storage footprint. Bigtable’s scalability, performance consistency, and the low cost required to meet a given performance curve are key differentiators for these workloads.

Data model

The personalization platform stores objects in Bigtable as serialized protobufs keyed by object IDs. Typical data sizes are less than 1 MB, and serving latency is less than 20 ms at p99. Data is organized into corpora, which correspond to distinct categories of data; a corpus maps to a replicated Bigtable. Within a corpus, data is organized into DataTypes, logical groupings of data. Features, embeddings, and different flavors of advertising profiles are stored as DataTypes, which map to Bigtable column families. DataTypes are defined in schemas that describe the proto structure of the data and additional metadata indicating ownership and provenance. SubTypes map to Bigtable columns and are free-form.
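The mapping just described (corpus to replicated table, DataType to column family, SubType to column) can be sketched with a minimal in-memory model. This is purely illustrative; the names and byte values below are made up and are not from the real platform:

```python
# Hypothetical in-memory stand-in for one corpus (a replicated Bigtable).
# Nesting mirrors the mapping above: object ID (row key) -> DataType
# (column family) -> SubType (column) -> serialized protobuf bytes.
corpus = {
    "objid-42": {
        "embeddings": {                   # a DataType
            "topic_vector": b"\x08\x01",  # a SubType holding proto bytes
        },
        "ads_profiles": {                 # another DataType
            "display": b"\x12\x05hello",
        },
    },
}

def lookup(corpus, object_id, data_type, sub_type):
    """Resolve one value by (object ID, DataType, SubType);
    timestamps are omitted for brevity."""
    return corpus[object_id][data_type][sub_type]

value = lookup(corpus, "objid-42", "embeddings", "topic_vector")
```

A real client would deserialize the returned bytes with the protobuf schema registered for that DataType.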
Each row of data is uniquely identified by a RowID, which is based on the object ID. The personalization API identifies individual values by RowID (row key), DataType (column family), SubType (column part), and timestamp.

Consistency

The default consistency mode for operations is eventual. In this mode, data is retrieved from the Bigtable replica nearest to the user, providing the lowest median and tail latency. Reads and writes to a single Bigtable replica are consistent. If there are multiple replicas of Bigtable in a region, traffic spillover across regions is more likely. To improve the likelihood of read-after-write consistency, the personalization platform uses a notion of row affinity: if there are multiple replicas in a region, one replica is preferentially selected for any given row, based on a hash of the RowID.

For lookups with stricter consistency requirements, the platform first attempts to read from the nearest replica and requests that Bigtable return the current low watermark (LWM) for each replica. If the nearest replica happens to be the replica where the writes originated, or if the LWMs indicate that replication has caught up to the necessary timestamp, then the service returns a consistent response. If replication has not caught up, the service issues a second lookup, this one targeted at the Bigtable replica where the writes originated. That replica could be distant and the request could be slow, so while waiting for a response, the platform may issue failover lookups to other replicas in case replication has caught up at those replicas.

Bigtable replication

The ads personalization workloads use a Bigtable replication topology with more than 20 replicas, spread across four continents. Replication helps address the high availability needs of ad serving.
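The row-affinity and low-watermark read path described above can be sketched as follows. This is a simplified hypothetical model, not Bigtable's actual implementation: the replica names, the choice of hash, and the injected read function are all assumptions for illustration.

```python
import hashlib

def preferred_replica(row_id, region_replicas):
    """Deterministically pick one replica in a region from a hash of the
    row ID, so requests for the same row favor the same replica."""
    digest = hashlib.sha256(row_id.encode("utf-8")).digest()
    index = int.from_bytes(digest[:8], "big") % len(region_replicas)
    return region_replicas[index]

def consistent_read(row_id, write_replica, write_ts, replicas, lwm, read_fn):
    """Stricter read path: try the affinity-preferred nearest replica,
    falling back to the replica where writes originated when that
    replica's low watermark (LWM) shows replication hasn't caught up."""
    nearest = preferred_replica(row_id, replicas)
    if nearest == write_replica or lwm[nearest] >= write_ts:
        return read_fn(nearest, row_id)    # consistent and low latency
    return read_fn(write_replica, row_id)  # possibly distant, but consistent
```

Because reads and writes for a given row favor the same replica, the cheap first branch becomes the common case, which is exactly the point of row affinity.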
Bigtable’s zonal monthly uptime percentage is in excess of 99.9%, and replication coupled with a multi-cluster routing policy allows for availability in excess of 99.999%. A globe-spanning topology allows for data placement close to users, minimizing serving latencies. However, it also comes with challenges, such as variability in network link costs and throughputs. Bigtable uses minimum-spanning-tree-based routing algorithms and bandwidth-conserving proxy replicas to help reduce network costs.

For ad personalization, reducing Bigtable replication delay is key to lowering the personalization delay (the time between a user’s action and when that action has been incorporated into advertising models to show more relevant ads to the user). Faster replication is preferred, but we also need to balance serving traffic against replication traffic and make sure low-latency user-data serving is not disrupted by incoming or outgoing replication traffic flows. Under the hood, Bigtable implements complex flow control and priority boost mechanisms to manage global traffic flows and to balance serving and replication traffic priorities.

Workload isolation

Ad personalization batch workloads are isolated from serving workloads by pinning a given set of workloads onto certain replicas; some Bigtable replicas exclusively drive personalization pipelines, while others drive user-data serving. This model allows for a continuous and near real-time feedback loop between serving systems and offline personalization pipelines, while protecting the two workloads from contending with each other. For Cloud Bigtable users, app profiles and cluster routing policies provide a way to confine and pin workloads to specific replicas to achieve coarse-grained isolation.

Data residency

By default, data is replicated to every replica, often spread out globally, which is wasteful for data that is only accessed regionally.
Regionalization saves on storage and replication costs by confining data to the region where it is most likely to be accessed. It is also vital for compliance with regulations mandating that data pertaining to certain subjects be physically stored within a given geographical area. The location of data can be determined either implicitly from the access location of requests or through location metadata and other product signals. Once the location for a user is determined, it is stored in a location metadata table that points to the Bigtable replicas that read requests should be routed to. Migration of data based on row-placement policies happens in the background, without downtime or serving performance regressions.

Conclusion

In this blog post, we looked at how Bigtable is used within Google to support an important use case: modeling user intent for ad personalization. Over the past decade, Bigtable has scaled as Google’s personalization needs have grown by orders of magnitude. For large-scale personalization workloads, Bigtable offers low-cost storage with excellent performance characteristics. It seamlessly handles global traffic flows with simple user configurations.
Its ease in handling both low-latency serving and high-throughput batch computations makes it an excellent option for lambda-style data processing pipelines. We continue to invest heavily to further lower costs, improve performance, and bring new features to make Bigtable an even better choice for personalization workloads.

Learn more

To get started with Bigtable, try it out with a Qwiklab and learn more about the product here.

Acknowledgements

We’d like to thank Ashish Awasthi, Ashish Chopra, Jay Wylie, Phaneendhar Vemuru, Bora Beran, Elijah Lawal, Sean Rhee, and other Googlers for their valuable feedback and suggestions.
Source: Google Cloud Platform

Amazon SageMaker Autopilot now offers options for custom data splits and an improved experience for creating an AutoML experiment

SageMaker Autopilot automatically builds, trains, and tunes the best machine learning models based on your data, while giving you full control and visibility. Starting today, when creating an Autopilot experiment to train a machine learning model, you can customize how the data used for training and validating the models is split. By default, Autopilot splits the provided dataset into an 80-20 percent split, reserved for training and validation respectively. With this release, you can adjust the percentage split between training and validation data, or alternatively provide two datasets, one for training and one for validation. This feature is available both in Amazon SageMaker Studio and in the SageMaker Autopilot API.
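As a rough illustration of the split semantics (not the Autopilot implementation itself; the function name and seed handling are invented for this sketch), the default behavior corresponds to a validation fraction of 0.2, which is now adjustable:

```python
import random

def train_validation_split(rows, validation_fraction=0.2, seed=0):
    """Shuffle rows and split them into (training, validation) lists;
    the default fraction mirrors Autopilot's 80-20 percent split."""
    shuffled = list(rows)
    random.Random(seed).shuffle(shuffled)
    cut = round(len(shuffled) * (1 - validation_fraction))
    return shuffled[:cut], shuffled[cut:]

train, val = train_validation_split(range(100))           # default 80/20
train_c, val_c = train_validation_split(range(100), 0.3)  # custom 70/30
```

The alternative described above, supplying two separate datasets, skips this step entirely because the split is explicit.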
Source: aws.amazon.com

AWS Fargate is now available in the Middle East (UAE) Region

AWS Fargate, the serverless compute engine for Amazon Elastic Container Service (ECS), is now available in the Middle East (UAE) Region. With Fargate, customers can deploy and manage containerized applications without having to manage the underlying infrastructure. Fargate makes it easier to scale applications and helps improve security through application isolation by design.
Source: aws.amazon.com

AWS GameKit adds Unity support

We are excited to announce that AWS GameKit is now available for the Unity game engine. AWS GameKit lets game developers deploy and customize game backend features directly from the game engine. AWS GameKit launched on March 23, 2022 with support for Unreal Engine. With today’s release for Unity, game developers can integrate the following cloud-based game features into Win64, macOS, Android, or iOS games on both the Unreal and Unity engines with just a few clicks:

Identity and authentication: Create unique identities for each player and in-game sign-in features; verify player identities and manage game sessions.
Achievements: Create and track game-related rewards that players can earn.
Game state cloud saving: Store a synchronized copy of game progress in AWS, so players can resume play across sessions.
User gameplay data: Store game-related data for each player, such as inventory, statistics, and cross-play persistence.

Source: aws.amazon.com