Introducing fine-grained access control for Cloud Spanner: A new way to protect your data in Spanner

As Google Cloud’s fully managed relational database that offers unlimited scale, strong consistency, and availability of up to 99.999%, Cloud Spanner powers applications of all sizes in industries such as financial services, gaming, retail, and healthcare. Today, we’re excited to announce the preview of fine-grained access control for Spanner, which lets you authorize access to Spanner data at the table and column level. With fine-grained access control, it’s now easier than ever to protect your transactional data in Spanner and ensure appropriate controls are in place when granting access to data. In this post, we’ll take a look at Spanner’s current access control model, examine the use cases for fine-grained access control, and see how to use this new capability in your Spanner applications.

Spanner’s access control model today

Spanner provides access control with Identity and Access Management (IAM). IAM provides a simple and consistent access control interface for all Google Cloud services. With capabilities such as a built-in audit trail and context-aware access, IAM makes it easy to grant permissions at the instance and database level to Spanner users. The IAM model has three main parts:

Role. A role is a collection of permissions. In Spanner, these permissions allow you to perform specific actions on Spanner projects, instances, or databases. For example, spanner.instances.create lets you create a new instance, and spanner.databases.select lets you execute a SQL SELECT statement on a database. For convenience, Spanner comes with a set of predefined roles such as roles/spanner.databaseUser, which contains the permissions spanner.databases.read and spanner.databases.write, but you can define your own custom roles, too.

IAM principal. A principal can be a Google Account (for end users), a service account (for applications and compute workloads), a Google group, or a Google Workspace account that can access a resource.
Each principal has its own identifier, which is typically an email address.

Policy. The allow policy is the collection of role bindings that bind one or more principals to individual roles. For example, you can bind roles/spanner.databaseReader to the IAM principal user@abc.xyz.

The need for more robust access controls

There are a number of use cases where you may need to define roles at a level more granular than the database level. Let’s look at a few of these use cases.

Ledger applications

Ledgers, which are useful for inventory management, cryptocurrency, and banking applications, let you look at inventory levels and apply updates such as credits or debits to existing balances. In a ledger application, you can look at balances, add inventory, and remove inventory. You can’t go back and adjust last week’s inventory level to 500 widgets. This corresponds to having SELECT privileges (to look at balances) and INSERT privileges (to add or remove inventory), but not UPDATE or DELETE privileges.

Analytics users

Analytics users often need SELECT access to a few tables in a Spanner database, but should not have access to all tables in the database. Nor should they have INSERT, UPDATE, or DELETE access to anything in the database. This corresponds to having SELECT privileges on a set of tables – but not all tables – in the database.

Service accounts

A service account is a special type of Google account intended to represent a non-human user that needs to authenticate and be authorized to access data in Google Cloud. Each Spanner service account likely needs its own set of privileges on specific tables in the database. For example, consider a ride-sharing application that has service accounts for drivers and passengers.
The driver service account likely needs SELECT privileges on specific columns of the passenger’s profile table (e.g., the user’s first name, profile picture, etc.), but should not be allowed to update the passenger’s email address or other personal information.

The basics of fine-grained access control in Spanner

If you’re familiar with role-based access control in other relational databases, you’re already familiar with the important concepts of fine-grained access control in Spanner. Let’s review the model for fine-grained access control in Spanner:

Database privilege. Spanner now supports four types of privileges: SELECT, INSERT, UPDATE, and DELETE. All four privileges can be assigned to tables, and SELECT, INSERT, and UPDATE can also be applied to individual columns.

Database role. Database roles are collections of privileges. For example, you can have a role called inventory_admin that has SELECT and INSERT privileges on the Inventory_Transactions table and SELECT, INSERT, UPDATE, and DELETE privileges on the Products table.

Because Spanner relies on IAM for identity and access management, you assign database roles to the appropriate IAM principals by managing conditional role bindings. Let’s look at an example. Suppose we want to set up the IAM principal user@abc.xyz with fine-grained access to two tables: Inventory_Transactions and Products. To do this, we’ll create a database role called inventory_admin and grant it to user@abc.xyz.

Step 1: Set up the IAM principal as a Cloud Spanner fine-grained access user

Until today, if you wanted to grant database-level access to an IAM principal, you’d grant them either the roles/spanner.databaseUser role or some of the privileges bundled in that role.
Now, with fine-grained access control, you can instead grant IAM principals the Cloud Spanner Fine-grained Access User role (roles/spanner.fineGrainedAccessUser). Cloud Spanner Fine-grained Access User allows the user to make API calls to the database, but does not confer any data access privileges beyond those granted to the public role. By default, the public role has no privileges. To access data, a fine-grained access user must specify the database role that they want to act as.

Step 2: Create the database role

To create a role, run the standard SQL CREATE ROLE command:

CREATE ROLE inventory_admin;

The newly created database role can be referenced in IAM policies via the resource URI projects/<project_name>/instances/<instance_name>/databases/<database_name>/databaseRoles/inventory_admin. Later on, we’ll show how to configure an IAM policy that allows a specific IAM principal to act as this database role.

Step 3: Assign privileges to the database role

Next, assign the appropriate privileges to this role:

GRANT SELECT, INSERT
ON TABLE Inventory_Transactions
TO ROLE inventory_admin;

GRANT SELECT, INSERT, UPDATE, DELETE
ON TABLE Products
TO ROLE inventory_admin;

While you can run these statements individually, we recommend that you issue Cloud Spanner DDL statements in a single batch.

Step 4: Assign the role to an IAM principal

Finally, to allow user@abc.xyz to act as the database role inventory_admin, grant Cloud Spanner Database Role User to user@abc.xyz with the database role as a condition.
To do this, open the database’s IAM Info Panel and add the following conditions using the IAM condition editor:

resource.type == "spanner.googleapis.com/DatabaseRole" &&
resource.name.endsWith("/inventory_admin")

You can also add other conditions to further restrict access to this database role, such as scheduling access by time of day or day of week, or setting an expiration date.

Transitioning to fine-grained access control

When you’re transitioning to fine-grained access control, you might want to assign both roles/spanner.databaseUser and roles/spanner.fineGrainedAccessUser to an IAM principal. When you’re ready to switch that IAM principal over to fine-grained permissions, simply revoke the databaseUser role from that principal.

Using the role as an end user

When an end user logs into Spanner, they can access the database using the role they’ve been granted, through the Google Cloud console or gcloud commands. The Go, Java, Node.js, and Python client libraries are also supported, with support for more client libraries coming soon.

Learn more

With fine-grained access control, you can set up varying degrees of access to your Spanner databases based on the user, their role, or the organization to which they belong. In preview today, fine-grained access control is available to all Spanner customers at no additional charge. To get started with fine-grained access control in Spanner, check out About fine-grained access control, or access it directly from Write DDL statements in the Google Cloud console. To get started with Spanner, create an instance, try it out for free with a free trial instance, or take a Spanner Qwiklab.

Related Article: Cloud Spanner myths busted – this blog covers the 7 most common myths about Spanner and the truth behind each one. Read Article
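The role-and-privilege model described above can be sketched in a few lines of Python. This is an illustrative model only, not the Spanner client API; the class and method names are hypothetical. A database role is simply a named set of (privilege, table) grants, and an access check passes only if the acting role holds the matching grant:

```python
# Illustrative model of fine-grained access control concepts.
# NOT the Spanner API; DatabaseRole, grant(), and allows() are hypothetical names.

class DatabaseRole:
    """A database role: a named collection of privileges on tables."""

    def __init__(self, name):
        self.name = name
        self.grants = set()  # set of (privilege, table) pairs

    def grant(self, privileges, table):
        """Record a GRANT <privileges> ON TABLE <table> TO ROLE <name>."""
        for privilege in privileges:
            self.grants.add((privilege, table))

    def allows(self, privilege, table):
        """An operation is permitted only if the role holds that exact grant."""
        return (privilege, table) in self.grants


# Mirror the GRANT statements from Step 3:
inventory_admin = DatabaseRole("inventory_admin")
inventory_admin.grant({"SELECT", "INSERT"}, "Inventory_Transactions")
inventory_admin.grant({"SELECT", "INSERT", "UPDATE", "DELETE"}, "Products")

print(inventory_admin.allows("INSERT", "Inventory_Transactions"))  # True
print(inventory_admin.allows("DELETE", "Inventory_Transactions"))  # False: ledger rows are append-only
print(inventory_admin.allows("UPDATE", "Products"))                # True
```

Note how the ledger use case from earlier falls out naturally: inventory_admin can read and append to Inventory_Transactions but can never update or delete existing rows.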
Source: Google Cloud Platform

Come for the sample app, stay for the main course: Cloud Spanner free trial instances

Cloud Spanner is a fully managed relational database that offers unlimited scale, strong consistency, and industry-leading high availability of up to 99.999%. In our ongoing quest to make Spanner more accessible to every developer and workload, we are introducing Spanner free trial instances. Now you can learn and test drive Spanner at no cost for 90 days using a trial instance that comes with 10 GB of storage capacity.

At this point you might be thinking: well, that’s all well and good, but what can I actually do with the Spanner free trial instance? And how do I actually start? We’re glad you asked.

To help you get the best value out of this free trial instance, we built a guided experience in the Cloud console that walks you through some basic tasks with Spanner, such as creating and querying a database. And since databases aren’t very useful without any data in them, we provide a sample data set so you can get a feel for how you might deploy Spanner in a common scenario, such as a bank’s financial application. Along the way, we also highlight particularly relevant articles and videos for you to learn more about Spanner’s full range of capabilities. To get started, create a free trial instance in one of the available regions.

Create an instance

Once you’ve created your Spanner free trial instance, you’ll see a custom guide featuring Spanner’s core tasks – you’ve already completed one of them! Now that you’ve created an instance, you can choose whether to create your own database, or click the “Launch walkthrough” button to follow a step-by-step tutorial that explores some Spanner features and creates your first database.

Create a database with sample data

Once you complete the tutorial, you’ll have an empty database ready for data and the sample application.
We’ll teach you how to insert the sample data set and query it in the second tutorial, so be sure to complete the first one.

Query the database

As you progress through the second tutorial, you’ll be able to confirm that the finance application works by querying your data. You can continue to play around with this sample finance app, try it in another database dialect (it’s available in both Google Standard SQL and PostgreSQL), or clean it up and create some new databases of your own. Either way, your Spanner free trial instance is available to you at no cost for 90 days, and you can create up to 5 databases within it.

Get started

We’re incredibly excited to offer the Spanner free trial instance to customers at no cost for 90 days. Organizations across industries such as finance, gaming, healthcare, and retail have built their applications on Spanner to benefit from capabilities such as industry-leading high availability and unlimited scale. It is now possible for any developer or organization to try out Spanner at no cost. For more detailed instructions, check out our latest video demonstrating this experience. We hope you’ll enjoy this glimpse of what Spanner has to offer, and get inspired to build with Google Cloud. Get started today, and try Spanner for free.

Related Article: Try out Cloud Spanner databases at no cost with new free trial instances – create a 90-day Spanner free trial instance with 10 GB of storage at no cost. Try Spanner free. Read Article
Source: Google Cloud Platform

Migrate your most demanding enterprise PostgreSQL databases to AlloyDB for PostgreSQL with Database Migration Service

Earlier this year, we announced the launch of AlloyDB for PostgreSQL, a fully managed, PostgreSQL-compatible database for demanding, enterprise-grade transactional and analytical workloads. Companies across industries are already looking to AlloyDB to free themselves from traditional legacy proprietary databases and scale existing PostgreSQL workloads with no application changes. AlloyDB unlocks better scale, higher availability, and faster performance. In our performance tests, AlloyDB was more than four times faster than open-source PostgreSQL for transactional performance and up to 100 times faster for analytical queries. Full PostgreSQL compatibility makes it easy to take advantage of this technology. However, as our customers look to standardize on AlloyDB, they need a migration path that is easy to set up and use, requires no management overhead, and can be trusted to move data accurately and securely. In addition, it needs to run with minimal disruption to their applications.

PostgreSQL to AlloyDB support

Today, we’re excited to announce the preview of Database Migration Service (DMS) support for AlloyDB. With this announcement, you can use Database Migration Service to complete your migrations to AlloyDB from any PostgreSQL database – including on-premises databases, self-managed databases on Google Cloud, and cloud databases such as Amazon Aurora or Azure Database for PostgreSQL – in an easy-to-use, secure, and serverless manner. Several Google Cloud customers are looking to DMS as their path to AlloyDB adoption. For example, SenseData, a market leader in Latin America in the field of Customer Success, offers a platform created to improve the relationship between companies and customers. “At SenseData, we’ve built our customer success platform on PostgreSQL, and are looking to increase our platform performance and scale for the next phase of our growth,” said Paulo Souza, Co-Founder & CTO, SenseData.
“We have a mixed database workload, requiring both fast transactional performance and powerful analytical processing capabilities, and our initial testing of AlloyDB for PostgreSQL has given impressive results, with more than a 350% performance improvement in our initial workload, without any application changes. We’re looking forward to using Database Migration Service for an easy migration of multiple terabytes of data to AlloyDB.”

Database Migration Service has helped countless Google Cloud customers migrate their PostgreSQL, MySQL, and Oracle workloads to the cloud. Now, customers can use the same proven technology and user experience to migrate to AlloyDB. With DMS, migrations are:

Fast and easy: Because AlloyDB is a fully PostgreSQL-compatible database, migrations from PostgreSQL are considered “homogeneous,” with no schema conversions or other pre-migration steps required. Today, more than 85% of migrations using Database Migration Service are underway in less than an hour.

Reliable and complete: Database Migration Service migrations to AlloyDB utilize the native replication capabilities of PostgreSQL to maximize security, fidelity, and reliability.

Minimal downtime: DMS allows you to continuously replicate database changes from your source to AlloyDB and perform the cutover whenever you feel comfortable, ensuring minimal downtime and disruption to your applications.
Serverless: The serverless architecture of DMS means you don’t need to maintain or provision any migration-specific resources, and the migration will auto-scale with your data.

Migrating to AlloyDB using Database Migration Service

You can start your migration to AlloyDB by navigating to the Database Migration page in your Google Cloud console and creating a new migration job. Migrating is easy, with five simple steps:

1. Choose the database type you want to migrate, and see what actions you need to take to set up your source for a successful migration.
2. Create your source connection profile, which can later be reused for additional migrations.
3. Create an AlloyDB for PostgreSQL destination cluster that fits your business needs.
4. Define a connectivity method; DMS offers a guided connectivity path to help you establish connectivity.
5. Test your migration job and start it whenever you’re ready.

Once the migration job starts, DMS begins with an initial snapshot of your data, then continuously replicates new changes as they happen – and there you go: you have an AlloyDB cluster ready with all your source data.

Learn more and start your database journey

Get started with the new Database Migration Service for PostgreSQL to AlloyDB migrations. You can also get started with the previously announced Oracle to PostgreSQL migration previews; to see them in action, you can request access now. For more information to help you get started on your migration journey, head over to the documentation or start training with this Database Migration Service Qwiklab.

Related Article: Best practices for homogeneous database migrations – homogeneous database migrations, across compatible database engines, help improve app performance. See how to migrate databases to Google … Read Article
Source: Google Cloud Platform

Clarifying Misconceptions About Web3 and Its Relevance With Docker

This blog is the first in a two-part series. We’ll talk about the challenges of defining Web3 plus some interesting connections between Web3 and Docker.

Part two will highlight technical solutions and demonstrate how to use Docker and Web3 together.

We’ll build upon the presentation, “Docker and Web 3.0 — Using Docker to Utilize Decentralized Infrastructure & Build Decentralized Apps,” by JT Olio, Krista Spriggs, and Marton Elek from DockerCon 2022. However, you don’t have to view that session before reading this post.

What’s Web3, after all?

If you ask a group what Web3 is, you’ll likely receive a different answer from each person. The definition of Web3 causes a lot of confusion, but this lack of clarity also offers an opportunity. Since there’s no consensus, we can offer our own vision.

One problem is that many definitions are based on specific technologies, as opposed to goals:

“Web3 is an idea […] which incorporates concepts such as decentralization, blockchain technologies, and token-based economics” (Wikipedia)
“Web3 refers to a decentralized online ecosystem based on the blockchain.” (Gavin Wood)

There are three problems with defining Web3 by its technologies instead of (or in addition to) its high-level goals or visions. In general, these definitions confuse the “what” with the “how.” We’ll focus our Web3 definition on the “what” — and leave the “how” for a discussion of implementation technologies. Let’s discuss each issue in more detail.

Problem #1: it should be about “what” problems to solve instead of “how”

To start, most people aren’t really interested in “token-based economics.” But they can passionately critique the current internet (“Web2”) through many common questions:

Why’s it so hard to move between platforms and export or import our data? Why’s it so hard to own our data?
Why’s it so tricky to communicate with friends who use other social or messaging services?
Why can a service provider shut down my account without proper explanation or possibility of appeal? Most terms of service agreements can’t help in practice. They’re long and hard to understand, and nobody reads them (just envision the lengthy new terms for websites and user-data treatment stemming from GDPR regulations). In a dispute with a service provider, we’re at a disadvantage and less likely to win.
Why can’t we have better privacy? Full encryption for our data? Or the freedom to choose who can read or use our personal data, posts, and activities?
Why couldn’t we sell our content in a more flexible way? Are we really forced to accept high margins from central marketplaces to be successful?
How can we avoid being dependent on any one person or organization?
How can we ensure that our data and sensitive information are secured?

These are well-known problems. They’re also key usability questions — and ultimately the “what” that we need to solve. We’re not necessarily looking for new technologies like blockchains or NFTs. Instead, we want better services with improved security, privacy, control, sovereignty, economics, and so on. Blockchain technology, NFTs, federation, and the rest are only useful if they help us address these issues and enjoy better services. They are potential tools for “how” to solve the “what.”

What if we had an easier, fairer system for connecting artists with patrons and donors, to help fund their work? That’s just one example of how Web3 could help.

As a result, I believe Web3 should be defined as “the movement to improve the internet’s UX, including for — but not limited to — security, privacy, control, sovereignty, and economics.”

Problem #2: Blockchain, but not Web3?

We can use technologies in so many different ways. Blockchains can create a currency system with more sovereignty, control, and economics, but they can also support fraudulent projects. Since we’ve seen so much of that, it’s not surprising that many people are highly skeptical.

However, that skepticism is usually directed at unfair or fraudulent projects that use Web3’s core technologies (e.g., blockchain) to siphon money from people. It’s not usually directed at the big usability problems themselves.

Healthy skepticism can save us, but we at least need some cautious optimism. Always keep inventing and looking for better solutions. Maybe better technologies are required. Or, maybe using current technologies differently could best help us achieve the “how” of Web3.

Problem #3: Web3, but not blockchain?

We can also view the previous problem from the opposite perspective. It’s not just blockchains or NFTs that can help us solve the internet’s current challenges described in Problem #1. Some projects don’t use blockchain at all, yet qualify as Web3 because of the internet challenges they solve.

One good example is federation — one of the oldest ways of achieving decentralization. Our email system is still fairly decentralized, even if big players handle a significant proportion of email accounts. And this decentralization helped new players provide better privacy, security, or control.

Thankfully, there are newer, promising projects like Matrix, which is one of very few chat apps designed for federation from the ground up. How easy would communication be if all chat apps allowed federated message exchanges between providers? 

Docker and Web3

Since we’re here to talk about Docker, how can we connect everything to containers?

While there are multiple ways to build and deploy software, containers are usually involved on some level. Wherever we use technology, containers can probably help.

But, I believe there’s a fundamental, hidden connection between Docker and Web3. These three similarities are small, but together form a very interesting, common link.

Usability as a motivation

We first defined the Web3 movement based on the need to improve user experiences (privacy, control, security, etc.). Docker containers can provide the same benefits.

Containers quickly became popular because they solved real user problems. They gave developers reproducible environments, easy distribution, and just enough isolation.

Since day one, Docker’s been based on existing, proven technologies like namespace isolation or Linux kernel cgroups. By building upon leading technologies, Docker relieved many existing pain points.

Web3 is similar. We should pick the right technologies to achieve our goals. And luckily innovations like blockchains have become mature enough to support the projects where they’re needed.

Content-addressable world

One barrier to creating a fully decentralized system is creating globally unique, decentralized identifiers for all services and items. When somebody creates a new identifier, we must ensure it’s truly one of a kind.

There’s no easy fix, but blockchains can help. After all, chains are the central source of truth (agreed on by thousands of participants in a decentralized way). 

There’s another way to solve this problem. It’s very easy to choose a unique identifier if there’s only one option and the choice is obvious. For example, if content is identified by its hash, then the hash is the unique identifier. If the content is the same, the unique identifier (the hash itself) will always be the same, too.

One example is Git, which is made for distribution. Every commit is identified by its hash (over its metadata, pointers to parents, and pointers to file trees). This makes Git decentralization-friendly. While most repositories are hosted by big companies, it’s pretty easy to move content between providers — one of the problems we set out to solve earlier.

IPFS — as a decentralized content routing protocol — also pairs hashes with pieces to avoid any confusion between decentralized nodes. It also created a full ecosystem to define notation for different hashing types (multihash), or different data structures (IPLD).

We see exactly the same thing when we look at Docker containers! The digest acts as a content-based hash and can identify layers and manifests. This makes it easy to verify them and fetch them from different sources without confusion. Docker was designed to be decentralized from the get-go.
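The content-addressing idea behind Git, IPFS, and Docker digests is easy to demonstrate: identical bytes always hash to the same digest, so the digest itself can serve as a globally unique identifier that needs no central coordination. A minimal sketch in Python (the real systems differ in their exact hashing schemes; the `sha256:` prefix here simply mimics the notation of Docker digests):

```python
import hashlib

def content_id(data: bytes) -> str:
    """Derive an identifier from the content itself, like a layer digest."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

layer_a = b"FROM alpine\nRUN apk add curl\n"
layer_b = b"FROM alpine\nRUN apk add curl\n"  # same content, produced elsewhere
layer_c = b"FROM alpine\nRUN apk add wget\n"  # different content

# Same content -> same ID; two independent parties agree without a registry.
print(content_id(layer_a) == content_id(layer_b))  # True
# Different content -> different ID; no collision of names to arbitrate.
print(content_id(layer_a) == content_id(layer_c))  # False
```

This is why moving an image between registries is safe: any party can recompute the digest and verify that the bytes are exactly what the manifest promised.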

Federation

Content-based digests of container layers and manifests are what make Docker usable with any kind of registry.

This is a type of federation. Even if Docker Hub is available, it’s very easy to start new registries. There’s no vendor lock-in, and there’s no grueling process behind being listed on one single possible marketplace. Publishing and sharing new images is as painless as possible.

As we discussed above, I believe federation is one form of decentralization, and decentralization is one approach to getting what we need: better control and ownership. There are stances against federation, but I believe federation offers more benefits despite its complexity. Many hard forks, soft forks, and blockchain restarts prove that control (especially democratic control) is possible with federation.

Whatever we call it, I believe that the freedom to use different container registries and the ease of deploying containers are important factors in the success of Docker containers.

Summary

We’ve successfully defined Web3 based on end goals and user feedback — the “what” that needs to be achieved. And this definition seems to work very well. It leaves open “how” we achieve those goals. It includes the use of existing “Web2” technologies and many future projects, even those without NFTs or blockchains. And it excludes the fraudulent projects that have drawn so much skepticism.

We’ve also found some interesting intersections between Web3 and Docker!

Our job is to keep working and keep innovating. We should focus on the goals ahead and find the right technologies based on those goals.

Next up, we’ll discuss fields that are more technical. Join us as we explore using Docker with fully distributed storage options.
Source: https://blog.docker.com/feed/

AWS Controllers for Kubernetes (ACK) for Amazon RDS, AWS Lambda, AWS Step Functions, Amazon Managed Service for Prometheus, and AWS KMS now generally available

Five additional service controllers for AWS Controllers for Kubernetes (ACK) have graduated to generally available status. Customers can now provision and manage AWS resources using ACK controllers for Amazon Relational Database Service (RDS), AWS Lambda, AWS Step Functions, Amazon Managed Service for Prometheus (AMP), and AWS Key Management Service (KMS).
Source: aws.amazon.com

AWS App Runner now supports Amazon Route 53 alias records for root domain names

AWS App Runner now supports Amazon Route 53 alias records for configuring a root domain name. App Runner makes it easy for developers to quickly deploy containerized web applications and APIs to the cloud at scale, without prior infrastructure experience. When you create an App Runner service, App Runner assigns a domain name to your service by default. If you have your own domain name, you can associate it with your App Runner service as a custom domain name. Now you can use an Amazon Route 53 alias record to configure a root domain or subdomain for your App Runner service. With alias records, your App Runner service can, for example, listen directly on example.com, which was not possible with CNAME record support alone, where you had to prepend a hostname such as acme.example.com.
Source: aws.amazon.com

Announcing new AWS console home widgets for recent AWS blog posts and launch announcements

We are excited to announce two new widgets (“Latest announcements” and “Recent AWS blog posts”) for the AWS console home page. These widgets make it easier to learn about new AWS features and to get the latest news about AWS launches, events, and more. The AWS blog posts and launch announcements shown depend on the services used in your applications.
Source: aws.amazon.com

Easily process your data while using Amazon Lookout for Metrics

We are excited to announce that you can now filter your data by its dimensions while using Amazon Lookout for Metrics. Amazon Lookout for Metrics uses machine learning (ML) to automatically monitor your most important business metrics, with greater speed and accuracy than traditional anomaly detection methods. The service also makes it easier to diagnose the root causes of anomalies, such as unexpected dips in revenue, high shopping cart abandonment rates, spikes in transaction failures, increases in new customer sign-ups, and many more.
Source: aws.amazon.com