The Future of Plugin, Theme, and Services Purchases on WordPress.com

Recently we shared that you can now purchase six popular Automattic plugins right from your WordPress.com dashboard. We’re intentionally testing this out with our own products before opening it up to the broader community. This is the first step in our plan to make taking your site to the next level faster, easier, and more flexible than ever before.

But it’s really just the beginning.

What’s coming soon

Today, we’d like to share a vision of what’s coming for instant purchases of plugins, themes, and even services – all from right within your WordPress.com dashboard. This will help you level up your site and make any goal you bring to WordPress.com a reality – with increased ease and convenience. 

Everything you need, one click away

WordPress.com already comes with a suite of powerful, adaptable tools to bring your site, blog, or store to life. On top of those tools, our Business and eCommerce customers have the option of making use of thousands of free and paid themes and plugins from across the wider WordPress ecosystem. In the near future, this will be available for all WordPress.com customers.

The new integrated experience will take that one step further, making it one-click simple to get up and running and providing customers with:

A curated selection of the best plugins for every need, saving you the hassle of searching for and comparing the hundreds of options available.
Premium themes that are designed to look beautiful the second they’re activated.
Professional help to make your vision a reality – even when you don’t have time to do it yourself.
Managed Plugins and Themes, giving you the peace of mind that any plugin or theme you purchase is fully managed by the team at WordPress.com. No security patches. No update nags. It just works, leaving you to focus on the things that matter most.
The knowledge that you’re supporting the ecosystem of WordPress community developers and service providers as they support you in turn with your personal or business goals.

Powered by the WordPress community

WordPress isn’t the world’s most popular website builder by accident. Our roots in a huge, and hugely creative, open source community make the platform everything it is and can be.

Giving WordPress.com customers the very best tools and support to achieve their goals will take a village. We’ll be partnering with developers and service providers from across the WordPress ecosystem (and across every part of the world) to make that happen.

As Matt Mullenweg, our CEO and co-founder of the WordPress open source project, said recently:

“We’ve got about 2 million people with saved payment details that we can make it one-click easy [for folks] to upgrade, so hopefully this represents a big new potential audience and customer base for people selling things in the WP ecosystem. And of course, we will prioritize working with developers and companies who participate in Five for the Future and contribute back to the WP community.”

Get on the early access list

If you’re a WordPress plugin or theme developer, or you provide professional services for WordPress users, we’d love to hear from you today.

Drop your details in the form below, and as we work to expand the products and services we’ll bring to WordPress.com customers, you’ll be first on the list when we start reaching out to form new partnerships.

Submit a form.

We can’t wait to work with you!
Source: RedHat Stack

It’s time to nail down the Data Center as a Service (DCaaS) definition

It’s no secret that the COVID-19 pandemic has accelerated digital transformation around the world. According to research firm Information Services Group, the pandemic has sped up digital transformation by three to five years as enterprises seek to increase operational efficiency, improve customer experiences, and boost competitiveness. For enterprises that have been cautious about adopting digital … Continued
Source: Mirantis

Women Techmakers journey to Google Cloud certification

In many places across the globe, March is celebrated as Women’s History Month, and March 8th, specifically, marks the day known around the world as International Women’s Day. Here at Google, we’re excited to celebrate women from all backgrounds and are committed to increasing the number of women in the technology industry. Google’s Women Techmakers community provides visibility, community, and resources for women in technology to drive participation and innovation in the field. This is achieved by hosting events, launching resources, and piloting new initiatives with communities and partners globally. By joining Women Techmakers, you’ll receive regular emails with access to resources, tools, and opportunities from Google and Women Techmakers partnerships to support you in your career.

Google Cloud, in partnership with Women Techmakers, has created an opportunity to bridge the gaps in the credentialing space by offering a certification journey for Ambassadors of the Women Techmakers community. Participants will have the opportunity to take part in a free-of-charge, 6-week cohort learning journey, including weekly 90-minute exam guide review sessions led by a technical mentor, peer-to-peer support in the form of an online community, and 12 months of access to Google Cloud’s on-demand learning platform, Google Cloud Skills Boost. Upon completion of the coursework required in the learning journey, participants will receive a voucher for the Associate Cloud Engineer certification exam.

This program, and other similar offerings such as Cloud Career Jumpstart and the learning journey for members transitioning out of the military, are just a few examples of the investment Google Cloud is making in the future of the technology workforce. Are you interested in staying in the loop with future opportunities with Google Cloud? Join our community here.
Source: Google Cloud Platform

Leveraging OpenTelemetry to democratize Cloud Spanner Observability

Today we’re announcing the launch of an OpenTelemetry receiver for Cloud Spanner, which provides an easy way for you to process and visualize metrics from Cloud Spanner system tables and export them to the APM tool of your choice. We have also built a reference integration with Prometheus and sample Grafana dashboards that customers can use as a template for their own troubleshooting needs. This receiver is available starting with version v0.41.0.

Whether you are a database admin or a developer, it is important to have tools that help you understand the performance of your database, detect if something goes wrong (elevated latencies, increased error rates, reduced throughput, etc.), and identify the root cause of these signals. Cloud Spanner offers a wide portfolio of observability tools that allow you to easily monitor database performance and diagnose and fix potential issues. However, some of our customers would like the flexibility of consuming Cloud Spanner metrics in their own observability tooling, which could be either an open source combination of a time-series database like Prometheus coupled with a Grafana dashboard, or a commercial Application Performance Monitoring (APM) tool like Splunk, Datadog, Dynatrace, New Relic, or AppDynamics. The reason is that organizations have already invested in their own observability tooling and don’t want to switch, since moving to a different vendor or visualization console would require a great deal of effort. This is where OpenTelemetry comes in.

OpenTelemetry is a vendor-agnostic observability framework for instrumenting, generating, collecting, and exporting telemetry data (traces, metrics, and logs). It integrates with many libraries and frameworks across various languages to offer a large set of automatic instrumentation capabilities.

The OpenTelemetry Receiver

An OpenTelemetry receiver is a component of the OpenTelemetry Collector, which is built on a receiver–exporter model: by installing the new receiver for Cloud Spanner and configuring a corresponding exporter, developers can export metrics to their APM tool of choice. This architecture offers a vendor-agnostic way to receive, process, and export telemetry data. It removes the need to run, operate, and maintain multiple agents or collectors that send traces and metrics in proprietary formats to one or more tracing and/or metrics backends.

Cloud Spanner has a number of introspection tools in the form of system tables (built-in tables that you can query to gain helpful insights about operations in Spanner, such as queries, reads, and transactions). With the introduction of the OpenTelemetry receiver for Cloud Spanner, developers can now consume these metrics and visualize them in their APM tool.
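To give a sense of the kind of data the receiver scrapes, the sketch below queries one of Spanner’s built-in statistics tables directly with the .NET client library (Google.Cloud.Spanner.Data). The project, instance, and database IDs are placeholders, and the exact columns of SPANNER_SYS.QUERY_STATS_TOP_MINUTE may differ by Spanner version, so treat this as a minimal illustrative example rather than a drop-in query.

using System;
using System.Threading.Tasks;
using Google.Cloud.Spanner.Data;

class QueryStatsSample
{
    static async Task Main()
    {
        // Placeholder connection string: replace the project, instance, and database IDs.
        var connectionString =
            "Data Source=projects/my-project/instances/my-instance/databases/my-database";

        using var connection = new SpannerConnection(connectionString);
        await connection.OpenAsync();

        // Query the per-minute query statistics system table, one of the
        // sources the OpenTelemetry receiver reads from.
        var command = connection.CreateSelectCommand(
            @"SELECT text, execution_count, avg_latency_seconds
              FROM spanner_sys.query_stats_top_minute
              ORDER BY interval_end DESC
              LIMIT 10");

        using var reader = await command.ExecuteReaderAsync();
        while (await reader.ReadAsync())
        {
            var executions = (long)reader["execution_count"];
            var avgLatency = (double)reader["avg_latency_seconds"];
            var queryText = (string)reader["text"];
            Console.WriteLine($"{executions,8} executions, {avgLatency:F4}s avg latency: {queryText}");
        }
    }
}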
Reference Implementation

As a reference implementation, we have created a set of sample dashboards on Grafana, which consume metrics both from Prometheus (exported by the OpenTelemetry Collector) and Cloud Monitoring to enable an end-to-end debugging experience.

NOTE: Instead of deploying a self-managed instance of Prometheus, customers can now also use Google’s managed service for Prometheus. Using this service will let you monitor and alert on your workloads, using Prometheus, without having to manually manage and operate Prometheus at scale. Learn more about using this service here.

Prerequisites

Prometheus installed and configured.
OpenTelemetry Collector version v0.41.0 (or higher).

Here are the specific configurations of these components:

OpenTelemetry collector

Below is a sample configuration file that enables the receiver and sets up an endpoint for Prometheus to scrape metrics from.

[config.yml]

receivers:
  googlecloudspanner:
    collection_interval: 60s
    top_metrics_query_max_rows: 100
    # backfill_enabled: true
    projects:
      - project_id: "<YOUR_PROJECT>"
        service_account_key: "<SERVICE_ACCOUNT_KEY>.json"
        instances:
          - instance_id: "<YOUR_INSTANCE>"
            databases:
              - "<YOUR_DATABASE>"

exporters:
  prometheus:
    send_timestamps: true
    endpoint: "0.0.0.0:8889"

  logging:
    loglevel: debug

processors:
  batch:
    send_batch_size: 200

service:
  pipelines:
    metrics:
      receivers: [googlecloudspanner]
      processors: [batch]
      exporters: [logging, prometheus]

Prometheus

On Prometheus, you need to add a scrape configuration like so:

[prometheus.yml]

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "otel"
    honor_timestamps: true
    static_configs:
      - targets: ["collector:8888", "collector:8889"]

Grafana

Finally, you need to configure Grafana and add data sources and dashboards. Our reference dashboards use two data sources: Cloud Monitoring and Prometheus. This sample configuration file can be used with the dashboards we’ve shared above.

[datasource.yml]

apiVersion: 1

datasources:
- name: Google Cloud Monitoring
  type: stackdriver
  access: proxy
  jsonData:
    tokenUri: https://oauth2.googleapis.com/token
    clientEmail: <YOUR SERVICE-ACCOUNT EMAIL>
    authenticationType: jwt
    defaultProject: <YOUR SPANNER PROJECT NAME>
  secureJsonData:
    privateKey: |
      <YOUR SERVICE-ACCOUNT PRIVATE KEY BELOW>
      -----BEGIN PRIVATE KEY-----
      -----END PRIVATE KEY-----

- name: Prometheus
  type: prometheus
  # Access mode - proxy (server in the UI) or direct (browser in the UI).
  access: proxy
  url: http://prometheus:9090

Sample Dashboards

The monitoring dashboard, powered by Cloud Monitoring metrics.
The Query Insights dashboard, powered by Prometheus.

We believe that a healthy observability ecosystem serves our customers well, and this is reflected in our continued commitment to open-source initiatives. We’ve received the following feedback from the OpenTelemetry community on this implementation:

“OpenTelemetry has grown from a proposal between two open-source communities to the north star for the collection of metrics and other observability signals. Google has strengthened their commitment to our community by constantly supporting OpenTelemetry standards. Using this implementation and the corresponding dashboards, developers can now consume these metrics in any tooling of their choice, and will be very easily able to debug common issues with Cloud Spanner.” —Bogdan Drutu, Co-Founder of OpenTelemetry

What’s next?

We will continue to provide flexible experiences to developers, embrace open standards, support our partner ecosystem, and continue being a key contributor to the open source ecosystem. We will also continue to provide best-in-cloud native observability tooling in our console so that our customers get the best experience wherever they are.
To learn more about Cloud Spanner’s introspection capabilities, read this blog post, and to learn more about Cloud Spanner in general, visit our website.
Source: Google Cloud Platform

Technical leaders agree: AI is now a necessity to compete

AI is enabling new experiences everywhere. When people watch a captioned video on their phone, search for information online, or receive customer assistance from a virtual agent, AI is at the heart of those experiences. As users increasingly expect the conveniences that AI can unlock, these capabilities are seen less as incremental improvements and more as core to any app experience. A recent Forrester study shows that 84 percent of technical leaders feel they need to implement AI into apps to maintain a competitive advantage. Over 70 percent agree that the technology has graduated out of its experimental phase and now provides meaningful business value.

To make AI a core component of their business, organizations need faster, responsible ways to implement AI into their systems, ideally using their teams’ existing skills. In fact, 81 percent of technical leaders surveyed in the Forrester study say they would use more AI if it were easier to develop and deploy.

So, how can leaders accelerate the execution of their AI ambitions? Here are three important considerations for any organization to streamline AI deployments into their apps:

1. Take advantage of cloud AI services

There are cloud AI services that provide prebuilt AI models for key use cases, like translation and speech-to-text transcription. This makes it possible to implement these capabilities into apps without requiring data science teams to build models from scratch. Two-thirds of technical leaders say the breadth of use cases supported by cloud AI services is a key benefit. Using the APIs and SDKs provided, developers can add and customize these services to meet their organization’s unique needs. And prebuilt AI models benefit from regular updates for greater accuracy and regulatory compliance.

Azure has two categories of these services:

Azure Applied AI Services that are scenario-specific to accelerate time to value.
Cognitive Services that make high-quality AI models available through APIs for a more customized approach.
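As a simple illustration of calling one of these prebuilt services, the sketch below sends a single request to the Cognitive Services Translator v3 REST API from .NET. The subscription key, region, and target language are placeholders you would replace with your own values; treat it as a minimal sketch of the pattern rather than production code.

using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

class TranslateSample
{
    static async Task Main()
    {
        // Placeholders: use your own Translator resource key and region.
        const string subscriptionKey = "<YOUR_TRANSLATOR_KEY>";
        const string region = "<YOUR_RESOURCE_REGION>";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Region", region);

        // Translate one sentence from English to German.
        var url = "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=de";
        var body = new[] { new { Text = "Hello, world!" } };

        var response = await client.PostAsJsonAsync(url, body);
        response.EnsureSuccessStatusCode();

        // The response is a JSON array with one entry per input text,
        // each containing a list of translations.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}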

2. Empower your developers

Your developers can use APIs and SDKs within your cloud AI services to build intelligent capabilities into apps within their current development process. Developers of any skill level can get started quickly using the programming languages they already know. And should developers need added support, cloud vendors readily offer learning resources for quicker onboarding and troubleshooting.

Azure offers a 30-day developer learning journey for understanding key AI concepts, as well as step-by-step guidance on Microsoft Learn for those who want to build AI-powered applications.

3. Prioritize your most relevant use cases first

With AI, time to value is a matter of selecting use cases that will provide the most utility in the shortest time. Identify the needs within your organization to determine where AI capabilities can deliver the greatest impact.

For example, customers like Ecolab harness knowledge mining with Azure Cognitive Search to help their agents retrieve key information instantly, instead of spending over 30 minutes sifting through thousands of documents each time. KPMG applies speech transcription and language understanding with Azure Cognitive Services to reduce the amount of time to identify compliance risks in contact center calls from 14 weeks to two hours. And Volkswagen uses machine translation with Azure Translator to rapidly localize content including user manuals and management documents into 40 different languages.

These are just a few of the practical ways organizations have found efficiency and utility in out-of-the-box AI services that didn’t demand an unreasonable investment of time, effort, or customization to deploy.

Create business value with AI starting today

Implementing AI is simpler and more accessible than ever. Organizations of every size are deploying AI solutions that increase efficiencies, drive down overhead, or delight employees and customers in ways that are establishing them as brands of choice. It’s a great time to join them.

Learn more

Read the commissioned study by Forrester Consulting, “Fuel Application Innovation With Cloud AI Services”.
Watch the webinar on the Forrester study.
Visit the Azure AI page for more on key AI use cases.

Source: Azure

Introducing dynamic lineage extraction from Azure SQL Databases in Azure Purview

Data citizens, including both technical and business users, rely on data lineage for root cause analysis, impact analysis, data quality tracing, and other data governance applications. In the current data landscape, where data moves fluidly across locations (on-premises to and across clouds) and across data platforms and applications, it is increasingly important to map the lineage of data. That’s why we’re introducing dynamic lineage extraction, currently in preview.

Conventional systems map lineage by parsing data transformation scripts, an approach also called static code analysis. This works well in simple scenarios. For example, when a SQL script produces a target table Customer_Sales by joining two tables called Customer and Sales, static code analysis can map the data lineage. However, in many real use cases, data processing workloads are quite complicated. The scripts could be wrapped in a stored procedure that is parametrized and uses dynamic SQL. There could be a decision tree with an if-then-else statement executing different scripts at runtime. Or, simply, data transactions could have failed to commit at runtime.

In all these examples, dynamic analysis is required to track lineage effectively. Even more importantly, static lineage analysis does not associate data and processes with runtime metadata, which limits customer applications significantly. For instance, dynamic lineage that records by whom and when a stored procedure was run, and from what application and which server, will enable customers to govern privacy, comply with regulations, accelerate time-to-insight, and better understand their overall data and processes.

Dynamic data lineage—Azure SQL Databases

Today, we are announcing the preview release of dynamic lineage extraction from Azure SQL Databases in Azure Purview. Azure SQL Database is one of the most widely used relational database systems in enterprises. Stored procedures are commonly used to perform data transformations and aggregations on SQL tables for downstream applications. With this release, the Azure Purview Data Map can be further enriched with dynamic lineage metadata such as run status, the number of rows affected, the client from which the stored procedure was run, user info, and other operational details from actual runs of SQL stored procedures in Azure SQL Databases.

Limited lineage metadata from static code analysis

The actual implementation involves the Azure Purview Data Map tapping into the instrumentation framework of the SQL engine and extracting runtime logs to aggregate dynamic lineage. The runtime logs also provide the actual queries executed in the SQL engine for data manipulation, which Azure Purview uses to map data lineage and gather additional detailed provenance information. Azure Purview scanners run several times a day to keep the dynamic lineage and provenance from Azure SQL Databases fresh.

To learn more about Azure Purview dynamic data lineage from Azure SQL Databases, check out the video:

Get started with Azure Purview today

The native integration with Azure SQL Databases for dynamic lineage and provenance extraction is the first of its kind and Azure Purview is leading the way. Follow the steps below to get started.

Quickly and easily create an Azure Purview account to try the generally available features.
Read quick start documentation on how to connect an Azure SQL Database to an Azure Purview account for dynamic data lineage.

Source: Azure

Meet PCI compliance with credit card tokenization

In building and running a business, the safety and security of your and your customers' sensitive information and data is a top priority, especially where storing financial information and processing payments are concerned. The Payment Card Industry Data Security Standard (PCI DSS)1 defines a set of regulations put forth by the largest credit card companies to help reduce costly consumer and bank data breaches.

In this context, PCI compliance refers to meeting the PCI DSS’ requirements for organizations and sellers to help safely and securely accept, store, process, and transmit cardholder data during credit card transactions, to prevent fraud and theft.

Towards confidential computing

In June 2021, the Monetary Authority of Singapore (MAS)2 issued an advisory circular on addressing the technology and cyber security risks associated with public cloud adoption. The paper describes a set of risk management principles and best practice standards to guide financial institutions in implementing appropriate data security measures to help protect the confidentiality and integrity of sensitive data in the public cloud, taking into consideration data-at-rest, data-in-motion, and data-in-use where applicable3. Specifically, section 21, reproduced below, states that for data that is being used or processed in the public cloud, financial institutions (FIs) may implement confidential computing solutions if available from the cloud service provider. Confidential computing solutions protect data by isolating sensitive data in a protected, hardware-based computing enclave.

Data security and cryptographic key management

FIs should implement appropriate data security measures to protect the confidentiality and integrity of sensitive data in the public cloud, taking into consideration data-at-rest, data-in-motion and data-in-use where applicable.

For data-at-rest, that is, data in cloud storage, FIs may implement additional measures e.g. data object encryption, file encryption or tokenization in addition to the encryption provided at the platform level.
For data-in-motion, that is, data that traverses to and from, and within the public cloud, FIs may implement session encryption or data object encryption in addition to the encryption provided at the platform level.
For data-in-use, that is, data that is being used or processed in the public cloud, FIs may implement confidential computing solutions if available from the CSPs. Confidential computing solutions protect data by isolating sensitive data in a protected, hardware-based computing enclave during processing.

Confidential virtual machines

Building on these principles, FIs can leverage Azure confidential computing to create an end-to-end data and code protection solution on the latest technology for hardware-based memory encryption. The solution presented in this article for processing credit card payments makes use of confidential virtual machines (CVMs) running on AMD Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP) technology.

AMD introduced SEV to isolate virtual machines from the hypervisor. Hypervisors are typically considered trusted components in the virtualization security model, and many customers have requested a VM trust model that reduces the exposure to vulnerabilities in the infrastructure. With SEV, each VM is assigned a unique encryption key wired into the CPU, which is used to automatically encrypt the memory the hypervisor allocates to run that VM.

The latest generation of SEV technology includes SNP capability. SNP adds new hardware-based security by providing strong memory integrity protection from potential attacks to the hypervisor, including data replay and memory re-mapping.

Azure confidential computing offers confidential VMs based on AMD processors with SEV-SNP technology. Confidential VMs are intended for tenants with high security and confidentiality requirements. You can use confidential VMs for migrations without making changes to your code, with the platform helping to protect your VM’s state from being read or modified. Benefits of confidential VMs include:

Robust hardware-based isolation between virtual machines, hypervisor, and host management code.
Attestation policies to ensure the host’s compliance before deployment.
Cloud-based full-disk encryption before the first boot.
VM encryption keys that the platform or the customer (optionally) owns and manages.
Secure key release with cryptographic binding between the platform’s successful attestation and the VM’s encryption keys.
Dedicated virtual Trusted Platform Module (TPM) instance for attestation and protection of keys and secrets in the virtual machine.

Provisioning a confidential VM in Azure is as simple as provisioning any other virtual machine, using your preferred tool: either manually via the Azure Portal, or by scripting with the Azure command-line interface (CLI). Figure 1 shows the process of creating a virtual machine in the Azure Portal, with specific attention to the “Security type” attribute. To provision a confidential VM based on AMD SEV-SNP technology, you have to select that specific entry in the dropdown list. At the time of writing (March 2022), confidential VMs are in preview in Azure, and thus limited in availability across regions. As the service enters general availability, more regions will become available for deployment.

Figure 1: Confidential Virtual Machine in Azure Portal.

Credit card tokenization

In the tokenization scenario shown in Figure 2, the process of tokenization acts as a random oracle: given an input, it generates a non-predictable output. The random output varies even if the same input is provided. For example, when a customer makes a second payment using the same credit card used in a previous transaction, the token generated will be different. Lastly, when that random output is provided back to the service, the tokenization interface returns the original input.
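The Token type referenced later in this article is not shown in the post; below is a minimal sketch, with hypothetical class and member names, of what such a random, non-predictable token generator could look like. It simply draws bytes from a cryptographically secure random number generator, so the output carries no information about the card data it will later be paired with.

using System;
using System.Security.Cryptography;

// Hypothetical token type; the original post only references Token.CreateNew().
public sealed record Token(string Value)
{
    public static Token CreateNew()
    {
        // 32 random bytes from a CSPRNG: unpredictable and independent of any input data.
        byte[] bytes = RandomNumberGenerator.GetBytes(32);
        return new Token(Convert.ToHexString(bytes));
    }
}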

It is no coincidence that I used the term “interface” to describe this tokenization service. The technical implementation of this random generator is a Web API running on the .NET 6 runtime. Figure 2 describes the reference architecture for the solution.

Figure 2: Credit card tokenization architecture reference.

A payment transaction is initiated by the customer and payment data is transferred to the .NET Web API. This API is running on a confidential VM.
The random token is generated by the API based on the input data. Tokenization also includes encryption of that data with a symmetric cryptographic algorithm (specifically, AES).
The encryption key is stored in Azure Key Vault running on a managed Hardware Security Module (HSM). This is a critical component of the confidential solution, as the encryption key is preserved inside the HSM. The HSM helps protect keys from the cloud provider or any other rogue administrator. Only the Web API app is authorized to access the secret key.
The following code snippet shows the implementation of the key retrieval from AKV inside the Get method of the Web API.

[HttpGet(Name = "GetToken")]
public async Task<TokenTuple> Get(CreditCard card)
{
        // Retrieve the AES encryption key from AKV
        string akvName = Environment.GetEnvironmentVariable("KEY_VAULT_NAME");
        var akvUri = $"https://{akvName}.vault.azure.net";
        var akvClient = new SecretClient(new Uri(akvUri), new Azure.Identity.DefaultAzureCredential());
        var secret = await akvClient.GetSecretAsync("AesEncryptionKey");
        EncryptionKey key = JsonSerializer.Deserialize<EncryptionKey>(secret.Value.Value);

Azure Key Vault Managed HSM is a fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguard cryptographic keys for your cloud applications, using FIPS 140-2 Level 3 validated HSMs.

The service is highly available and zone resilient (where availability zones are supported): Each HSM cluster consists of multiple HSM partitions that span across at least two availability zones. If the hardware fails, member partitions for your HSM cluster will be automatically migrated to healthy nodes.

Each Managed HSM instance is dedicated to a single customer and consists of a cluster of multiple HSM partitions. Each HSM cluster uses a separate customer-specific security domain that cryptographically isolates each customer's HSM cluster.

The HSM is FIPS 140-2 Level 3 validated, which means it meets the requirements of the Federal Information Processing Standard (FIPS) 140-2 at Level 3.

AKV Managed Hardware Security Module (MHSM) also assists with data residency, as it doesn't store or process customer data outside the region in which the customer deploys the HSM instance.

Lastly, with AKV MHSM, customers can generate HSM-protected keys in their own on-premises HSM and import them securely into Azure.

The obtained encryption key is then used to encrypt the payment data with a symmetric cipher. The encrypted value is associated with a newly generated token and added as a message to the queue. In the code snippet below, the token and the encrypted data are stored as a pair in a tuple object and then enqueued.
// Encrypt the credit card information
string json = JsonSerializer.Serialize(card);
string encrypted = SymmetricCipher.EncryptToString(json, key);

// Generate token
Token token = Token.CreateNew();

// Add the token tuple to the queue
TokenTuple tuple = new (token, encrypted);
QueueManager.Instance.Enqueue(tuple);
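The SymmetricCipher helper used above is not included in the post. Here is a minimal sketch of what such an AES helper could look like, under a few assumptions of my own: the class and method names are hypothetical, the key parameter is a raw byte array rather than the post's EncryptionKey type, and there is no authenticated encryption or key rotation.

using System;
using System.Security.Cryptography;
using System.Text;

// Hypothetical helper; the original post only shows SymmetricCipher.EncryptToString(json, key).
public static class SymmetricCipher
{
    // Encrypts plaintext with AES-CBC and returns IV + ciphertext as a Base64 string.
    public static string EncryptToString(string plaintext, byte[] key)
    {
        using var aes = Aes.Create();
        aes.Key = key;        // e.g., a 256-bit key retrieved from Key Vault
        aes.GenerateIV();     // fresh IV per message

        using var encryptor = aes.CreateEncryptor();
        byte[] plainBytes = Encoding.UTF8.GetBytes(plaintext);
        byte[] cipherBytes = encryptor.TransformFinalBlock(plainBytes, 0, plainBytes.Length);

        // Prepend the IV so the decrypting side can recover it.
        byte[] payload = new byte[aes.IV.Length + cipherBytes.Length];
        Buffer.BlockCopy(aes.IV, 0, payload, 0, aes.IV.Length);
        Buffer.BlockCopy(cipherBytes, 0, payload, aes.IV.Length, cipherBytes.Length);
        return Convert.ToBase64String(payload);
    }
}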

The generated token is added to an in-memory queue; there is no persistence of data in the solution. The token expires after a configurable amount of time, typically a few seconds, which allows the payment gateway to process the payment information from the queue. The combination of running this solution on confidential infrastructure and the volatility of data in the queue helps customers make their system PCI compliant: no sensitive payment data is stored or processed in clear text. (A minimal sketch of such an expiring queue appears after the code below.)
The queue mechanism can be implemented with any highly reliable queue engine, such as RabbitMQ. Because everything runs in a confidential VM, the confidentiality of data in the queue is retained even during in-memory processing by a third-party application such as RabbitMQ, with no code changes.
The payment gateway implements the Publish-Subscribe (Pub-Sub) pattern for retrieving messages from the queue, using a webhook to register the endpoint that is invoked when a message is de-queued.
[HttpGet(Name = "ResolveToken")]
        public async Task Post(string subscriberUri)
        {
            TokenTuple tuple = QueueManager.Instance.Dequeue();
            await HttpClientFactory.PostAsync(subscriberUri, tuple);
        }
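The QueueManager singleton is likewise not shown in the post. As promised above, here is a minimal sketch of an expiring, in-memory, thread-safe queue; the names are hypothetical and the expiry strategy is far simpler than a production system would use.

using System;
using System.Collections.Concurrent;

// Hypothetical in-memory queue with token expiry; the post only references
// QueueManager.Instance.Enqueue(...) and QueueManager.Instance.Dequeue().
// TokenTuple is the pair type used elsewhere in the article.
public sealed class QueueManager
{
    public static QueueManager Instance { get; } = new();

    private readonly ConcurrentQueue<(TokenTuple Tuple, DateTimeOffset Expiry)> _queue = new();
    private readonly TimeSpan _timeToLive = TimeSpan.FromSeconds(30);   // configurable TTL

    public void Enqueue(TokenTuple tuple) =>
        _queue.Enqueue((tuple, DateTimeOffset.UtcNow.Add(_timeToLive)));

    public TokenTuple Dequeue()
    {
        // Skip over any entries whose time-to-live has elapsed.
        while (_queue.TryDequeue(out var entry))
        {
            if (entry.Expiry > DateTimeOffset.UtcNow)
                return entry.Tuple;
        }
        throw new InvalidOperationException("No unexpired token is available in the queue.");
    }
}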

Get started

To get started with Azure confidential computing and implement a similar solution, I recommend having a look at our official Azure confidential computing documentation.

More specifically, you may want to start by creating a confidential VM as your test environment for publishing your code. You can follow the instructions described in this article to configure a CVM manually in the Azure Portal, or you may want to leverage an ARM template for automation.

All virtual machines in Azure are protected with policies and access constraints. Confidential VMs add defense in depth at the hardware root: any data and code running in a confidential VM are isolated from the hypervisor and thus protected from the cloud service provider. As with any IaaS service, you are still responsible for provisioning and maintenance, including OS patching and runtime installation. And as with any other VM, you have the freedom to install and run any software that is compatible with the installed operating system. This enables you to "lift and shift" any existing application and code to Azure confidential computing and immediately benefit from the in-memory data protection that Azure confidential computing delivers.

References

1The Payment Card Industry Data Security Standard (PCI DSS).

2The Monetary Authority of Singapore (MAS).

3Advisory on Addressing the Technology and Cyber Security Risks Associated with Public Cloud Adoption, MAS, June 1, 2021.
Source: Azure