ChatGPT is now available in Azure OpenAI Service

Today, we are thrilled to announce that ChatGPT is available in preview in Azure OpenAI Service. With Azure OpenAI Service, over 1,000 customers are applying the most advanced AI models—including DALL-E 2, GPT-3.5, Codex, and other large language models backed by the unique supercomputing and enterprise capabilities of Azure—to innovate in new ways.

Since ChatGPT was introduced late last year, we’ve seen a variety of scenarios it can be used for, such as summarizing content, generating suggested email copy, and even helping with software programming questions. Now, with ChatGPT in preview in Azure OpenAI Service, developers can integrate custom AI-powered experiences directly into their own applications: enhancing existing bots to handle unexpected questions, recapping call center conversations to enable faster customer support resolutions, creating new ad copy with personalized offers, automating claims processing, and more. Azure Cognitive Services can be combined with Azure OpenAI to create compelling use cases for enterprises. For example, see how Azure OpenAI and Azure Cognitive Search can be combined to use conversational language for knowledge base retrieval on enterprise data.
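
For instance, here is a minimal sketch of calling the ChatGPT model from Python using the OpenAI SDK's Azure support; the endpoint, API key, API version, and deployment name are placeholders to replace with your own values:

import openai

# Point the OpenAI SDK at an Azure OpenAI resource.
openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE-NAME.openai.azure.com/"  # placeholder endpoint
openai.api_version = "2023-03-15-preview"  # assumed preview API version for chat models
openai.api_key = "YOUR-AZURE-OPENAI-KEY"  # placeholder key

# "engine" is the deployment name you gave the gpt-35-turbo model in Azure.
response = openai.ChatCompletion.create(
    engine="my-chatgpt-deployment",  # placeholder deployment name
    messages=[
        {"role": "user", "content": "Summarize the key points of this support call: ..."}
    ],
)
print(response["choices"][0]["message"]["content"])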

Customers can begin using ChatGPT today. It is priced at $0.002 per 1,000 tokens (so processing one million tokens costs $2.00), and billing for all ChatGPT usage begins March 13th.

Real business value

Customers across industries are seeing business value from using Azure OpenAI Service, and we’re excited to see how organizations such as The ODP Corporation, Singapore’s Smart Nation Digital Government Office, and Icertis will continue to harness the power of Azure OpenAI and the ChatGPT model to achieve more:

“The ODP Corporation is excited to leverage the powerful AI technology of ChatGPT from Azure OpenAI Service, made possible through our collaboration with Microsoft. This technology will help [The ODP Corporation] drive continued transformation in our business, more effectively explore new possibilities, and design innovative solutions to deliver even greater value to our customers, partners, and associates. [The ODP Corporation] is building a ChatGPT-powered chatbot to support our internal business units, specifically HR. The chatbot has been successful in improving HR's document review process, generating new job descriptions, and enhancing associate communication. By utilizing ChatGPT's natural language processing and machine learning capabilities, [The ODP Corporation] aims to streamline its internal operations and drive business success. Embracing this cutting-edge technology will help increase our competitive edge in the market and enhance our customer experience.”—Carl Brisco, Vice President Product and Technology, The ODP Corporation

“Singapore's Smart Nation Digital Government Office is constantly looking to empower our public officers with technology to deliver better services to Singaporeans and better ideas for Singapore. ChatGPT and large language models more generally, hold the promise of accelerating many kinds of knowledge work in the public sector, and the alignment techniques embedded in ChatGPT help officers interact with these powerful models in more natural and intuitive ways. Azure OpenAI Service’s enterprise controls have been key to enabling exploration of these technologies across policy, operations, and communication use cases.”—Feng-ji Sim, Deputy Secretary, Smart Nation Digital Government Office, under the Prime Minister’s Office, Singapore

“Contracts are the foundation of commerce, governing every dollar in and out of an enterprise. At Icertis, we are applying AI to contracts so businesses globally can drive revenue, reduce costs, ensure compliance, and mitigate risk. The availability of ChatGPT on Microsoft's Azure OpenAI service offers a powerful tool to enable these outcomes when leveraged with our data lake of more than two billion metadata and transactional elements—one of the largest curated repositories of contract data in the world. Generative AI will help businesses fully realize the intent of their commercial agreements by acting as an intelligent assistant that surfaces and unlocks insights throughout the contract lifecycle. Delivering this capability at an enterprise scale, backed by inherent strengths in the security and reliability of Azure, aligns with our tenets of ethical AI and creates incredible new opportunities for innovation with the Icertis contract intelligence platform.”—Monish Darda, Chief Technology Officer at Icertis

In addition to all the ways organizations—large and small—are using Azure OpenAI Service to achieve business value, we’ve also been working internally at Microsoft to blend the power of large language models from OpenAI and the AI-optimized infrastructure of Azure to introduce new experiences across our consumer and enterprise products. For example:

•    GitHub Copilot leverages AI models in Azure OpenAI Service to help developers accelerate code development with its AI pair programmer.
•    Microsoft Teams Premium includes intelligent recap and AI-generated chapters to help individuals, teams, and organizations be more productive.
•    Microsoft Viva Sales’ new AI-powered seller experience offers suggested email content and data-driven insights to help sales teams focus on strategic selling motions to customers.
•    Microsoft Bing introduced an AI-powered chat option to enhance consumers’ search experience in completely new ways.

These are just a few examples of how Microsoft is helping organizations leverage generative AI models to drive AI transformation.

Customers and partners can also create new intelligent apps and solutions that stand out from the competition using a no-code approach in Azure OpenAI Studio. In addition to supporting customization of every model available through the service, Azure OpenAI Studio provides a unique interface for customizing ChatGPT and configuring response behavior that aligns with your organization.

Watch how you can customize ChatGPT using the system message right within Azure OpenAI Studio.
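
As a sketch of the same idea in code, a system message placed at the start of the conversation steers the model's tone and scope; the persona text below is purely illustrative:

# A system message constrains what the assistant talks about and how it responds.
messages = [
    {
        "role": "system",
        "content": "You are a support assistant for Contoso. Answer only questions "
                   "about Contoso products, and keep responses friendly and concise.",
    },
    {"role": "user", "content": "What is your return policy?"},
]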

A responsible approach to AI

We’re already seeing the impact AI can have on people and companies, helping improve productivity, amplify creativity, and augment everyday tasks. We’re committed to making sure AI systems are developed responsibly, work as intended, and are used in ways that people can trust. Generative models, such as ChatGPT or the DALL-E image generation model, create new artifacts, and these types of models bring new challenges; for instance, they could be used to produce convincing but incorrect text, or realistic images of scenes that never happened.

Microsoft employs a layered set of mitigations at four levels, designed to address these challenges and aligned with Microsoft's Responsible AI Standard. First, application-level protections that put the customer in charge, for instance, disclosing that text output was generated by AI and requiring the user to approve it. Second, technical protections such as input and output content filtering. Third, process and policy protections, ranging from systems for reporting abuse to service-level agreements. And fourth, documentation, such as design guidelines and transparency notes, that explains the benefits of a model and what we have tested.

We believe AI will profoundly change how we work and how organizations operate in the coming months. To meet this moment, we will continue to take a principled approach to ensure our AI systems are used responsibly while listening, learning, and improving to help guide AI in a way that ultimately benefits humanity.

Getting started with Azure OpenAI Service

Learn more about Azure OpenAI Service and all the latest enhancements.
Get started with ChatGPT using Azure OpenAI Service.
Get started with the “Introduction to Azure OpenAI Service” course on Microsoft Learn.
Read the Partner announcement blog, Empowering partners to develop AI-powered apps and experiences with ChatGPT in Azure OpenAI Service.

Seth Juarez, Principal Program Manager and co-host of The AI Show, shares top use cases for Azure OpenAI Service and an example chatbot for retail using ChatGPT.
Source: Azure

Distributed Cloud-Native Graph Database with NebulaGraph Docker Extension

Graph databases have become a popular solution for storing and querying complex relationships between data. As the amount of graph data grows and the need for high concurrency increases, a distributed graph database is essential to handle the scale.

However, finding a distributed graph database that automatically shards the data, while allowing businesses to scale from small to trillion-edge-level workloads without changing the underlying storage, service architecture, or application code, can be a challenge.

In this article, we’ll look at NebulaGraph, a modern, open source database to help organizations meet these challenges.

Meet NebulaGraph

NebulaGraph is a modern, open source, cloud-native graph database, designed to address the limitations of traditional graph databases, such as poor scalability, high latency, and low throughput. NebulaGraph is also highly scalable and flexible, with the ability to handle large-scale graph data ranging from small to trillion-edge-level.

NebulaGraph has built a thriving community of more than 1000 enterprise users since 2018, along with a rich ecosystem of tools and support. These benefits make it a cost-effective solution for organizations looking to build graph-based applications, as well as a great learning resource for developers and data scientists.

The NebulaGraph cloud-native database also offers Kubernetes Operators for easy deployment and management in cloud environments. This feature makes it a great choice for organizations looking to take advantage of the scalability and flexibility of cloud infrastructure.

Architecture of the NebulaGraph database

NebulaGraph consists of three services: the Graph Service, the Storage Service, and the Meta Service (Figure 1). The Graph Service, which consists of stateless processes (nebula-graphd), is responsible for graph queries. The Storage Service (nebula-storaged) is a distributed (Raft-based) storage layer that persistently stores the graph data. The Meta Service is responsible for managing user accounts, schema information, and jobs. With this design, NebulaGraph offers great scalability, high availability, cost-effectiveness, and extensibility.

Figure 1: Overview of NebulaGraph services.

Why NebulaGraph?

NebulaGraph is ideal for graph database needs because of its architecture and design, which allow for high performance, scalability, and cost-effectiveness. Its architecture separates storage from compute, which provides the following benefits:

Automatic sharding: NebulaGraph automatically shards graph data, allowing businesses to scale from small to trillion-edge-level data volumes without having to change the underlying storage, architecture, or application code.

High performance: With its optimized architecture and design, NebulaGraph provides high performance for complex graph queries and traversal operations.

High availability: If part of the Graph Service fails, the data stored by the Storage Service remains intact.

Flexibility: NebulaGraph supports property graphs and provides a powerful query language, called Nebula Graph Query Language (nGQL), which supports complex graph queries and traversal operations. 

Support for APIs: It provides a range of APIs and connectors that allow it to integrate with other tools and services in a distributed system, as illustrated in the sketch below.
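
For example, here is a minimal connection sketch using the official nebula3-python client; the host, port, and credentials shown are common local defaults and may differ in your deployment:

from nebula3.Config import Config
from nebula3.gclient.net import ConnectionPool

# Connect to the Graph Service (nebula-graphd); 9669 is its default port.
config = Config()
pool = ConnectionPool()
pool.init([("127.0.0.1", 9669)], config)

# root/nebula are the default local credentials; change them for real deployments.
with pool.session_context("root", "nebula") as session:
    result = session.execute("SHOW SPACES;")
    print(result)

pool.close()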

Why run NebulaGraph as a Docker Extension?

In production environments, NebulaGraph can be deployed on Kubernetes or in the cloud, hiding the complexity of cluster management and maintenance from the user. However, for development, testing, and learning purposes, setting up a NebulaGraph cluster on a desktop or local environment can still be a challenging and costly process, especially for users who are not familiar with containers or command-line tools.

This is where the NebulaGraph Docker Extension comes in. It provides an elegant and easy-to-use solution for setting up a fully functional NebulaGraph cluster in just a few clicks, making it the perfect choice for developers, data scientists, and anyone looking to learn and experiment with NebulaGraph.

Getting started with NebulaGraph in Docker Desktop

Setting up

Prerequisites: Docker Desktop 4.10 or later.

Step 1: Enable Docker Extensions

Within Docker Desktop, confirm that the Docker Extensions feature is enabled (Figure 2): go to Settings > Extensions and select Enable Docker Extensions.

Figure 2: Enabling Docker Extensions within the Docker Desktop.

All Docker Extension resources are hidden by default; to make them visible, go to Settings > Extensions and check Show Docker Extensions system containers.

Step 2: Install the NebulaGraph Docker Extension

The NebulaGraph extension is available from the Extensions Marketplace in Docker Desktop and on Docker Hub. To get started, search for NebulaGraph in the Extensions Marketplace, then select Install (Figure 3).

Figure 3: Installing NebulaGraph from the Extensions Marketplace.

This step will download and install the latest version of the NebulaGraph Docker Extension from Docker Hub. You can see the installation process by clicking Details (Figure 4).

Figure 4: Installation progress.

Step 3: Waiting for the cluster to be up and running

After the extension is installed, the first run normally takes less than five minutes for the cluster to become fully functional. While waiting, we can quickly look through the Home and Get Started tabs to see details about NebulaGraph and NebulaGraph Studio, the web-based GUI utility.

We can also confirm whether it’s ready by checking the containers’ status on the extension’s Resources tab, as shown in Figure 5.

Figure 5: Checking the status of containers.

Step 4: Get started with NebulaGraph

After the cluster is healthy, we can follow the Get Started steps to log in to the NebulaGraph Studio, then load the initial dataset, and query the graph (Figure 6).

Figure 6: Logging in to NebulaGraph Studio.

Step 5: Learn more from the starter datasets 

In a graph database, the focus is on the relationships between the data. With the starter datasets available in NebulaGraph Studio, you can get a better understanding of these relationships. All you need to do is click the Download button on each dataset card on the welcome page (Figure 7).

Figure 7: Starter datasets.

For example, in the demo_sns (social network) dataset, you can find new friend recommendations by identifying second-degree friends with the most mutual friends.

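A sketch of such a query in nGQL follows; the `player` tag, `follow` edge type, and starting vertex ID are assumptions for illustration rather than the dataset’s exact schema:

MATCH (me:`player`)-[:`follow`]-(friend:`player`)-[:`follow`]-(fof:`player`)
WHERE id(me) == "player100" AND fof != me
RETURN fof.`player`.name AS candidate, count(friend) AS mutual_friends
ORDER BY mutual_friends DESC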

Figure 8: Query results shown in the Nebula console.

Instead of just displaying the query results, you can also return the entire pattern and easily gain insights. For example, in Figure 9, we can see LeBron James is on two mutual friend paths with Tim:

Figure 9: Graphing the query results.

Another example can be found in the demo_fraud_detection (loan graph) dataset, where you can check whether an applicant is connected to known risky applicants within 10 hops, as shown in the following query:

MATCH p_=(p:`applicant`)-[*1..10]-(p2:`applicant`)
WHERE id(p) == "p_200" AND p2.`applicant`.is_risky == "True"
RETURN p_ LIMIT 100

The results shown in Figure 10 indicate that this applicant is suspected to be risky because of their connection to p_190.

Figure 10: Results of query showing fraud detection risk.

By exploring the relationships between data points, we can gain deeper insights into our data and make more informed decisions. Whether you are interested in finding new friends, detecting fraudulent activity, or any other use case, the starter datasets provide a valuable starting point.

We encourage you to download the datasets, experiment with different queries, and see what new insights you can uncover, then share with us in the NebulaGraph community.

Try NebulaGraph for yourself

To learn more about NebulaGraph, visit our website, documentation site, star our GitHub repo, or join our community chat.
Source: https://blog.docker.com/feed/

AWS Lambda now supports Amazon DocumentDB change streams as an event source

AWS Lambda now supports Amazon DocumentDB change streams as an event source. The change streams feature in Amazon DocumentDB (with MongoDB compatibility) provides a time-ordered sequence of change events that occur within your cluster's collections. Customers can now consume these events in their Lambda-based serverless applications.
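
A minimal handler sketch in Python; the assumption that the batch of change events arrives under an "events" key should be verified against the AWS documentation for this event source:

import json

def handler(event, context):
    # Lambda invokes this function with batches of DocumentDB change-stream events.
    print(json.dumps(event))  # log the raw payload for inspection
    for record in event.get("events", []):  # assumed key for the event batch
        print(record)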
Source: aws.amazon.com

Amazon RDS for PostgreSQL now supports major version PostgreSQL 15

Amazon Relational Database Service (Amazon RDS) for PostgreSQL now supports the latest major version, PostgreSQL 15. New features in PostgreSQL 15 include the SQL-standard MERGE command for conditional SQL queries, performance improvements for both in-memory and disk-based sorting, and support for two-phase commit and row/column filtering in logical replication. The PostgreSQL 15 release also adds support for the new pg_walinspect extension and server-side compression with Gzip, LZ4, or Zstandard (zstd) using pg_basebackup. For more information about this release, see the PostgreSQL community announcement.
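
As a sketch of the new MERGE command, here is an upsert from Python with psycopg2; the connection string, table, and columns are illustrative:

import psycopg2

conn = psycopg2.connect("host=mydb.example.com dbname=mydb user=myuser")  # placeholder DSN
with conn, conn.cursor() as cur:
    # MERGE (new in PostgreSQL 15): update matching rows, insert the rest.
    cur.execute("""
        MERGE INTO inventory AS t
        USING (VALUES ('widget', 5)) AS s(item, qty)
        ON t.item = s.item
        WHEN MATCHED THEN UPDATE SET qty = t.qty + s.qty
        WHEN NOT MATCHED THEN INSERT (item, qty) VALUES (s.item, s.qty)
    """)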
Source: aws.amazon.com

Autocomplete suggestions are now available in AWS Marketplace search

Today we released autocomplete suggestions for AWS Marketplace search. With this feature, users visiting the AWS Marketplace website or console see search suggestions in the search bar as they type. User queries appear in the autocomplete suggestions as bold prefixes that users can select to complete their query and view the results on the main results page. The suggestions are ranked by relevance and indicate what is available, how difficult terms are spelled, and what others are searching for.
Source: aws.amazon.com

AWS Elemental MediaConvert now ingests FLAC and animated GIF inputs

Today, AWS announces the general availability of FLAC audio and animated GIF video input sources for AWS Elemental MediaConvert. These new input formats are compatible with all MediaConvert outputs. For example, you can convert lossless FLAC files into compressed audio formats such as AAC, MP3, and Ogg Vorbis, or use FLAC files as sidecar audio sources to combine with video files. Animated GIF inputs can be converted into more efficient video streaming codecs such as AVC and HEVC and distributed as standalone MP4 files or adaptive-bitrate streaming packages such as HLS or DASH.
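
A sketch of such a conversion with boto3; the role ARN, bucket paths, and job settings are placeholders and should be checked against the MediaConvert documentation:

import boto3

# MediaConvert requires an account-specific endpoint, discovered first.
mc = boto3.client("mediaconvert", region_name="eu-west-1")
endpoint = mc.describe_endpoints()["Endpoints"][0]["Url"]
mc = boto3.client("mediaconvert", region_name="eu-west-1", endpoint_url=endpoint)

# Transcode a lossless FLAC input to stereo AAC in an MP4 container.
mc.create_job(
    Role="arn:aws:iam::123456789012:role/MediaConvertRole",  # placeholder role
    Settings={
        "Inputs": [{
            "FileInput": "s3://my-bucket/input.flac",  # placeholder input
            "AudioSelectors": {"Audio Selector 1": {"DefaultSelection": "DEFAULT"}},
        }],
        "OutputGroups": [{
            "OutputGroupSettings": {
                "Type": "FILE_GROUP_SETTINGS",
                "FileGroupSettings": {"Destination": "s3://my-bucket/output/"},
            },
            "Outputs": [{
                "ContainerSettings": {"Container": "MP4"},
                "AudioDescriptions": [{
                    "CodecSettings": {
                        "Codec": "AAC",
                        "AacSettings": {
                            "Bitrate": 160000,
                            "CodingMode": "CODING_MODE_2_0",
                            "SampleRate": 48000,
                        },
                    },
                }],
            }],
        }],
    },
)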
Source: aws.amazon.com