Microsoft Cognitive Services enables developers to augment the next generation of applications with the ability to see, hear, speak, understand, and interpret needs using natural methods of communication. Think about the possibilities: being able to add vision and speech recognition, emotion and sentiment detection, language understanding, and search to applications without any data science expertise.
Content Moderator is part of Cognitive Services, allowing businesses to use machine-assisted moderation of text and images, augmented with human review tools.
What are Content Moderator’s capabilities?
Content Moderator helps you track, flag, and assess potentially offensive and unwanted content on social media websites, chat and messaging platforms, enterprise environments, gaming platforms, and more, and route that content to human review. The following capabilities are included:
Image moderation: The image moderation API enhances your ability to detect potentially offensive or unwanted images through machine-learning based classifiers, custom lists, and optical character recognition (OCR).
Text moderation: The text moderation API helps you detect potential profanity in several languages. You don’t have to restrict matching to the terms included with the service. You can create custom lists with terms specific to your business domain.
Video moderation: The video moderation capability is currently in private preview on Azure Media Services. It enables detection of potential adult content in videos.
Human review tool: When used together with the moderation APIs, the human review tool allows you to implement human-in-the-loop processes while benefiting from the cost and speed efficiencies of machine learning.
How Lightspeed Systems powered their online safety tool for schools
Lightspeed Systems provides software to help schools protect their students from inappropriate and offensive content. The company turned to Microsoft's Content Moderator and found it more effective than alternative technologies at identifying offensive content, as well as easier to integrate with.
“Compared with other solutions, Content Moderator does a better job of categorizing images and assessing whether they contain adult content…I consider it a tool I can really rely on,” says Rob McCarthy, Founder of Lightspeed Systems.
To learn more, please take a look at the Lightspeed Systems case study.
Getting started
We provide several tutorials to help you get started quickly with Content Moderator.
How to start with the human review tool
Follow the steps in the review tool quick-start to check out the automated moderation and human-in-the-loop capabilities without writing a single line of code. The review tool internally calls the automated moderation APIs and presents the items for review right within your web browser. You can invite other users to review content, track pending invites, and assign permissions to your team members.
How to leverage the automated moderation APIs
If you sign up for the review tool, you will find your free tier key in the Credentials tab under Settings.
Use your API key and follow the quick start steps outlined in the Image API and the Text API sections. Use the review API to auto-moderate content in bulk and review the tagged images or text within the review tool. Provide your API callback endpoint so that you get notified when the reviewers submit their decisions. This feature allows you to automate the post-review workflow by integrating with your own systems.
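As a quick illustration of what a direct API call looks like, here is a minimal C# sketch of the text moderation (Screen) operation. The westus endpoint, the sample text, and the YOUR_API_KEY placeholder are assumptions; substitute your own region and subscription key.
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
class TextModerationSample
{
    // Assumptions: replace with your own subscription key and region-specific endpoint.
    const string SubscriptionKey = "YOUR_API_KEY";
    const string TextScreenUri =
        "https://westus.api.cognitive.microsoft.com/contentmoderator/moderate/v1.0/ProcessText/Screen";
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", SubscriptionKey);
            // The Screen operation accepts plain text and returns detected profanity terms
            // and personal data as JSON.
            var content = new StringContent("Sample text to screen, contact abcdef@abcd.com", Encoding.UTF8, "text/plain");
            HttpResponseMessage response = await client.PostAsync(TextScreenUri, content);
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}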
Content moderation in an E-commerce scenario
Let’s say I’ve got an E-commerce site and need to learn how to use the Content Moderator platform along with additional Cognitive Services such as the Computer Vision and Custom Vision services.
The purpose would be to combine machine-assisted classification with human review capabilities to classify E-commerce catalog images.
I would like to use machine-assisted technologies to classify and moderate product images in these categories:
Adult (Nudity)
Racy (Suggestive)
Celebrities
US Flags
Toys
Pens
Overall, I’ll need to do the following:
A. Sign up and create a Content Moderator team.
B. Configure moderation tags (labels) for potential celebrity and flag content.
C. Use Content Moderator's image API to scan for potential adult and racy content.
We can also go further: use the Computer Vision API to scan for potential celebrities, leverage the Custom Vision service to scan for the possible presence of flags, and present the nuanced scan results for human review and final decision making.
A. First, let’s create a team
I can either sign up with my Microsoft account or create an account on the Content Moderator website.
Let’s navigate to the Content Moderator sign-up page.
Click Sign Up
After signing up, I’ll see a "Create Team" screen where I need to give my team a name. If I want to invite colleagues, I can do so by entering their email addresses.
More information on signing up for Content Moderator and creating a team can be found on the Quickstart page. Note the Team ID from the Credentials page.
B. Let’s define custom tags
Please refer to the Tags article to add custom tags. In addition to the built-in adult and racy tags, custom tags allow the review tool to display descriptive names for your own categories. In our case, we define these custom tags: celebrity, flag, us, toy, and pen.
Now, let’s list my API keys and endpoints:
The tutorial uses three APIs and their corresponding keys and endpoints.
The endpoints differ based on your subscription regions and the Content Moderator review team ID.
Keep in mind that this walkthrough assumes subscription keys from the regions shown in the following endpoints, so be sure to match each API key with its region URL; otherwise, the keys may not work with these endpoints:
// Your API keys
public const string ContentModeratorKey = "XXXXXXXXXXXXXXXXXXXX";
public const string ComputerVisionKey = "XXXXXXXXXXXXXXXXXXXX";
public const string CustomVisionKey = "XXXXXXXXXXXXXXXXXXXX";
// Your endpoint URLs will look different based on your region and Content Moderator Team ID.
public const string ImageUri = "https://westus.api.cognitive.microsoft.com/contentmoderator/moderate/v1.0/ProcessImage/Evaluate";
public const string ReviewUri = "https://westus.api.cognitive.microsoft.com/contentmoderator/review/v1.0/teams/YOURTEAMID/reviews";
public const string ComputerVisionUri = "https://westcentralus.api.cognitive.microsoft.com/vision/v1.0";
public const string CustomVisionUri = "https://southcentralus.api.cognitive.microsoft.com/customvision/v1.0/Prediction/XXXXXXXXXXXXXXXXXXXX/url";
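The walkthrough below also relies on a small CallAPI helper that the full tutorial defines. The following is only an assumption about what that helper might look like, inferred from how it is called in step C; it is not the tutorial's official implementation. It requires using System.Net.Http; and using System.Text;.
public enum CallType { GET, POST }
// Hypothetical helper mirroring the call sites below: sends the request with the
// given subscription-key header and returns the raw HTTP response.
private static readonly HttpClient Client = new HttpClient();
public static HttpResponseMessage CallAPI(string uri, string key, CallType type,
    string keyHeader, string contentType, string query, string body)
{
    var request = new HttpRequestMessage(
        type == CallType.POST ? HttpMethod.Post : HttpMethod.Get, uri + query);
    request.Headers.Add(keyHeader, key);
    if (type == CallType.POST)
    {
        request.Content = new StringContent(body, Encoding.UTF8, contentType);
    }
    // Blocking for simplicity, matching the synchronous style of this walkthrough.
    return Client.SendAsync(request).Result;
}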
C. Scan for adult and racy content
The following function takes an image URL and an array of key-value pairs as parameters.
It calls Content Moderator's Image API to get the Adult and Racy scores (each in the range 0 to 1).
If the adult score is greater than 0.4, or the racy score is greater than 0.3, it sets the corresponding value in the ReviewTags array to true.
The ReviewTags array is used to highlight the corresponding tag in the review tool.
public static bool EvaluateAdultRacy(string ImageUrl, ref KeyValuePair[] ReviewTags)
{
    float AdultScore = 0;
    float RacyScore = 0;

    // Build the request body containing the URL of the image to evaluate.
    string Body = $"{{\"DataRepresentation\":\"URL\",\"Value\":\"{ImageUrl}\"}}";

    HttpResponseMessage response = CallAPI(ImageUri, ContentModeratorKey, CallType.POST,
        "Ocp-Apim-Subscription-Key", "application/json", "", Body);

    if (response.IsSuccessStatusCode)
    {
        // Parse the adult and racy classification scores from the response body. Blocking!
        GetAdultRacyScores(response.Content.ReadAsStringAsync().Result, out AdultScore, out RacyScore);
    }

    // Tag "a" (adult): highlight it in the review tool when the adult score exceeds 0.4.
    ReviewTags[0] = new KeyValuePair();
    ReviewTags[0].Key = "a";
    ReviewTags[0].Value = "false";
    if (AdultScore > 0.4)
    {
        ReviewTags[0].Value = "true";
    }

    // Tag "r" (racy): highlight it in the review tool when the racy score exceeds 0.3.
    ReviewTags[1] = new KeyValuePair();
    ReviewTags[1].Key = "r";
    ReviewTags[1].Value = "false";
    if (RacyScore > 0.3)
    {
        ReviewTags[1].Value = "true";
    }

    return response.IsSuccessStatusCode;
}
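Here is a small usage sketch. The image URL is a hypothetical placeholder, and KeyValuePair refers to the simple key/value class used throughout the tutorial, not System.Collections.Generic.KeyValuePair.
// Hypothetical example: evaluate one catalog image and inspect the resulting tags.
KeyValuePair[] reviewTags = new KeyValuePair[2];
bool succeeded = EvaluateAdultRacy("https://example.com/images/product-123.jpg", ref reviewTags);
if (succeeded)
{
    // "a" = adult, "r" = racy; a value of "true" highlights that tag in the review tool.
    foreach (var tag in reviewTags)
    {
        Console.WriteLine($"{tag.Key}: {tag.Value}");
    }
}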
From here, I can also scan for celebrities, classify images into flags, toys, and pens, create reviews for human-in-the-loop decisions, and submit a batch of images to initiate all scans.
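For example, the celebrity check could reuse the same pattern with the Computer Vision API's analyze operation and its Celebrities domain model. The following is a minimal sketch only: the tag index and the "c" key for the celebrity tag are assumptions about how the custom tags were defined, and the full tutorial parses the response properly rather than with a string search.
// Minimal sketch: ask Computer Vision to analyze an image with the Celebrities domain model.
// A successful response includes a categories[].detail.celebrities array when a match is found.
public static bool ScanForCelebrity(string ImageUrl, ref KeyValuePair[] ReviewTags)
{
    string Body = $"{{\"url\":\"{ImageUrl}\"}}";
    HttpResponseMessage response = CallAPI(
        ComputerVisionUri + "/analyze?visualFeatures=Categories&details=Celebrities",
        ComputerVisionKey, CallType.POST, "Ocp-Apim-Subscription-Key", "application/json", "", Body);
    if (!response.IsSuccessStatusCode)
    {
        return false;
    }
    string json = response.Content.ReadAsStringAsync().Result;  // Blocking, as in the walkthrough.
    bool isCelebrity = json.Contains("\"celebrities\":[{");     // Crude check for illustration only.
    // Assumption: index 2 / key "c" is the celebrity tag defined in the review tool,
    // and the ReviewTags array has room for it.
    ReviewTags[2] = new KeyValuePair();
    ReviewTags[2].Key = "c";
    ReviewTags[2].Value = isCelebrity ? "true" : "false";
    return true;
}
The flag, toy, and pen checks would follow the same pattern against the Custom Vision prediction endpoint defined above, which uses a Prediction-Key header instead of Ocp-Apim-Subscription-Key.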
All of these additional steps are covered in the full tutorial.
Feel free to take a look at our additional Content Moderator scenario with a sample Facebook page, in which the solution either takes down or allows the publishing of images and text posted by visitors to the Facebook page.
Happy coding!
Sanjeev Jagtap
Senior Product Manager – Content Moderator
Microsoft Cognitive Services Team
Source: Azure