Azure Media Redactor is a cloud video processing service that automatically detects and blurs faces in your videos, for use cases such as public safety and news media. Based on artificial intelligence technology developed in-house, Redactor can be used in both automated and semi-manual ways to improve the efficiency of workflows that involve labor-intensive manual video editing.
In our previous blog post we discussed the preview release of Azure Media Redactor and the various ways you can use it. This release includes a couple of changes based on your feedback during the preview process, and updates the feature to include full SLA support. You can view updated pricing for this feature here.
Updates in this release include the following:
Greatly improved processing speed
Better face detection and tracking
Stickier face ID association
Multiple blur modes
View our full documentation page for details on using all these features.
See our pricing page for updated GA pricing for Azure Media Redactor.
Improved performance
Processing speed varies considerably with video resolution, framerate, and the number of faces in the video. Expect a 720p 30fps video to take between 1x and 2x real time to complete processing.
Another large improvement is in face grouping: the same face that appears at multiple points in the video will be given the same ID. Previously, the same face could easily be assigned multiple IDs as it appeared throughout a video, which made selectively blurring individual faces much harder.
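Stable face IDs are what make selective blurring practical: you run the analyze pass, then tell the redact pass which IDs to blur. As a rough illustration only (the variable names and the ID-list JSON shape below are assumptions for this sketch; see the documentation for the exact format the service expects), the selection step amounts to filtering the detected IDs:

```python
import json

# IDs reported by the analyze pass for each grouped face (assumed values).
detected_ids = [1, 2, 3, 4]

# Faces that should stay visible, e.g. a consenting interviewee.
keep_visible = {2}

# Everything not explicitly kept gets blurred in the redact pass.
ids_to_blur = [face_id for face_id in detected_ids if face_id not in keep_visible]

# Hypothetical ID-list payload; the real file format is in the docs.
print(json.dumps({"version": "1.0", "ids": ids_to_blur}))
```

Because IDs now stay consistent across the whole video, one entry in the keep/blur list covers every appearance of that person.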
Accuracy of face detection has also been slightly improved from the previous version.
Blurring changes
We now offer five blurring modes, which you can choose via the JSON configuration preset. By default, 'Med' is used.
Example JSON:
{"version": "1.0", "options": {"Mode": "Combined", "BlurType": "High"}}
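If you generate the preset programmatically, the standard library's json module is enough. A minimal sketch (the helper function name is ours; the keys and values match the example above, and "Combined" runs analysis and redaction in a single pass):

```python
import json

def build_preset(mode="Combined", blur_type="Med"):
    # Mode and BlurType are case-sensitive string values in the preset.
    # "Med" is the service's default blur strength.
    return json.dumps({
        "version": "1.0",
        "options": {"Mode": mode, "BlurType": blur_type},
    })

print(build_preset(blur_type="High"))
```

Building the preset this way avoids hand-editing quotes and guarantees valid JSON before you submit the job.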
[Example frames illustrating the Low, Med, High, and Debug blur settings]
Source: Azure