Living in the golden age of video is a blessing and a curse.
There’s more video content than ever to choose from, yet we can spend more time deciding what to watch than actually watching. It’s a frustrating experience for consumers, and it’s especially worrisome for video content providers.
As a result, media and entertainment companies are shifting their focus from merely providing customers with more video to serving up just the right content at the right time. While many companies already account for elements like genre or cast, machine learning technologies like IBM Watson make far richer analysis possible by structuring previously unstructured data, such as the objects and faces that appear in a particular scene. Armed with this new layer of insight, media and entertainment companies can identify ways to serve up the most relevant content to viewers, increasing engagement and reducing churn.
Finding hidden gems
When Watson watches a video, it applies tools such as facial recognition, audio recognition, speech-to-text and tone analytics. The metadata these tools generate gives companies a more specific, accurate understanding of both their video content and what customers truly want to watch.
For example, Watson could identify that users enjoy sports movies with rousing motivational speeches. Companies can let viewers search the video catalogue by these attributes, or use them to highlight gaps in the catalogue itself. The next time, say, a soccer team gets together for movie night before a playoff game, Watson could recommend a movie with a moving locker-room speech.
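As a rough sketch of what this kind of metadata extraction can look like in practice, the snippet below runs a video’s audio track through the Watson Speech to Text and Tone Analyzer services using the ibm-watson Python SDK. The API key, service URLs and file name are placeholders, and the pipeline is a minimal illustration rather than Watson Media’s actual indexing stack.

```python
# Sketch: tag a video's audio track with Watson Speech to Text + Tone Analyzer.
# The API key, service URLs and file name below are placeholders.
from ibm_watson import SpeechToTextV1, ToneAnalyzerV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator('YOUR_API_KEY')

stt = SpeechToTextV1(authenticator=authenticator)
stt.set_service_url('https://api.us-south.speech-to-text.watson.cloud.ibm.com')

# Transcribe the dialogue (audio assumed to be extracted from the video beforehand).
with open('locker_room_scene.wav', 'rb') as audio:
    results = stt.recognize(audio=audio, content_type='audio/wav').get_result()
transcript = ' '.join(
    chunk['alternatives'][0]['transcript'] for chunk in results['results']
)

tone_analyzer = ToneAnalyzerV3(version='2017-09-21', authenticator=authenticator)
tone_analyzer.set_service_url('https://api.us-south.tone-analyzer.watson.cloud.ibm.com')

# Detect tones such as 'joy' or 'confident' in the transcribed speech.
tones = tone_analyzer.tone(
    tone_input={'text': transcript}, content_type='application/json'
).get_result()

for tone in tones['document_tone']['tones']:
    print(tone['tone_id'], round(tone['score'], 2))
```

A scene whose transcript scores high on a confident, joyful tone alongside sports-related visual tags is exactly the kind of “rousing motivational speech” signal described above.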
Next-gen recommendations
Watson can also find patterns in the way people interact with video content, from the selections they make to how often they fast-forward. Insight into viewers’ watching habits can help companies make personalized recommendations that will keep them engaged and coming back for more. It could even find commonalities between the romantic comedies and action movies a viewer enjoys to serve up a surprising, though spot-on, recommendation.
By generating deep, conceptual metadata on what’s happening in specific videos, Watson may one day be able to make recommendations based on everything from the local weather to what a person’s recent tweets suggest about their mood.
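At its simplest, a metadata-driven recommendation of this kind can be sketched in a few lines: build a profile from the tags of titles a viewer has watched, then rank unwatched titles by overlap. The four-title catalogue and tags below are invented for illustration; this is the idea in miniature, not Watson’s recommendation engine.

```python
from collections import Counter

# Hypothetical per-title tags of the kind Watson could generate.
catalogue = {
    'Remember the Titans': {'sports', 'motivational-speech', 'drama'},
    'Miracle':             {'sports', 'motivational-speech', 'hockey'},
    'Die Hard':            {'action', 'heist', 'holiday'},
    'Notting Hill':        {'romantic-comedy', 'london'},
}

def recommend(history, top_n=3):
    """Rank unwatched titles by how many of their tags the viewer has already enjoyed."""
    profile = Counter(tag for title in history for tag in catalogue[title])
    scores = {
        title: sum(profile[tag] for tag in tags)
        for title, tags in catalogue.items() if title not in history
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend({'Remember the Titans'}))  # 'Miracle' ranks first
```

The same overlap scoring is what lets a system surface the shared traits between a viewer’s romantic comedies and action movies: whichever tags the two genres have in common dominate the profile.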
Auto-generated, instant highlight reels
Watson’s ability to index, categorize and clip video content has already been put to use: it helped assemble the trailer for the 2016 horror film Morgan. Watson’s AI-enabled clipping capabilities could also soon help broadcasters that stream, rather than create, video content.
An example that’s currently in development is Watson’s ability to “watch” live sports and automatically clip highlights. While watching Sunday Night Football, for example, Watson could clip a wide receiver’s spectacular catch and instantly post the highlight to social media.
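Once an analysis pass has flagged where a highlight starts and ends, the clipping step itself is straightforward. Below is a minimal sketch assuming ffmpeg is installed and the timestamps come from an upstream detection step; the file names and times are invented.

```python
import subprocess

def clip_highlight(src, start, duration, out):
    """Cut a segment from a recorded stream with ffmpeg (must be on PATH).

    Stream copy (-c copy) skips re-encoding, so the clip is produced almost
    instantly, at the cost of snapping the start to the nearest keyframe.
    """
    subprocess.run(
        ['ffmpeg', '-ss', start, '-i', src, '-t', duration, '-c', 'copy', out],
        check=True,
    )

# Hypothetical timestamps, e.g. flagged by crowd-noise and play-by-play analysis.
clip_highlight('snf_broadcast.ts', '01:23:45', '00:00:20', 'spectacular_catch.mp4')
```

The hard part, of course, is the detection: deciding that a spectacular catch just happened is where the AI lives, and the mechanical cut-and-post step is what makes the highlight “instant.”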
Improving the all-important engagement metric
This ability to catalogue, organize and distribute video is essential to today’s video products. U.S. adults already spend 5.5 hours per day watching video programming, and research suggests that all video formats — TV, video on demand (VoD), and internet — will represent 80 to 90 percent of global consumer internet traffic in 2019.
It’s no longer enough to give people more video content, or even higher-quality content. It’s all about engagement. Even the most expansive and diverse video library on the planet is useless unless people can quickly find the content they’re interested in.
Learn how IBM Cloud optimizes content for media and entertainment.