As pay-TV operators and service providers look to boost their monetisation, targeted TV advertising is gaining significant traction. Even Netflix has surrendered to the trend, joining the other streaming giants in the AVOD world by launching its own advertising tier.
With recessions taking their toll and subscription income flatlining, scouting out new monetisation options to boost balance sheets has never been more critical for the media and entertainment industry.
The two basic premises of broadcasting haven’t changed for the last 100 years or so. One, give viewers compelling programming and they’ll watch. Two, effective monetisation is critical for supporting all that programming.
Most of us are social creatures, and even when apart, we strive to be together. The bond we create through shared activity, such as watching movies together, runs deeper than one might think. Research published in a scientific journal, in which chimpanzees watched a video in pairs, showed that they became more sociable with each other after the viewing session ended. This suggests that the bond we form through activities such as watching video content together has deep evolutionary roots.
Researchers in UT-Austin's Laboratory for Image and Video Engineering (UT-LIVE), directed by Professor Al Bovik, have pioneered the use of visual neuroscience to create picture and video quality measurement and monitoring tools that control the quality and bandwidth of a large percentage of all streaming video, television, and social media. Their breakthrough inventions include the iconic Structural Similarity (SSIM), Multi-Scale SSIM (MS-SSIM), and Visual Information Fidelity (VIF) "reference" visual quality tools, which delivered dramatic leaps in performance when introduced and remain dominant today; they are used to control the quality of most streaming and social media pictures and videos in the US and beyond. Bovik and his team also disrupted the field by inventing the first accurate and practical "blind" visual quality models (BRISQUE and NIQE), built on neuro-statistical models of the distance between distorted and distortion-free visual signals. These tools are also marketed globally and used in numerous industry applications, including inspection of streaming and social video uploads, camera control, and remote video transcoding in the cloud.
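To give a feel for the full-reference idea behind SSIM, here is a minimal sketch that computes a simplified, single-window SSIM in NumPy. Production implementations (as in the SSIM paper and common libraries) average a locally windowed version over the image; the stabilizing constants below follow the commonly used 0.01 and 0.03 factors, and the synthetic test images are purely illustrative.

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Simplified SSIM computed over the whole image as one window.
    Real implementations slide a local window and average the map."""
    c1 = (0.01 * data_range) ** 2  # stabilizer for the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizer for the contrast/structure term
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2)
    )

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, (64, 64))
noisy = np.clip(img + rng.normal(0, 25, img.shape), 0, 255)

print(global_ssim(img, img))    # identical images score exactly 1.0
print(global_ssim(img, noisy))  # added noise pushes the score below 1.0
```

Unlike PSNR, which only measures pixel-wise error, this form compares luminance, contrast, and structure jointly, which is what makes SSIM track perceived quality more closely.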