The media industry has evolved over the past century, from early inventions to disruptive communication technologies. In the early 1900s, radio was the crucial link to information; by the mid-1900s, television had become the most potent medium for news and entertainment. The late 20th century introduced the internet, ushering service and media providers into a new era of connectivity, with websites and social media platforms flooding the market and offering more choice than ever before. In the 21st century, smartphones are standard, and consumers expect access to content anytime, anywhere, on any device. Streaming services such as Netflix and Hulu have disrupted the traditional television model, while social media platforms such as Facebook, Twitter, and YouTube have become primary sources of news and entertainment.
Red Hat began about 30 years ago by providing software to run on Linux. As the largest open-source company in the world, we believe the open source development model helps create more stable, secure, and innovative technologies. Our portfolio has since broadened to include hybrid cloud infrastructure, middleware, agile integration, cloud-native development, and management and automation solutions for service providers.
Jeff Bezos once compared Amazon’s approach to customer experience to hosting a party 24/7. “We see our customers as invited guests to a party, and we are the hosts. It’s our job, every day, to make every important aspect of the customer experience a little bit better.” Bezos made those comments back in 2004, but they could just as easily describe the challenges facing broadcast media today as brands look for growth in the OTT market.
Broadcasters and media companies are implementing technologies powered by Artificial Intelligence (AI) and Machine Learning (ML) across the value chain. We see countless use cases for AI-based automation or support, and new opportunities keep emerging. So far, the focus has been on the usefulness of AI systems in terms of accuracy and performance on specific tasks. This is now changing with the wider uptake of AI, new capabilities in ML, and public debate about the technology.
FAST. AVOD. SVOD. MVPD. vMVPD. OTA. These services represent the options available to content owners and aggregators for delivering entertainment, sports, and news content from centralized hubs to individual consumers. Their goal is simple: to expose and monetize their content libraries to as many consumers as possible. That doesn’t mean, however, that the consumer is top of mind when it comes to facilitating the journey to their content of choice.
Few industries are as fast-paced and highly pressurised as the media industry. What was already a competitive field has become even more so, as demand for content has increased in line with the explosion of OTT services. To manage this high throughput, content supply chains have become more complex, with multiple teams contributing to content preparation.
Linear TV isn’t dead, but the internet has changed it forever. At a time of global downturn, an increasing number of content owners are pinning their growth hopes on ad-funded live or VOD-to-live FAST channels. But drawing eyeballs and maximising ad revenues in this increasingly crowded marketplace will rely on more than just an EPG and good promos.
Researchers in UT-Austin’s Laboratory for Image and Video Engineering (UT-LIVE), directed by Professor Al Bovik, have pioneered the use of visual neuroscience to create picture and video quality measurement and monitoring tools that control the quality and bandwidth of a large percentage of all streaming video, television, and social media. Their breakthrough inventions include the iconic Structural Similarity (SSIM), Multi-scale SSIM (MS-SSIM), and Visual Information Fidelity (VIF) “reference” visual quality tools, which delivered dramatic leaps in performance when introduced and remain dominant today, controlling the quality of most streaming and social media pictures and videos in the US and beyond. Bovik and his team also disrupted the field by inventing the first accurate and practical “blind” visual quality models (BRISQUE and NIQE), which measure neuro-statistical distances between distorted and distortion-free visual signals. These tools are marketed globally and used in numerous industry applications, including inspection of streaming and social video uploads, camera control, and remote video transcoding in the cloud.
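To give a flavor of how SSIM works, the index compares two images through their luminance, contrast, and structural (covariance) statistics. Below is a minimal sketch in pure Python; note that the published metric averages this statistic over local sliding windows, whereas this simplified version computes a single global score over whole images, using the common K1 = 0.01, K2 = 0.03 stabilizing constants:

```python
from statistics import fmean

def global_ssim(x, y, data_range=255.0):
    """Simplified global SSIM between two equal-length pixel sequences.

    A single-window sketch: the full SSIM index computes this statistic
    over local sliding windows and averages the resulting map.
    """
    c1 = (0.01 * data_range) ** 2  # stabilizes the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizes the contrast/structure term

    mu_x, mu_y = fmean(x), fmean(y)
    var_x = fmean((px - mu_x) ** 2 for px in x)
    var_y = fmean((py - mu_y) ** 2 for py in y)
    cov_xy = fmean((px - mu_x) * (py - mu_y) for px, py in zip(x, y))

    numerator = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    denominator = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return numerator / denominator
```

Comparing an image with itself yields 1.0 (perfect similarity), and any distortion such as added noise pulls the score below 1, which is what makes the metric useful for monitoring quality across an encoding or delivery chain.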
With growing consumer demand for content across an increasing array of platforms, territories and languages, suppliers are creating and localizing massive amounts of new content and resurfacing existing libraries. This immense content volume requires high-quality metadata for accurate and compelling content description to power search, discovery and recommendation.