The initial euphoria over AI/ML seems to have died down in the M&E industry. Our research shows that many M&E players have run AI initiatives with different vendors but have not achieved anything substantial enough to solve their business problems. Though the demos were impressive, the projects hit a wall at the Proof of Concept (PoC) stage because the AI solution did not work for their content. After repeating this cycle with multiple vendors, they concluded that AI models are not available or mature enough to solve specific M&E business challenges.
One year on from winning the Dragons’ Den session at the IABM Annual International Business Conference in 2019, we followed up with Geert Vos, Founder and CTO at Media Distillery, to see how things are going. Its winning proposition was a solution that delivers deep content understanding in real time and at large scale using AI.
In this IABM TV interview, Pieter-Jan Speelmans (CTO, THEO Technologies) discusses the biggest challenges customers are facing, how the industry is currently moving, and the biggest innovations the THEO team is working on.
Media & Entertainment (M&E) organizations have for some time been experimenting with AI to solve business challenges, with limited success. Some use cases demand near-100%-accurate data, which off-the-shelf AI solutions have failed to deliver. Add to this data that is not actionable, and the problems of effort, time, and cost overruns are compounded.
While the impact of ML and AI have been discussed and debated for years, practical applications are fast accelerating across the media supply chain. The pace of innovation is moving quickly and with the cloud wars in full force, there are new services becoming available all the time that offer novel ways to automate tasks with ML and AI. Already, the big three cloud providers — AWS, Azure, and Google — have rolled out powerful capabilities that help with essential tasks including captioning, transcription, and even object/facial recognition to bolster compliance edits and to augment metadata. For media organizations, the implications of these solutions are vast, and we’ve already begun to see their power. With things moving so fast, though, it’s challenging to keep up and important to have the right architecture and structure in place to take advantage of these innovations.
IABM Media Tech Trends reports annually track the adoption of specific emerging technologies within the broadcast and media sector. The purpose of these reports is to enable member companies to better understand the drivers of emerging technologies’ adoption within customer organizations. This should give member companies more tools to address the challenges lying ahead, from new product development to marketing strategy. These reports contain a discussion of the state of adoption of the emerging technology in broadcast and media, as well as an analysis of significant customer deployments.
Artificial Intelligence (AI) and Machine Learning (ML) are technologies that enterprises across industries have been keenly experimenting with to explore the utility they can bring. Is there AI adoption within the M&E industry? Enterprises are seeking automation, and can AI be the solution? Have we cracked the AI code, or do we have miles to go? If automation is a goal, it should be a priority even in the current climate.
axle ai is showing how its radically simple browser interface lets your distributed teams search and manage Qumulo scale-out volumes with real-time updates.
Human Decision Making: When Do We Use Confidence Levels?
Confidence levels can be used anytime one is estimating or predicting something. Examples include: business, engineering, medicine, technology…or just day-to-day life.
As humans, we use confidence levels regularly. Whether you dodge an aisle at the grocery store because you thought you saw your chatty neighbor, or use evidence and intuition to judge a suspected criminal during jury duty, your mind is in a constant state of perceiving its surroundings. It makes decisions based on those perceptions via an inherent estimate of confidence.
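Machine learning systems make decisions the same way. The following sketch (a hypothetical example, not any vendor's API; the labels, scores, and threshold are made-up assumptions) shows the basic pattern: act on a prediction only when its confidence clears a threshold, and defer to a human otherwise.

```python
# Hypothetical sketch: acting on a model's confidence score.
# Assumes a classifier that emits (label, confidence) pairs.

def decide(label: str, confidence: float, threshold: float = 0.9):
    """Accept a prediction only when its confidence clears the
    threshold; otherwise route it for human review."""
    if confidence >= threshold:
        return ("accept", label)
    return ("review", label)

# High confidence: act on the perception, like dodging the aisle.
print(decide("chatty_neighbor", 0.97))
# Low confidence: defer the call, like a juror wanting more evidence.
print(decide("chatty_neighbor", 0.55))
```

Tuning the threshold is the machine analogue of how cautious a human chooses to be: raise it for use cases that demand near-100% accuracy, lower it where an occasional mistake is cheap.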
The Multimodal Approach: Explained
“Our intuition tells us that our senses are separate streams of information. We see with our eyes, hear with our ears, feel with our skin, smell with our nose, taste with our tongue. In actuality, though, the brain uses the imperfect information from each sense to generate a virtual reality that we call consciousness. It’s our brain’s best guess as to what’s out there in the world. But that best guess isn’t always right.” – Dr. David Ludden Ph.D.
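The multimodal idea above can be sketched in code. In this illustrative example (not any vendor's API; the modality names, scores, and weights are made-up assumptions), imperfect confidence scores from several "senses" are fused into one best guess via a weighted average, a simple form of late fusion.

```python
# Illustrative sketch of late fusion: combine imperfect per-modality
# confidence scores into a single estimate, much as the brain merges
# its senses into one "best guess" about the world.

def fuse(scores: dict, weights: dict) -> float:
    """Weighted average of confidence scores across modalities."""
    total_weight = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total_weight

# Hypothetical per-modality guesses that the same person is on screen.
scores = {"video": 0.80, "audio": 0.60, "text": 0.90}
# Hypothetical trust placed in each modality.
weights = {"video": 0.5, "audio": 0.2, "text": 0.3}

print(round(fuse(scores, weights), 2))  # prints 0.79
```

Because each stream is imperfect, the fused estimate can outperform any single modality, which is the practical appeal of multimodal AI for tasks like content recognition.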