Perifery – Bringing AI innovation to the Edge

IABM Journal

Tue 11 July 2023

Jonathan Morgan, Product and Technology, Perifery

The media industry is experiencing the transformative impact of AI and ML technologies. These innovations have revolutionised various aspects of content creation, distribution, marketing, and monetisation.

AI on entertainment platforms has led to a host of benefits, including data-driven efficiency gains, better personalisation, and more informed programming and content decisions. In media production and post-production, AI has enhanced light ray rendering and can even edit a production according to prescribed user preferences. In sports, AI editing can go as far as cutting whole-game highlight reels. In archives, semantic AI can discover scenes featuring car chases or even romantic moments. There is no longer a debate about whether AI will happen; it is here, and it is here to stay.

Putting the User at the Heart of Decisions

By analysing viewer behaviour and demographics, entertainment platforms can leverage data to inform how they plan original production and content creation. They can tie these insights into their marketing strategies, and target audiences much more accurately. With a better understanding of customer preferences and optimised content delivery platforms, not only does revenue generation improve, but the audience also benefits from a better service.

Tracking viewer analytics and engagement means companies gain even deeper insights into audience behaviour, preferences, and sentiment. Personalised user experiences can focus on individual patterns, with content recommendations, targeted advertising, and tailored experiences. This customisation enhances user satisfaction and inspires loyalty, which in turn reduces platform churn.

The Important Work Behind the Scenes

So, with AI and ML algorithms increasingly being used to produce, curate, and optimise content on platforms, where do storage and asset management fit in? Brand-new, original productions are certainly a big draw for viewers. But there is also a huge selection of big-ticket archive content that can help media companies increase market share in a very competitive landscape. However, the challenge comes when the internal resources needed to edit and process that content outweigh its commercial value.

Content owners could have years’ worth of jaw-dropping nature documentary footage that has never been appropriately tagged with the corresponding metadata. It might be visually stunning, but if editors and post-production teams need to spend hours scrubbing through footage to find specific clips, then it’s not practical to repurpose it. With newer content, the metadata might be assigned, but how useful is it for every use case? If the original content was tagged only with basic information such as the season, the episode, and a handful of keywords, then marketing teams will have a very tough time collating clips. A promotional team’s needs are totally different: they might require a collection of hilarious one-liners from a specific character to promote a new series acquisition, and an AI-generated highlight reel might be needed to fit the brief, as the sketch below illustrates.
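
To make the contrast concrete, here is a minimal sketch of why season-and-episode metadata alone cannot answer a promo team's request, while richer per-clip AI tagging can. The asset index, field names, and tags are hypothetical, not any particular product's data model.

```python
# Hypothetical in-memory asset index: each clip record carries both basic
# broadcast metadata and richer, AI-generated per-clip tags.
from dataclasses import dataclass, field


@dataclass
class Clip:
    asset_id: str
    season: int
    episode: int
    character: str | None = None                    # populated by face/voice recognition
    tags: list[str] = field(default_factory=list)   # populated by AI content tagging


def find_promo_clips(index: list[Clip], character: str, tag: str) -> list[Clip]:
    """Return clips featuring a given character and descriptive tag."""
    return [c for c in index if c.character == character and tag in c.tags]


index = [
    Clip("ep1_0042", season=1, episode=1),  # basic metadata only: invisible to promo queries
    Clip("ep1_0107", season=1, episode=1, character="Rex", tags=["one-liner", "comedy"]),
    Clip("ep2_0033", season=1, episode=2, character="Rex", tags=["car chase"]),
]

# Only the AI-tagged clips can be pulled straight into a highlight reel.
print(find_promo_clips(index, character="Rex", tag="one-liner"))
```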

AI and Automation in Media Workflows

Automated workflows, intelligent content tagging, and AI-powered video editing tools can significantly speed up post-production timelines, optimise resource allocation, and lower costs. Content creation is ultimately a mix of both repetitive and creative tasks. Wherever possible, the vendor ecosystem should be looking to reduce mundane actions and leave media professionals with more time for creative decision-making and collaboration. So, with AI and ML technologies transforming the media industry, is there a downside?

Unfortunately, reliance on the public cloud means that upload times, security concerns, and in-cloud processing fees have had a negative impact, because they limit what work can be done. The principle of “data gravity” argues that it is far less costly and time consuming to bring the application to where the data is than to move large video files around. From a production perspective, much can be gained from processing content closer to the dispersed locations where it is generated, potentially reducing the quantity and increasing the quality of the content that eventually lands in a centralised system or the cloud. In post-production, teams may need to access and collaborate on content, both onsite and remotely, from anywhere in the world.

Moving AI Beyond Cloud Boundaries

Many AI-embedded products are cloud native, which has meant that entertainment providers have been stung by unexpected public cloud costs and astronomical processing and egress fees. Unpredictable expenses and complex DevOps processes mean it’s entirely understandable that some companies are hesitant to use AI or have stopped using it altogether. While AI services have focused on giving content creators and owners the ability to use embedded products in the cloud, edge computing has been underutilised.

Edge computing offers improved security and added efficiencies for media and entertainment use cases. Processing data at the edge means valuable content can be securely stored and accessed locally, reducing the risk of data breaches or unauthorised access. Entertainment providers need streamlined workflows that save them time and reduce bandwidth requirements, and this is where intelligent edge workflows excel. By splitting processing between the cloud and the edge, media teams can take the path of least resistance: workflows happen wherever they make the most sense. Instead of assigning significant financial resources to processing all content in the cloud, media companies can now perform many pre-processing functions at the edge.
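
As a rough illustration of that split, the routing rule below is a minimal sketch, not Perifery's implementation: the thresholds, task names, and helper logic are assumptions. The idea is simply that bandwidth-heavy pre-processing stays at the edge, while tasks that need centralised compute go to the cloud.

```python
# Minimal sketch of an edge-vs-cloud routing decision for media AI tasks.
# Task categories and the size threshold are illustrative assumptions only.

EDGE_FRIENDLY_TASKS = {"proxy_transcode", "scene_detect", "rough_cut", "metadata_extract"}
CLOUD_ONLY_TASKS = {"large_language_summary", "archive_reindex"}


def route_task(task: str, file_size_gb: float, edge_has_gpu: bool) -> str:
    """Decide where a task should run: at the edge or in the cloud."""
    if task in CLOUD_ONLY_TASKS:
        return "cloud"
    if task in EDGE_FRIENDLY_TASKS:
        # Moving a very large camera file to the cloud costs more than processing it locally.
        if file_size_gb > 50 or edge_has_gpu:
            return "edge"
    return "cloud"


print(route_task("proxy_transcode", file_size_gb=120, edge_has_gpu=False))  # -> edge
print(route_task("archive_reindex", file_size_gb=5, edge_has_gpu=True))     # -> cloud
```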

AI and the Edge – The Best of Both Worlds

The applications for AI-enabled functions in M&E are extensive. AI can analyse, categorise, and tag files based on their content, metadata, and context. This ensures faster, more efficient retrieval and better management of large volumes of media. AI-powered algorithms can quickly identify objects, scenes, faces, and text within media files, facilitating accurate content indexing. With automated descriptive metadata generation for media files, AI algorithms can extract relevant information such as object identification, scene descriptions, locations, and timestamps.
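
A minimal sketch of what such a descriptive metadata record might look like follows. The `detect_objects` and `describe_scene` calls are placeholders standing in for whichever vision models a team actually deploys; they are assumptions, not a specific product API.

```python
# Sketch of automated descriptive metadata generation for a media file.
# detect_objects() and describe_scene() are placeholders for real vision models.
import json
from datetime import datetime, timezone


def detect_objects(frame_path: str) -> list[str]:
    # Placeholder: a real implementation would run an object-detection model on the frame.
    return ["car", "person", "street sign"]


def describe_scene(frame_path: str) -> str:
    # Placeholder: a real implementation would run an image-captioning model.
    return "Night-time car chase through a city street"


def build_metadata(asset_id: str, frame_path: str, location: str) -> dict:
    """Assemble a descriptive metadata record for indexing and search."""
    return {
        "asset_id": asset_id,
        "objects": detect_objects(frame_path),
        "scene_description": describe_scene(frame_path),
        "location": location,
        "indexed_at": datetime.now(timezone.utc).isoformat(),
    }


print(json.dumps(build_metadata("ep2_0033", "frames/ep2_0033_0001.jpg", "Los Angeles"), indent=2))
```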

By handling AI at the edge, we can reduce the load on central or cloud storage by optimising file formats, performing rough editing, and extracting information that is required immediately. AIOps can further enhance IT workflows by allowing natural language to be used alongside AI-enhanced predictive data movements. Meanwhile, cloud AI can be used to run algorithms that have not yet moved out to the edge, or on data that has already left the edge devices without pre-processing, such as an historical archive.
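
One concrete form of that edge pre-processing is generating a lightweight proxy before anything leaves the site. The sketch below assumes ffmpeg is installed on the edge machine and uses an illustrative upload helper; it is one way the pattern can look, not a Perifery workflow.

```python
# Sketch: create a low-bitrate proxy at the edge, then ship only the proxy to
# central/cloud storage. Assumes ffmpeg is on the PATH; upload_to_cloud() is an
# illustrative stand-in for whatever transfer tool or object-storage client is used.
import subprocess
from pathlib import Path


def make_proxy(source: Path, proxy_dir: Path) -> Path:
    """Transcode a camera original into a small H.264 proxy for review and AI analysis."""
    proxy_dir.mkdir(parents=True, exist_ok=True)
    proxy = proxy_dir / (source.stem + "_proxy.mp4")
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", str(source),
            "-vf", "scale=-2:540",        # downscale to 540p, preserving aspect ratio
            "-c:v", "libx264", "-crf", "28",
            "-c:a", "aac", "-b:a", "96k",
            str(proxy),
        ],
        check=True,
    )
    return proxy


def upload_to_cloud(path: Path) -> None:
    # Placeholder: replace with the object-storage client of your choice.
    print(f"uploading {path} ({path.stat().st_size / 1e6:.1f} MB)")


proxy = make_proxy(Path("/media/card01/A003C012.mxf"), Path("/media/proxies"))
upload_to_cloud(proxy)   # the full-resolution original stays at the edge for now
```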

A Perfect Partnership

Edge-based media content production is the ideal partner for AI. It allows easy execution of AI functions and ensures predictable costs. The edge also enables more efficient resource utilisation, particularly in scenarios with fluctuating workloads, which is a key feature of the M&E industry.

Edge computing delivers both scalability and cost efficiency. By distributing computing resources across the network edge, content owners can dynamically scale their infrastructure based on requirements. By splitting processes between the cloud and the edge, media companies can leverage fast AI-enabled pre-processing, both onsite and remote. The latest developments in AI are hugely impressive, but they need the right infrastructure environment to perform optimally and really streamline workflows. After all, an AI house can only be as strong as its foundations.
