Three Media

Three Media’s XEN:Pipeline is a new-generation Business Content Management System that will drive transformational change across the content supply chain.

Sitting at the heart of operations, its automated workflows and business processes are designed to discover, transform, curate and manage your metadata and content throughout its lifecycle, from concept through to distribution.

Dejero

At Dejero we know just how critical it is to stay connected when it comes to live video and real-time data. We’ve made it our mission to provide reliable connectivity in any scenario: from the top of a mountain, a crowded festival or a fan-filled sports venue, to the eye of a storm, a moving vehicle, or even home during a pandemic.

Our solutions are designed to simplify the workflows of news, sports and live event broadcasts and to deliver fast, cost-effective deployment, not only for remote broadcasting but also for business continuity in disaster recovery situations. Our patented Smart Blending Technology aggregates diverse connectivity paths including cellular, satellite and broadband into a virtual ‘network of networks’ using cloud-based technology, to transfer high-quality live video and real-time data over IP from virtually anywhere. This technology delivers enhanced reliability, expanded coverage, and greater bandwidth.

PlayBox Technology UK Ltd

PlayBox Technology UK Ltd is an international communication and information technology company serving the broadcast and corporate sectors. The company is dedicated to developing and providing best-in-class products, systems, multimedia streaming software, TV broadcast automation software, equipment and services. Users include national and international broadcasters, start-up TV channels, webcasters/OTT channels, DVB/ATSC channels, interactive TV and music channels, film channels, remote TV channels, and disaster recovery channels.

The Switch

In the action-packed world of live video production and delivery, The Switch is always on and always there – setting the industry benchmark for quality, reliability and unmatched levels of service.

From stadium to studio to screen, The Switch offers a unique mix of senior broadcast production experience and network reach. Its comprehensive production services combine mobile, remote and cloud capabilities. Backed by an Emmy Award-winning team, The Switch enables its customers to deliver seamless live event coverage. Its delivery network connects production facilities and 800+ of the world’s largest content producers, distributors, and sports and event venues to rights holders, broadcasters, streaming platforms and media outlets.

The Switch’s remote production services support live event productions in locations around the globe, removing the need for large amounts of equipment and production crews on-site. Its cloud-based production-as-a-service offering, MIMiC, handles all aspects of the production workflow in the cloud, from editing and graphics creation to comms and talk-back.

Tiger Technology

In times when switching to remote workflows can be highly disruptive, you can choose to take your M&E workflows to the future with a data-centric hybrid cloud approach that simply lets creators create.

Tiger Technology’s solutions give you everything you need for creative content collaboration and media management across post-production, broadcast, and online media workflows. Share high-res media in real time and maximize playback performance for Mac, Linux and Windows clients. Streamline workflows with virtual project workspace management. Gain back resources lost to storage management and aging hardware. Optimize your storage costs with native file system tiering to cloud, disk, and tape. Enable multi-site and remote collaboration, protect your data at any point in time, and benefit from cloud AI processing capabilities.

Our media solutions ecosystem and excellent support services cover all bases while allowing your editing team to focus on their craft.

KATHREIN Broadcast GmbH

Kathrein offers top-quality components for broadcast systems. What is so special about Kathrein’s portfolio? Not only do we offer our customers technically perfected components but also a full service: from planning to installation and implementation of your project. Our broadcast systems fulfil the highest customer requirements. No matter whether the antennas or antenna systems are exposed to icy cold or large temperature variations, our broadcast technology can withstand even the most extreme weather conditions. We are also the leading supplier of special-frequency antennas.

Florical Systems

Florical Systems is a world leader in broadcast automation solutions. For over 35 years, Florical has catered to the largest public and private broadcasters in the industry including single stations, networks, and hub operations. Florical provides solutions that optimize workflows and protect revenues. The company’s product portfolio includes AirBoss, Acuitas, SMART Central and more.

Subtitle Re-timing: Automated Workflows Using an AI-Led Video Comparator

Subtitles make videos accessible to viewers across languages, geographies and cultures. This is usually done by retaining the original soundtrack of the video and overlaying a transcript of the audio on the video in textual form. Subtitles have emerged as an important monetization opportunity for media publishers because they open new markets for existing content. Moreover, at a time when COVID-19 has virtually stalled content production while the internet audience continues to grow, media publishers want to maximize the reach of their content by providing subtitled versions on different digital channels.

Creating subtitles requires not only proficiency in the respective languages but also familiarity with the various subtitling tools available in the market. Such tools ensure subtitles are synchronised with the dialogue and maintain a proper reading speed. During this process, the subtitler marks the exact time codes at which each subtitle should appear on the video timeline and at which it should leave the screen once the dialogue is delivered.
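The reading-speed check described above is mechanical and easy to sketch. The snippet below is a minimal illustration, not taken from any particular subtitling tool: it parses SRT-style `HH:MM:SS,mmm` time codes and computes characters per second for one cue (the function names and the characters-per-second metric are our assumptions).

```python
def parse_tc(tc):
    """Convert an SRT-style time code 'HH:MM:SS,mmm' to seconds."""
    hms, ms = tc.split(",")
    h, m, s = hms.split(":")
    return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000.0

def reading_speed(start, end, text):
    """Characters per second for one subtitle cue."""
    duration = parse_tc(end) - parse_tc(start)
    return len(text) / duration if duration > 0 else float("inf")

# A 2-second cue carrying 25 characters reads at 12.5 chars/second.
cps = reading_speed("00:00:01,000", "00:00:03,000", "Hello there, how are you?")
```

A tool would compare `cps` against a house-style threshold (guidelines in the low-to-mid teens of characters per second are common) and flag cues that read too fast.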

When content is distributed across geographies and distribution channels, it often needs to undergo certain edits: to fit within specified time slots, to apply cuts and zooms that remove compliance or content-moderation issues, or to add extra frames with mandatory disclaimers, among others.

Subtitles can go out of sync with the actual dialogue when multiple versions of the content exist. They need to be re-timed every time existing video content is syndicated to different broadcast channels and in different video formats. Usually, in syndication workflows, video editors create the edited versions from the original source master and record the details of the edits in EDL (Edit Decision List) files. These details include the cuts and/or new inserts along with their exact locations in the edited video timeline. A subtitler then uses the EDL file to re-time the subtitles on the edited versions by extracting the time codes of the cuts and new inserts.
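The core of EDL-driven re-timing is a mapping from source time codes to edited time codes. The sketch below is a simplified illustration under the assumption that each cut is a `(start, end)` span of seconds removed from the source timeline (real EDLs carry frame-accurate source/record time-code pairs per event; `retime` is a hypothetical helper, not a standard API):

```python
def retime(t, cuts):
    """Map a source-timeline time (seconds) to the edited timeline.

    `cuts` is a list of (start, end) spans removed from the source.
    Returns None when `t` falls inside a removed span, i.e. the
    corresponding subtitle cue should be dropped.
    """
    offset = 0.0
    for start, end in sorted(cuts):
        if t >= end:
            offset += end - start     # this whole cut precedes t
        elif t >= start:
            return None               # t itself was cut out
    return t - offset

cuts = [(10.0, 20.0), (50.0, 55.0)]
retime(5.0, cuts)    # before any cut: unchanged
retime(30.0, cuts)   # one 10-second cut precedes it: shifts earlier
retime(12.0, cuts)   # inside a cut: cue is dropped
```

Inserted segments would be handled symmetrically, adding their duration to the offset for any cue that follows them.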

In many situations an EDL file is not available, making it impossible for the subtitler to know the edit time codes. The subtitler (or a video editor) then has to spend extra time watching the edited video, marking the exact time codes, and using them later to re-time the subtitles. With high content volumes and the need for quick turnaround time (TAT), this can hamper the scaling of the video syndication process.

The need of the hour is a subtitle re-timing automation suite. Such a suite should leverage a bespoke AI engine to deliver accurate data and actionable tools that bring substantial operational efficiency. For example, a video comparison tool built on computer vision and deep learning techniques is one construct that can solve this problem.

When no time code information about the cuts and new inserts in the edited video is available, such a video comparison tool can extract it by comparing the source and edited versions and identifying matched and unmatched segments. This yields accurate time codes for the cuts and inserts in the edited video, which can then be exported as an EDL file into the subtitling workflow to reconstruct the edit with exact time-code markings. From there, it is straightforward for subtitlers to re-time the subtitles on the edited video.
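To make the matched/unmatched idea concrete: once each frame is reduced to a compact signature (production systems typically use perceptual hashes of decoded frames; here the signatures are plain strings for illustration), aligning the two versions is a sequence-alignment problem. This sketch uses Python's standard-library `difflib.SequenceMatcher` as a stand-in for the AI comparator; the `compare` helper and the toy frame signatures are our assumptions.

```python
from difflib import SequenceMatcher

def compare(source, edited):
    """Align two per-frame signature sequences.

    Returns (matched, cut, inserted):
      matched  - (src_start, src_end, edit_start, edit_end) ranges
      cut      - frame ranges present only in the source
      inserted - frame ranges present only in the edit
    """
    sm = SequenceMatcher(a=source, b=edited, autojunk=False)
    matched, cut, inserted = [], [], []
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            matched.append((i1, i2, j1, j2))
        else:
            if i1 != i2:
                cut.append((i1, i2))        # removed from source
            if j1 != j2:
                inserted.append((j1, j2))   # new in the edit
    return matched, cut, inserted

src = ["f%d" % n for n in range(10)]
edt = src[:3] + ["disclaimer"] + src[6:]    # frames 3-5 cut, 1 frame inserted
matched, cut, inserted = compare(src, edt)
```

The `cut` and `inserted` ranges, converted from frame indices to time codes at the video's frame rate, are exactly the event data an EDL export needs.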

After this automation step, what remains to be done manually is a quick round of QC to make sure the subtitles are synced properly in the edited version of the video.

Apart from linearly comparing two videos to find matched, unmatched and moved segments, the AI-enabled video comparator can also identify image differences between two videos, such as resolution, frame rate, edits, crops, zooms, color grading, on-screen text and VFX, flagging each as a match or mismatch. These capabilities enable various other use cases in the media post-production and distribution process, such as optimizing QC workflows by comparing different masters and picking the right ones for editing and distribution. They can also help optimize Digital Intermediate (DI) workflows by automating the identification of image differences between the DI master and syndicated Digital Cinema Package (DCP) versions, highlighting any new changes that need to be made to the DI master.

By harnessing advanced AI technology in video comparison, the possibilities of making AI work for you are truly limitless.

We all need near 100% in everything. Is that possible with AI?

Media & Entertainment (M&E) organizations have for some time been experimenting with AI to solve business challenges, with limited success. Some use cases demand near-100%-accurate data, which off-the-shelf AI solutions have failed to deliver. Add data that is not actionable, and the problems of effort, time, and cost overruns are compounded.

Let’s take the case of Basic Metadata. Until now, AI solutions have not been able to solve this comprehensively.

The industry requires AI models that can effectively address unique M&E use cases. There is a need to detect and identify physical segments and content segments in short-form and long-form content. The current practice involves operators visually scanning the content and manually marking segments such as Blacks, Color Bars and Slates. In long-form content, Re-caps, Pre-caps, Montages, Credits, Essence, and many other custom segments specific to the content house or broadcaster have to be marked. These are extremely time-consuming efforts, and when the output is not near 100% accurate, they lead to inefficiencies in the workflows.

Every M&E organization understands the need to automate this laborious task at high speed, especially when dealing with short-form spots or when linear content has to be published to streaming platforms. However, the technology solutions available in the market today have left much to be desired. Those that identify Blacks using image or signal processing cannot tolerate the variation in Blacks and Color Bars across sources. Detecting more sophisticated segments such as Slates, Re-caps, Pre-caps, and Montages requires deeper cognition. These segments also need to be identified accurately every time, and frame-accurately, so the workflow can be fully automated.

A solution that automates such a workflow and addresses this unique business need will require sophisticated AI technologies powered by deep learning. The variation in segments and content warrants human-like recognition to identify segments accurately while remaining sensitive to the wide variety of content. The solution should also accommodate customization for specific customer needs while fitting seamlessly into an existing workflow. This allows recognition to be done by AI, followed by a quick QC. In some cases, manual QC can also help the machine keep learning newer patterns and varieties of segments.

The need of the hour is an AI-enabled segmentation engine that can be trained to recognize segments accurately and is customizable for a customer’s workflow and content types. An ideal AI solution for metadata should help identify physical segments with 100% accuracy and 100% frame accuracy:

  • Blacks, Color Bars, Slates, Pre-caps, Re-caps, Montages
  • Essence
  • Text segments
  • Specific captioned segments
  • Textless segments
  • Custom segments based on customer need

Such a solution will ensure complete automation of the workflow to extract content segments, with significant reductions in cost and manual effort. To execute downstream activities, it ought to work seamlessly within an existing workflow, allowing automatic identification of segments followed by a quick QC.

AI platforms for M&E enterprises should ensure better searchability/discoverability and superior metadata. If close to 100% accurate data can be achieved, then AI has done its job. We humans love ‘more’. We always want more. The day is not far off when AI will not only be accurate but will also action some of these tasks for us. That would be a delight, wouldn’t it!