Subtitles make videos accessible to viewers who speak different languages, extending their reach across geographies and cultures. This is usually done by retaining the original soundtrack of the video and overlaying the audio transcript on the video in textual form. Subtitles have emerged as an important monetization opportunity for media publishers because they open up new markets for content. Moreover, in the current COVID-19 times, when content production has virtually stalled and the internet audience continues to grow, media publishers want to maximize the reach of their content by providing subtitled versions on different digital channels.
Creating subtitles requires not only proficiency in the respective languages but also familiarity with the various subtitling tools available in the market. Such tools ensure that subtitles are synchronized with the dialogue and maintain a proper reading speed. During this process, the subtitler marks the exact time codes at which each subtitle should appear on the video timeline and at which it should leave the screen once the dialogue is delivered.
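For illustration, a cue in the widely used SubRip (.srt) subtitle format records exactly these on and off time codes; the cue number, times, and text below are hypothetical:

    42
    00:03:15,250 --> 00:03:18,100
    I'll meet you at the station tomorrow.

Here, cue number 42 appears at 3 minutes 15.25 seconds into the video and goes off screen at 3 minutes 18.1 seconds.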
When content is distributed across different geographies and distribution channels, it very often needs to undergo certain edits: trimming to fit specified time slots, cuts and zooms to address compliance or content moderation issues, and extra frames carrying mandatory disclaimers, among others.
Subtitles can go out of sync with the actual dialogue when multiple versions of the content exist. They need to be re-time-coded every time existing video content is syndicated on different broadcast channels and in different video formats. Usually, in syndication workflows, video editors create the edited versions from the original source master and record the details of the edits in EDL (Edit Decision List) files. These details include the cuts and/or new inserts along with their exact locations on the edited video timeline. A subtitler then uses the EDL file to re-time the subtitles on the edited versions by extracting the time codes of the cuts and new inserts.
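As a minimal sketch of what this re-timing step does, the Python snippet below models cuts as removed source intervals (in seconds) rather than parsing a full CMX 3600 EDL; the function names and the simplified edit model are assumptions for illustration only:

    # Re-time a subtitle cue after cuts have been applied to the source video.
    # A cut removes the source interval [start, end); all times are in seconds.

    def retime(cue_start, cue_end, cuts):
        """Map a cue from the source timeline onto the edited timeline.
        Returns None if the cue falls inside a removed segment."""
        def map_time(t):
            removed = 0.0
            for start, end in cuts:
                if t >= end:
                    removed += end - start   # this cut lies entirely before t
                elif t >= start:
                    return None              # t falls inside this cut
            return t - removed

        new_start, new_end = map_time(cue_start), map_time(cue_end)
        if new_start is None or new_end is None:
            return None
        return new_start, new_end

    # Two segments removed from the source: 10s at 60s and 5s at 300s.
    cuts = [(60.0, 70.0), (300.0, 305.0)]
    print(retime(120.0, 123.5, cuts))        # (110.0, 113.5): shifted left by 10s
    print(retime(62.0, 65.0, cuts))          # None: this cue was cut out entirely

A production tool would also clip cues that partially overlap a cut and offset cues that follow a new insert, but the timeline arithmetic is the same.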
In many situations, however, an EDL file is not available, and the subtitler has no way of knowing the edit time codes in advance. The subtitler (or a video editor) then has to spend extra time watching the edited video, marking the exact time codes, and using them later to re-time the subtitles. With high content volumes and the need for quick turnaround time (TAT), this manual step can hamper the scaling of the video syndication process.
The need of the hour is a subtitle re-timing automation suite. Such a suite should leverage a purpose-built AI engine to deliver accurate data and actionable tools that bring significant operational efficiency. For example, a video comparison tool built on computer vision and deep learning techniques is an innovative way to solve this problem.
When no time code information about the cuts and new inserts in the edited video is available, such a video comparison tool can extract it by comparing the source and edited versions of the video and identifying matched and unmatched segments. This yields accurate time codes for the cuts and inserts in the edited video. The information can then be exported as an EDL file into the subtitling workflow, reconstructing the edited version with exact time code markings of cuts and inserts. Subtitlers can then use this information directly to re-time the subtitles on the edited video.
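As an illustrative sketch of the underlying idea (not the specific engine described above), matched and unmatched segments can be found by fingerprinting sampled frames, here with a simple average hash computed via OpenCV, and comparing fingerprints between the two versions; the sampling step and distance threshold are assumed values:

    import cv2
    import numpy as np

    def frame_hashes(path, step=5):
        """Compute a 64-bit average hash for every `step`-th frame."""
        cap = cv2.VideoCapture(path)
        hashes, idx = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % step == 0:
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                small = cv2.resize(gray, (8, 8))
                hashes.append((small > small.mean()).flatten())
            idx += 1
        cap.release()
        return hashes

    def classify_frames(src_hashes, edit_hashes, max_dist=10):
        """Label each sampled edited frame as matched against the source
        (True) or as a likely new insert (False), using nearest-neighbour
        Hamming distance over the fingerprints."""
        src = np.array(src_hashes)
        return [int(np.count_nonzero(src != h, axis=1).min()) <= max_dist
                for h in edit_hashes]

Consecutive runs of unmatched frames in the edited version correspond to new inserts, while source frames that never find a match correspond to cuts; converting those runs into time code ranges (frame index multiplied by step, divided by frame rate) is what gets written out as EDL entries.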
After this automation step, what remains to be done manually is a quick round of QC to make sure the subtitles are synced properly in the edited version of the video.
Apart from linearly comparing two videos to find matched, unmatched, and moved segments, an AI-enabled video comparator can also flag image differences between two videos, such as resolution, frame rate, edits, crops, zooms, color grading, on-screen text, and VFX, as match or mismatch. These capabilities enable various other use cases in the media post-production and distribution process, such as optimizing QC workflows that compare different masters and pick the right one for editing and distribution. It can also help optimize Digital Intermediate (DI) workflows by automatically identifying image differences between the DI master and the syndicated Digital Cinema Package (DCP) version and flagging any new changes that need to be made in the DI master.
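The simplest of those checks, resolution and frame rate, can even be read straight from the container; the sketch below uses OpenCV's standard capture properties, with hypothetical file names, while deeper differences such as crops, color grading, or VFX require frame-level analysis like the hashing sketch above:

    import cv2

    def video_specs(path):
        """Read basic technical metadata from a video file."""
        cap = cv2.VideoCapture(path)
        specs = {
            "width":  int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            "height": int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)),
            "fps":    round(cap.get(cv2.CAP_PROP_FPS), 3),
            "frames": int(cap.get(cv2.CAP_PROP_FRAME_COUNT)),
        }
        cap.release()
        return specs

    def compare_specs(a, b):
        """Report each property as match or mismatch, QC-style."""
        return {k: "match" if a[k] == b[k] else f"mismatch ({a[k]} vs {b[k]})"
                for k in a}

    # e.g. compare_specs(video_specs("di_master.mov"), video_specs("dcp_proxy.mov"))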
By harnessing advanced AI technology in video comparison, the possibilities of making AI work for you are truly limitless.