
Marquis – 2nd generation digital migration: if it were easy, everyone would do it!

Paul Glasgow, Marquis

Wed 04 Oct 2023

Many years ago, digitization offered a panacea: a mechanism to rid the world of analogue and proprietary digital video tape formats and make content more easily accessible and exploitable. Using supposedly non-proprietary encoding schemes, the content became independent of the physical media, so future migrations would be easy. Robotic data libraries and control software automated many processes, reducing the need for staff. Carefully annotated and indexed content in new DAM systems would make assets inherently exploitable, watermarking would offer protection, and early speech-to-text processing would deliver the richest possible set of metadata.

But was this the expected panacea? Not entirely. Digitization delivered many benefits, but it didn’t all work out as anticipated, and it introduced several unintended consequences and risks for subsequent digital migrations.

Damn vendors…

DAM vendors have become the biggest point of risk – the broadcast media market is not that big, and everyone wants something different. The result has been DAM systems designed for only a few use cases – perhaps for a single big client with a bespoke workflow. A DAM system originally designed for a digital archive library may not lend itself well to transmission or production use, and vice versa.

Think of all the DAM vendors who have ceased trading, or been acquired by a trade buyer or a client and then disappeared. Migrating from a legacy DAM system is rarely trivial; it can throw up issues that look simple on the surface but turn out to be complex problems. As an example, let’s say you’ve migrated to a new DAM system. You search for some video items and it produces different results from the original system. Have assets become orphaned, never to be seen again?
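
One practical safeguard is a simple reconciliation pass after migration: export the asset identifiers from both systems and diff them before the old one is decommissioned. Below is a minimal sketch in Python, assuming both systems can export their catalogues to CSV with an asset ID column (the file names and column name are hypothetical, not tied to any particular DAM product):

```python
import csv

def load_ids(path, id_column="asset_id"):
    """Read a catalogue export (CSV) and return the set of asset IDs it contains."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[id_column] for row in csv.DictReader(f)}

# Hypothetical catalogue exports from the legacy and new DAM systems.
legacy_ids = load_ids("legacy_dam_export.csv")
new_ids = load_ids("new_dam_export.csv")

orphaned = legacy_ids - new_ids      # in the old DAM, missing from the new one
unexpected = new_ids - legacy_ids    # in the new DAM only (e.g. duplicates)

print(f"Assets in legacy DAM: {len(legacy_ids)}")
print(f"Assets in new DAM:    {len(new_ids)}")
print(f"Potentially orphaned: {len(orphaned)}")
print(f"Unexpected extras:    {len(unexpected)}")
```

A mismatch here doesn’t prove assets are lost – search behaviour can also differ because of indexing or field-mapping changes – but it gives a concrete list to investigate rather than a vague suspicion.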

Perhaps the simple answer is: don’t buy a DAM system – build it! Ramp the development team up, capture the requirements, develop a solution, deploy it, then ramp the team back down again. This works well for several years, but then new codecs come along, along with new OS versions and security patches, and there’s soon no way to keep up. The original developers have long gone and the few who remain plan to retire – while holding the keys to the castle! And because it’s a self-build, there is no documented API: migrating away from this proprietary system was never considered in the original design.

Rise of the PAMs

A PAM is a Production Asset Management system that manages live ‘work-in-progress’ production data (unlike a DAM, which manages finished content). However, a PAM was never intended to become a permanent repository. It doesn’t translate or migrate well to a DAM environment: its data hierarchy is production-centric, fields may have been added on a per-production basis, and there is no carefully managed, structured taxonomy. The result is a PAM that may be many years old and holds business-critical production information, yet the system itself may have become obsolete – and there’s no way of migrating away from it without losing valuable production information, since that information can’t be represented appropriately in a DAM system.
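
One way to reduce that loss is to map the PAM fields that do have DAM equivalents and carry everything else across as untyped key/value metadata, so the production context at least survives the move. A minimal sketch, with an entirely hypothetical field mapping:

```python
# Hypothetical mapping from PAM field names to a target DAM schema.
FIELD_MAP = {
    "prod_title": "title",
    "shoot_date": "created_date",
    "series_ref": "collection",
}

def pam_to_dam(pam_record: dict) -> dict:
    """Translate one PAM record into a DAM record, preserving unmapped fields."""
    dam_record = {"custom": {}}
    for field, value in pam_record.items():
        if field in FIELD_MAP:
            dam_record[FIELD_MAP[field]] = value
        else:
            # Per-production fields with no DAM equivalent are kept verbatim
            # rather than silently dropped.
            dam_record["custom"][field] = value
    return dam_record

example = {
    "prod_title": "Episode 12 rough cut",
    "shoot_date": "2019-03-02",
    "camera_card_id": "A017",   # production-specific field, no DAM equivalent
}
print(pam_to_dam(example))
```

The unmapped fields won’t be searchable in the same way, but the information is preserved and can be re-modelled later rather than being lost at migration time.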

Who needs standards anyway?

Standardizing codecs and wrappers has always been an industry ambition, but the truth is that everything is a moving target and always will be. Early digital codecs were low quality, inefficient and often required proprietary chips to encode in real time. Some codecs were optimized for acquisition, others for post-production, transmission or streaming, and many were proprietary to individual vendors – often called ‘de facto’ standards.

There’s also a problem with existing standards: different vendors can have different interpretations of a ‘standard’, or can be selective about which parts of it they implement. An archive could therefore contain media that notionally conforms to a standard yet is unsupported in another system that justifiably claims to support the same standard – so the two systems cannot interoperate.
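
Before assuming two systems will interoperate, it is worth probing what the archive actually contains rather than what the standard says it should contain. A minimal sketch using ffprobe (assuming it is installed and on the PATH); the whitelist of wrappers and codecs the target system accepts is hypothetical:

```python
import json
import subprocess

# Hypothetical whitelist of what the target system has been verified to accept.
ACCEPTED_FORMATS = {"mxf"}
ACCEPTED_VIDEO_CODECS = {"mpeg2video", "dnxhd", "prores"}

def probe(path: str) -> dict:
    """Return container and stream information for a media file via ffprobe."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

def flag_for_review(path: str) -> list[str]:
    """List reasons a file may not be accepted by the target system as-is."""
    info = probe(path)
    issues = []
    container = info["format"]["format_name"]
    if not any(fmt in container for fmt in ACCEPTED_FORMATS):
        issues.append(f"unsupported wrapper: {container}")
    for stream in info["streams"]:
        if stream["codec_type"] == "video" and stream["codec_name"] not in ACCEPTED_VIDEO_CODECS:
            issues.append(f"unsupported video codec: {stream['codec_name']}")
    return issues

print(flag_for_review("sample_archive_item.mxf"))
```

Running a check like this across a sample of the archive gives an early view of how much content will need re-wrapping or transcoding rather than a straight copy.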

Of course, organizations such as the EBU and SMPTE have made valiant efforts to create standards. But it has become increasingly difficult as change continues to accelerate, and manufacturers have differing agendas, which makes ‘true’ standardization almost impossible. With some exceptions, wrapper and metadata standardization is still ‘out in the wild’, since people have differing requirements.

Don’t forget audio  

Audio also gets in the way of the perfect standard, with its own set of issues: number of channels, surround sound, Dolby Atmos, and so on. Audio will of course carry its own metadata, but AAF (Advanced Authoring Format) files are often also required to integrate audio post-production tools. And what do we do with those tools – will they still work in the future?

Decline of LTO

Migrating the 1st generation of behemoth robotic digital libraries – which are often LTO data tape based – has become a real issue. LTO systems were always scaled for ‘normal’ operation, with enough drives, slots and robotics to pick and place data tapes fast enough. Often the software wasn’t conventional HSM (hierarchical storage management) software either. Production video files tend to be large and can span more than one data tape, and partial file extraction is needed so that, say, a one-minute section can be taken out of a one-hour program – accelerating transfers by avoiding moving the whole file.
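
Tape-level partial restore is an HSM feature, but the same idea can be illustrated at the file level: pulling a one-minute section out of a long programme without touching the rest of it. A minimal sketch that wraps ffmpeg (assumed to be installed) to stream-copy a section without re-encoding; the file names and timecodes are hypothetical:

```python
import subprocess

def extract_section(source: str, start: str, duration: str, destination: str) -> None:
    """Copy a section of a programme without re-encoding (stream copy)."""
    subprocess.run(
        ["ffmpeg",
         "-ss", start,       # seek to the in-point
         "-i", source,
         "-t", duration,     # length of the section to keep
         "-map", "0",        # keep all video and audio streams
         "-c", "copy",       # no transcode, so the copy is fast
         destination],
        check=True,
    )

# Take one minute starting ten minutes in, instead of moving the whole hour.
extract_section("one_hour_programme.mxf", "00:10:00", "00:01:00", "one_minute_clip.mxf")
```

With stream copy the cut lands on the nearest seekable point rather than an exact frame, but for pulling a section during a migration that is usually acceptable.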

So, let’s say we have an old robotic tape library containing 4PB of data that needs to be migrated. The first thing is, it’s most likely still in use, so perhaps only half of the drives are available for the migration. Those drives, which are probably coming to the end of their lives, are going to be hammered in continuous use, so drive failure rates could be higher than anticipated.
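
It is worth doing the arithmetic up front. A rough estimate with illustrative numbers – the drive count, sustained throughput and efficiency factor below are assumptions, not measurements:

```python
# Back-of-envelope migration time for a 4 PB library.
library_bytes = 4e15        # 4 PB of content to move
drives_available = 4        # half the library's drives; the rest stay in service
throughput_bps = 150e6      # ~150 MB/s sustained per drive (assumed)
efficiency = 0.6            # mounts, seeks, retries, contention with daily use

aggregate_bps = drives_available * throughput_bps * efficiency
seconds = library_bytes / aggregate_bps
print(f"Estimated migration time: {seconds / 86400:.0f} days")
# With these assumptions: roughly 129 days of continuous running,
# before allowing for failed tapes, re-reads or drive replacements.
```

Even with optimistic assumptions the window runs to months, which is why drive availability and failure rates matter so much.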

Then there is the proprietary robotic library control system, which made complete sense when new but is now a huge bottleneck, since its API may be too slow to poll for data. The original software vendors have all been acquired, and knowledge and technical support have become difficult to find.

There’s often complacency when considering migrating digital archives. Let’s say a legacy production system is going to be replaced with a modern one; it’s highly unlikely that the new system will be backward-compatible with all of the legacy content. So it’s important to determine the potential migration yield and how much of the process can be automated. If the library contains petabytes of data and millions of assets, the migration yield could be fundamental to the success of the project.
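
Migration yield can be estimated before committing, by running a representative trial batch through the intended process and scaling the result up. A minimal sketch of the arithmetic (the sample size and results are hypothetical):

```python
# Results from a hypothetical trial batch run through the intended migration process.
total_assets = 2_000_000      # assets in the library
sample_size = 10_000          # representative trial batch
migrated_automatically = 9_240
failed_or_manual = sample_size - migrated_automatically

yield_rate = migrated_automatically / sample_size
print(f"Estimated automatic migration yield: {yield_rate:.1%}")
print(f"Assets likely to need manual attention: {total_assets * (1 - yield_rate):,.0f}")
```

At that scale, even a few per cent of assets failing automatic migration translates into a six-figure manual workload – which is exactly why yield needs to be understood before the project is costed.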

So how can Marquis help?

First, we don’t make MAM, PAM or DAM systems, or sell storage. What we have is the migration technology and years of experience to enable and de-risk automated migrations, working with vendors, partners, service providers, SIs and clients to make them successful. Our metadata translation capabilities have been used by the biggest media enterprises, and we’re the only company that has successfully archived a PAM system for a major studio so that it can still be queried.

We have a vendor-specific codec interoperability library and API library going back 20 years, which no other vendor has. The original vendors may be long gone, but their content may still be sitting in the archive on a legacy system that is now end of life yet still in use. These capabilities are fundamental to automating a migration.

The best plan is to bring us in early, since we can analyze content and metadata and work out how best to migrate it – whether to re-wrap, transcode, scale or de-interlace, for example. We can test sample files remotely or in our labs and pre-determine policies for migrating everything automatically. We know how to integrate with legacy archive APIs and, if needed, access the database directly if the API is too slow. We can work out how to make legacy content interoperate with new vendors’ systems, or even come up with a mezzanine framework for interoperability.
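
To give a flavour of what a pre-determined policy can look like once sample files have been analyzed, here is a sketch of a rules table mapping probed properties to processing actions. The properties, thresholds and actions are purely illustrative – not Marquis’s actual implementation:

```python
def migration_policy(asset: dict) -> list[str]:
    """Decide, per asset, which processing steps the migration should apply."""
    actions = []
    if asset.get("wrapper") != "mxf_op1a":
        actions.append("rewrap to MXF OP1a")          # container change only
    if asset.get("codec") not in {"dnxhd", "prores"}:
        actions.append("transcode to mezzanine codec")
    if asset.get("interlaced"):
        actions.append("de-interlace")
    if asset.get("height", 0) < 1080:
        actions.append("scale to 1080 for the target platform")
    return actions or ["copy as-is"]

# A hypothetical legacy asset as it might come back from analysis.
print(migration_policy({
    "wrapper": "quicktime",
    "codec": "mpeg2video",
    "interlaced": True,
    "height": 576,
}))
```

In practice the real policies come out of analyzing the sample files, but capturing them as explicit rules is what makes the migration automatable and auditable.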

Our technology runs on-prem and in the cloud, so migrations – whatever they involve – are straightforward. We also license our technology just for the migration period, so there are no sunk costs.

Finally, we can also scope and mitigate risk at the pre-tendering stage. Since we know what to look for, we can ensure risks are identified and fixes are pre-determined. The outcome is a more successful migration project – one much more likely to finish on time and on budget.

 

 
