Manage & Support Content Chain Trends

This report focuses on identifying the most important investment drivers in Manage and Support derived from a mixture of sources, including survey data on technology priorities, company announcements, and financial data.


Cloud Adoption Trends

Highlights:

  • Cloud adoption in broadcast and media continues to grow, with most media businesses having adopted cloud technology; this was largely driven by COVID. The move to remote production and the increasing volumes of data resulting from M&E convergence have both contributed to accelerated cloud adoption.
  • Despite the increasing complexity caused by an expanding virtualized infrastructure, a multi-cloud approach is becoming an increasingly popular choice among media companies that want to avoid being locked into a single public cloud service provider.
  • Media businesses’ move towards decentralized remote production is driving investment in edge computing and cloud storage, bringing new efficiencies to TV and film production.
  • From the content supply chain perspective, cloud technology is being implemented predominantly for storing, distributing, producing, and managing content.


The data domino effect

By Duncan Beattie, GB Labs


When designing a storage system for a production or post facility, it is easy to concentrate on capacity and throughput, and think that is all you need. However, a very real issue is data loss and how to guard against it. That is a particular problem in our industry of media production. Once you start considering the issues around data loss, the potential problems start to escalate: a real domino effect.

Media production companies face challenges that are perhaps an order of magnitude greater than those in other industries. First, we tend to deal with large files, and therefore very large storage requirements. If you are shooting with today’s digital cinematography cameras, you are generating many terabytes of raw content a day. If you are a busy post house with a number of edit suites, grading rooms and audio dubbing theatres, then you will have multiple simultaneous projects on the go, creating a significant number of files, some of them very large.

In parallel, and compounding the challenge, there are significant time pressures. Often you only get paid for a production once it is delivered. There are few things more disastrous in the production industry than failing to deliver a project on time: at best there is the risk of significant reputational damage, and there may well be a financial penalty too.

So we start out with data that already represents a significant investment: the costs of a shoot include cameras, crew, actors, locations and more. That data has to be secured to the highest standards: losing data and having to recreate an event is almost unthinkable. The cost of restaging a shoot will spiral: actors and key crew may be unavailable, and it may even be impossible to do it again (think live sport).

The next domino to drop is that while you are working out how to recover the lost data, you are not making any progress on current projects. Editors and others are standing idle and the delivery deadline looms.

In truth, few would contemplate any production set-up – even one based on a single edit workstation and its internal disk drives – that did not back up the data on ingest. But backing up takes a finite time, especially when dealing with large files, which means that there are inherent delays in the system.

Larger facilities will have central storage systems, serving multiple creative rooms. Here is another domino: a single storage system supporting multiple projects means that any data failure has the potential to delay many clients. And that enterprise-level data failure means there will be many editors, colourists and others standing idle, costing money and earning no revenue. Completed work will have to be remade, taking time and costing money.

RAID

Using a RAID-protected server gives some insurance against drive failures, although the performance of the server will be impacted while the RAID is rebuilt once the dead drive is replaced. But the whole server also needs to be regarded as a potential single point of failure, with consequent data loss or at least loss of productivity.
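
To put a rough number on that exposure, here is a minimal back-of-the-envelope sketch in Python; the drive capacity and rebuild rate are illustrative assumptions, not figures for any particular product.

```python
# Back-of-the-envelope estimate of how long a RAID rebuild leaves the array
# degraded. Both figures are illustrative assumptions, not measured values.

drive_capacity_tb = 18        # assumed capacity of the replaced drive
rebuild_rate_mb_s = 100       # assumed sustained rebuild rate while the array
                              # continues to serve production traffic

rebuild_seconds = drive_capacity_tb * 1_000_000 / rebuild_rate_mb_s
print(f"Rebuild window: {rebuild_seconds / 3600:.0f} hours "
      f"({rebuild_seconds / 86400:.1f} days) of reduced performance")
```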

Again, you do not need me to tell you that you should be backing up your server. But now we are working in tens or hundreds of terabytes, so backing up is no trivial matter. And even if a good incremental backup system is implemented so there is relatively little overhead as backups are created, there will certainly be a long delay while the repaired server is rebuilt.

Some will choose to back up to LTO tape, which is simple and efficient, but very slow to restore from: it could take a week to recover all the data. Tape archives should be stored off-site, so there is also the physical time needed to locate and retrieve the tapes before restoration can start.

Others will prefer to back up to the cloud, but downloading many terabytes of data taxes even the fastest internet connections, not to mention the significant egress charges levied by the cloud provider. In either case, you are again facing expensive downtime.

You also need to consider how long it takes to write the data to the backup storage in the first place. In a production or a post house dealing in many terabytes of new data a day, the time to ingest the original material and to write the backups begins to represent a very significant overhead.
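
To make the scale of these delays concrete, the following sketch puts rough numbers on the scenarios above; the tape throughput, network bandwidth, egress pricing and daily ingest volume are all assumptions chosen purely for illustration.

```python
# Rough timings for the recovery and backup scenarios discussed above.
# All throughput, pricing and volume figures are illustrative assumptions.

def hours(seconds):
    return seconds / 3600

archive_tb = 100                          # assumed size of the data to restore

# LTO restore: single drive at an assumed ~300 MB/s sustained throughput
lto_restore_s = archive_tb * 1_000_000 / 300

# Cloud restore: assumed 10 Gbit/s internet connection, fully saturated
cloud_restore_s = archive_tb * 1_000_000 / (10_000 / 8)
egress_cost_usd = archive_tb * 1_000 * 0.05          # assumed $0.05 per GB

# Daily backup write: assumed 10 TB/day of new material at 500 MB/s
daily_backup_s = 10 * 1_000_000 / 500

print(f"LTO restore:          {hours(lto_restore_s):5.0f} h")
print(f"Cloud restore:        {hours(cloud_restore_s):5.0f} h "
      f"(plus ~${egress_cost_usd:,.0f} in egress charges)")
print(f"Daily backup window:  {hours(daily_backup_s):5.1f} h")
```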

You may decide to run backups overnight when there is little other traffic on the server, but that risks losing a full day’s work on multiple projects. And when the facility gets busy there may be a temptation to stop backups to keep everyone working, which means you are back in the completely unsafe zone again, and running into a spiral of inaction from which you may never recover.

Nearline

An option which will reduce risk significantly is to have a second online storage pool that backs up the live work server. This is sometimes called a nearline server, as it is designed to support the online server as closely as possible.

The advantage of a nearline storage pool duplicating all the data on the primary server is that should any sort of failure occur, there is a complete copy of all media available on the same network with no need to download or restore from tape. You can also arrange another layer of security – to the cloud or to tape – from this second server, without impacting the performance of the primary source.

At GB Labs we specialise in high performance, high security storage systems, particularly for media applications. Our very strong recommendation would be that the minimum you should consider is a second storage pool within the production environment. Ideally you should be able to work directly from it if necessary. This means a structure very similar to the primary server, but the benefit is the secure knowledge that downtime and delays to production deliverables will be kept to a minimum.

Further, there should be the third level of security, either disk to disk to tape or disk to disk to cloud. With well-designed archive software, you will be able to incrementally add to the cloud or tape backup at any time over the 24-hour period, ensuring this third level of storage remains as close to synchronisation as possible.

A subtle point in the system design is that you might choose the second storage pool as your point of ingest, with automation pushing it to the live server. This gives production teams the security of knowing that if they can see it, there is already at least one backup.
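
As a minimal sketch of that ingest-first pattern (the directory paths and file pattern are hypothetical; a real facility would rely on the storage vendor’s own automation rather than a script like this):

```python
# Minimal sketch of an "ingest to the second pool first" workflow.
# Directory names and the camera file format are hypothetical placeholders.
import shutil
from pathlib import Path

INGEST_DIR   = Path("/mnt/ingest")      # where camera media is offloaded
NEARLINE_DIR = Path("/mnt/nearline")    # second storage pool (backup copy)
ONLINE_DIR   = Path("/mnt/online")      # primary pool the edit suites see

def ingest(clip: Path) -> None:
    """Copy a clip to the nearline pool first, then publish it to the
    online pool, so editors only ever see material that already has
    at least one backup copy."""
    backup_copy = NEARLINE_DIR / clip.name
    shutil.copy2(clip, backup_copy)                 # 1. secure the backup

    online_copy = ONLINE_DIR / clip.name
    shutil.copy2(backup_copy, online_copy)          # 2. publish for editing

if __name__ == "__main__":
    for clip in sorted(INGEST_DIR.glob("*.mxf")):   # assumed camera format
        ingest(clip)
```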

Copy protection

You can also consider immediate copy protection within the primary storage. This is often achieved through clustered storage systems.

A cluster is a collection of servers with a management layer to spread the data across the pool. Should one node within the cluster fail, the way the data is copy protected means that there is no immediate loss of content and work can continue. Although as an entity a cluster is resilient, an entire cluster can fail, resulting in the same data loss as if you had just a single server.

The alternative is a failover server, a completely identical server maintained with precisely the same data at the same time. Should the primary fail for any reason, all the clients can be switched to the secondary storage without any interruption to service. This has the benefit of zero downtime and zero data loss due to primary failure. The two servers can be in different locations, although for optimum performance they have to be on the same network.
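
A minimal sketch of the failover decision, assuming a simple TCP reachability check; the hostnames, port and the check itself are placeholders, and real systems implement this in the storage or network layer rather than on each client:

```python
# Minimal sketch of client-side failover between two identical servers.
# Hostnames, port and the health check are hypothetical placeholders.
import socket

PRIMARY   = ("storage-primary.example.net", 2049)    # assumed NFS endpoint
SECONDARY = ("storage-secondary.example.net", 2049)

def reachable(host_port, timeout=2.0) -> bool:
    """Return True if a TCP connection to the server succeeds."""
    try:
        with socket.create_connection(host_port, timeout=timeout):
            return True
    except OSError:
        return False

def active_server():
    """Prefer the primary; fall back to the secondary if it is down."""
    if reachable(PRIMARY):
        return PRIMARY
    return SECONDARY

print("Clients should mount:", active_server()[0])
```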

Adding it all together, we have an online or primary server delivering content across very high-performance networks to your team of creative artists. There is an exact copy of the primary server on a duplicate, with an automatic failover switch should there be a problem with the primary.

The secondary server also handles all ingest, ensuring that all data is backed up at least to another server before it is released for use. It is also responsible for off-site backups, to the cloud or to tape, scheduled to meet the practical data demands of the operation.

The storage management layer will also provide a dashboard of system health and escalating notifications should any problems arise.

GB Labs has developed the range of tools needed to provide the storage capacity, performance and resilience your business needs, helping you to deliver on time every time, safe from the domino effect of data loss.

How Resilient is Redundancy?

Christian Struck

Senior Product Manager Audio Production, Lawo

Axel Kern

Senior Director Cloud and Infrastructure Solutions, Lawo


Redundancy has always been a major topic for broadcast operations to ensure that the show goes on despite a defective power supply or other failure. While all eggs were in one basket—i.e. in one place and close to one another—this approach was certainly helpful. Calling such a setup resilient would nevertheless be a stretch.

Conventional solutions launched before open-standards-based IP came along may be redundant up to a point, but that doesn’t make the operation as a whole resilient. Most are able to exchange control, audio and/or video data over an on-prem network, but connectivity to a wide-area network (WAN) spanning several cities or continents is not on the cards for them.

Widen the Area

While a WAN-based infrastructure has enabled operators to leverage processing resources in off-premise data centers—of which there may be two, for redundancy, as in Eurosport’s and many other cases—and while the ability to access these resources from just about anywhere is an indisputable asset of an IP setup, a lot more is required to make an operation resilient.

Separating the mixing console from the processing unit and the I/O stageboxes was an important step. Building WAN communication into all of these devices through ST 2110 and RAVENNA/AES67 compliance has led to clearly defined expectations regarding resilience, as a recently conducted trial confirms. The aim was to show the prospective client that immersive audio mixing remains possible even if one of the two processing cores is down. The trial involved one A__UHD Core in a sporting arena in Hamburg, and a second near Frankfurt, at the production facility.

The team successfully demonstrated that if the preferred core in Hamburg, controlled from an mc² console near Frankfurt for the live broadcast mix, becomes unavailable, the core closer to the console immediately takes over. The physical location of the second core is irrelevant, by the way, as long as it is connected to the same network.

WAN-based redundancy is an important element of a solid resilience strategy, even though the unavailability of a processing core is only the seventh likeliest incident in a series of plausible failures from which operators can recover automatically, according to Lawo’s customer service. This degree of redundancy involves so-called “air-gapped” units, i.e. hardware in two separate locations, to ensure continuity if the “red” data center is flooded or subject to a fire: the redundant, “blue” data center automatically takes over.

Strictly speaking, the five likeliest glitches—control connection loss, routing failure, media connection failure, control system failure, and power supply failure—require no hardware redundancy when the audio infrastructure is built around an A__UHD Core. That said, having a spare unit online somewhere is always a good idea. It is also required as fail-over for incident number six, DSP/FPGA failure.

Explode to Reinforce

A second important aspect is to decentralize what used to be in one box. Even some IP-savvy solutions are still supplied as a single unit that handles both control and processing. For maximum resilience, one device should do processing, while a COTS server or dockerized container transmits the control commands it receives from a mixing console to the processing core, and a switch fabric does the routing. Separating control, processing and routing, and making all three redundant minimizes the risk of downtimes. Plus, except for at least one switch close to each required component, all devices or CPU services can be in different geographic locations.

It doesn’t stop there. A redundant IP network with red and blue lines is built around a switch fabric. Without going into too much detail, certain management protocols (PIM and IGMP) may cause issues that could seriously affect broadcast workflows or even make them impossible. The first is related to situations where the red and blue lines are routed to the same spine switch. An issue with that switch means that this part of the network not only ceases to be redundant but may also stop working altogether: it is a single point of failure. The second issue is related to how switches distribute multicast streams over the available number of ports when they are not bandwidth-aware. In a non-SDN network, this may lead to situations where one port is oversubscribed, i.e. asked to transmit more gigabits per second than it can muster, which causes errors at the receiving end.
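
The oversubscription problem is easy to quantify. In the sketch below, the port capacity and per-stream bandwidth are assumed figures, and the two placements simply contrast a bandwidth-unaware flow distribution with a bandwidth-aware one:

```python
# Sketch of why bandwidth-unaware multicast distribution can oversubscribe
# a switch port. Stream and port figures are illustrative assumptions only.

PORT_CAPACITY_GBPS = 100            # assumed capacity of each downlink port
UHD_STREAM_GBPS = 10.5              # assumed uncompressed UHD ST 2110-20 stream

streams = [UHD_STREAM_GBPS] * 16    # 16 streams to spread over two ports

# Bandwidth-unaware placement: the switch spreads flows across ports without
# looking at their bit rates, so one port can end up with far more traffic.
unaware = {0: streams[:11], 1: streams[11:]}        # 11 flows land on port 0

# Bandwidth-aware (SDN-style) placement balances by load instead.
aware = {0: streams[:8], 1: streams[8:]}

for label, layout in (("unaware", unaware), ("aware", aware)):
    for port, flows in layout.items():
        load = sum(flows)
        state = "OVERSUBSCRIBED" if load > PORT_CAPACITY_GBPS else "ok"
        print(f"{label:8s} port {port}: {load:6.1f} Gbit/s  {state}")
```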

These and other topics are being addressed by companies like Arista and Lawo via a Multi Control Service routine and the direct influence of the VSM studio manager software on traffic shaping. The goal is to avoid failures and oversubscription of network ports, and to allow operators of large installations to immediately confirm the status of their switching and routing operations.

Combining the above with the HOME management platform for IP infrastructures adds yet another building block. HOME not only assists operators with automatic discovery and registration, but also with controlling processing cores by hosting the MCX control software for mc² consoles either on networked COTS servers or directly in a virtualized environment—and with dynamically switching from one processing core to the other, one console surface to the next, or one MCX control instance to another if the need arises.

Stay in Control

Resilience necessarily includes control. VSM achieves seamless control redundancy with two pairs of COTS servers stationed in two different locations and automatic fail-over routines. Hardware control panels are not forgotten: if one stops working, connecting a spare or firing up a software panel and assigning it the same ID—which takes less than a minute—restores interactive control. (The control processes as such are not affected by control hardware failures, by the way.)

As installations migrate towards a private cloud/data center infrastructure, provisioning two (or in HOME’s case, three) geographically separated COTS servers with permanent status updates between the main and redundant units allows users to remain in control. If the underlying software architecture is cloud-ready, those who wish can ultimately move from hardware servers to service-based infrastructures in the cloud. Technologies like Kubernetes and AWS Load Balancer can then be leveraged to provide elastic compute capacity that instantly grows and shrinks in line with changing workflow requirements. A welcome side effect is that no new hardware servers need to be purchased to achieve this kind of instant, high-level resilience.
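
As a rough illustration of the elasticity involved, the sketch below applies the proportional scaling rule used by Kubernetes’ Horizontal Pod Autoscaler (desired replicas scale with the ratio of observed load to target load); the instance counts and CPU figures are hypothetical.

```python
# Sketch of the proportional scaling rule behind elastic compute capacity,
# in the spirit of Kubernetes' Horizontal Pod Autoscaler.
# Metric values below are illustrative placeholders.
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Scale the number of service instances in proportion to load."""
    return max(1, math.ceil(current_replicas * current_metric / target_metric))

# Example: 4 control-service instances, average CPU at 85% against a 60% target.
print(desired_replicas(4, 85.0, 60.0))   # -> 6 instances

# Load drops off after the show: scale back down.
print(desired_replicas(6, 20.0, 60.0))   # -> 2 instances
```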

After experiencing the benefits of resilient, elastic control, some operators may wonder whether a similar strategy is also possible for audio and video processing. The short answer is: “If you like.” Quite a few operators are wary of the “intangible cloud” and may be relieved to learn that the ability to architect private data centers in a redundant configuration already allows them to achieve a high degree of resilience.

One Leap Closer

A genuinely resilient broadcast or AV network is a self-healing architecture that always finds a way to get essences from A to B in a secure way. Users may not know—or care—where those locations are, but the tools they use to control them do. And they quickly find alternatives to keep the infrastructure humming.

The one remaining snag was how to provide operators with an all-but-failsafe infrastructure. A lot has been achieved to make broadcast and AV infrastructures resilient by design while keeping them intuitive to operate.

Content Protection – more things to consider

Nik Forman

Marketing Director, Friend MTS


Summary of report by Roger Thornton, IABM

Overall, the prioritization of business and consumer trends over pirate-driven ones was consistent across different types of media businesses. However, while pirate-driven trends were ranked as less important, most of them were still classified as high priority, with cross-border illegal access to content and security breaches leading to content leakage identified as more important as a result of the COVID pandemic.

With most businesses predicting their original content investment will increase significantly over the next few years, it seems counter-intuitive that content protection solutions are not a primary business priority given the increase in piracy activity in all its various forms. One of the reasons behind this is the complexity of the advanced content protection landscape, which constantly needs to counter pirate-driven innovation. This means high budgets for content protection and a struggle to keep up with fast-paced developments effectively. In summary, the effects of complexity on technology understanding and pricing are important factors determining media businesses’ deprioritization of content protection technology.

Response to data and analysis circulated in IABM report sponsored by Axinom

The recent IABM report on content security trends, produced in conjunction with our good friends at Axinom, made for some interesting reading. As Roger Thornton mentions in his summary article, perhaps the most surprising takeaway is the discrepancy between a stated intention to invest in content, and a far lower priority placed on investment in content security technology to safeguard against the theft of that content, especially given the financial, operational and potentially creative resources that will be required to produce or acquire it. As Roger summarises, this seems counterintuitive, but budgets are finite and it could be argued that prioritizing content over business processes is where dutiful media providers should concentrate the majority of their resources.

However, with 80% of respondents indicating the high importance of DRM, and 55% already implementing watermarking technology, it’s clear that using technology to prevent unlawful acquisition and redistribution of content is seen as a critical business driver.

The excellent analysis provided towards the end of the report informs a list of positive and negative drivers of investment in content protection solutions, and as a provider of such products and services, we find that these are the elements that drive the conversations we have with our customers around the world.

First to the positive drivers:

Content security is no different to other media technologies – the most attractive solutions are those that offer both technical and commercial flexibility. Additionally, the ability to provide digital flexibility without impacting quality of product is key, and with the evolution of technologies such as client-composited watermarking, which is deployed at point of delivery, this flexibility has come a long way and continues to evolve. Crucially, this flexible architecture not only benefits the user operationally, but also enables providers such as Friend MTS to more easily keep pace with ever-evolving piracy technology.

It's no surprise that the other positive drivers all reflect the near-lockstep relationship between content value and the requirement to protect it, with sports and scripted content at the top of this pyramid, and lower-cost, niche programming appearing as a negative driver of investment. The world of video is ever-expanding, however, and the proliferation of unique content across non-traditional environments (for example content creators, corporate and other enterprise users) is already driving an increased requirement for video content security services in these non-traditional sectors.

The negative drivers of investment in content security are, if anything, of greater interest to providers such as Friend MTS; these represent areas of concern to users, and are the issues around which we need to continue to engage with our audiences, to provide greater clarity, if required, and guidance on the most efficient and cost-effective content security solution for every customer’s specific content types and distribution models.

The primary issue to address sits at the top of the table and is stated in the report analysis as follows: “The effects of complexity on technology understanding and pricing are important factors determining media businesses' deprioritization of content protection technology.”

Again, it’s not surprising that a perceived lack of transparency around the technology, convoluted pricing structures and a potentially bewildering array of solutions to choose from are off-putting to buyers. Simple questions such as “Which is the best DRM solution for my organisation?” or “How much should I budget for a watermarking solution?” often don’t have easy answers, and although this is never the result of deliberate complexity, or an attempt to create an aura of mystique, it’s clear that these are areas that can be, should be and are being addressed by technology providers to help customers arrive at the solution that best fits their requirements as painlessly as possible. As an industry we’re improving, but obviously work remains to be done.

The good news is that, as pointed out by respondents, there continues to be a noticeable move away from previous levels of complexity around security solutions, driven primarily by new technology and deployment models. In the field of watermarking, for example, extremely lightweight, client-composited watermarking can now be deployed in as little as two weeks, with minimal impact on existing infrastructure. This type of watermarking is now amongst the most widely deployed in the world, bringing the obvious commensurate business benefits, and is a direct result of the drive to provide simpler solutions and less complex deployment.

Of course, there are different types of watermarking, and some may be more suitable for certain content types and distribution models; likewise with DRM, CAS management, and other content security services such as monitoring and enforcement. The key is understanding how this potentially complex environment fits together, and which components will do the right job for each individual customer’s content model and distribution environments – and communicating this to them clearly and effectively.

The mission-critical foundation that underpins this entire process is independent, agnostic advice based on expertise, operational environment analysis and business intelligence. In short, when looking to implement a content security environment, make sure you consult a proven expert who won’t just sell you what’s on their particular van.

Finally, the report touches on platforms and providers, and the role that they can play in anti-piracy efforts; this importance cannot be overstated, and the efforts that large-scale platforms such as YouTube already undertake are to be applauded. Each dedicates significant resources to anti-piracy measures, and the ability to engage and collaborate with them is vital to the monitoring and enforcement services operated by companies such as Friend MTS. As before, work remains to be done, but increased collaboration between platforms, ISPs and other service providers continues to reap rewards. As an example, a recent program of close data collaboration between a major ISP and Friend MTS led to a 90% reduction in illegal streaming of certain content on their platform, and it is this type of collaboration and business analysis that continues to be the way forward in the fight against content piracy.

The IABM and Axinom report has some obvious takeaways for content security providers, and I’d like to thank both for their work in producing it. The research defines some clear goals – most notably greater flexibility and a drive to reduce complexity – and although we know that we’ve some way to go, I am happy to say that we’ve already come a long way towards achieving them.