Matrix – The secret to ad sales success in a changing media landscape

Mark Gorman, CEO, Matrix

The media landscape is constantly evolving, and businesses that want to succeed in advertising need to be able to adapt to change. One of the biggest challenges facing media organizations today is how to manage multiple revenue streams. With the rise of FAST (Free, Ad-supported Streaming Television), businesses need to be able to sell advertising across a variety of channels, including linear TV, digital, and streaming.

The Importance of Unification

One of the keys to success in multi-revenue stream advertising is unification. This means bringing together all of your data and processes into a single platform. This allows you to have a single view of your customers, your inventory, and your sales opportunities. It also makes it easier to automate your sales processes and free up your sales team to focus on more strategic tasks.

There are two main approaches to unification:

  • The generalist approach: This involves using a CRM system that can be customized to meet your specific needs. Implementing it tends to be more expensive, resource-hungry and time-consuming than the specialist approach.
  • The specialist approach: This involves using a platform that is specifically designed for media ad sales. This is a good option if you have a large business or complex needs. Specialist, industry-specific platforms save you time and money and are quicker to set up.

The Case Study

A leading multi-revenue stream provider used a specialist media ad sales platform to improve their operations. The company had been using a generalist CRM system, but it couldn’t keep up with the demands of its business. They switched to a specialist platform, and as a result, they were able to:

  • Unify their data and processes into a single platform.
  • Automate their sales processes.
  • Free up their sales team to focus on more strategic tasks.
  • Increase their sales revenue.

The Takeaway

The case study shows that unification is essential for success in multi-revenue stream advertising. If you want to stay ahead of the competition, you need to invest in an industry-specific platform that can help you unify your data and processes.

Here are some additional tips for success in ad sales:

  • Focus on data-driven insights: Use data to understand your customers, your inventory, and your sales opportunities. This will help you make better decisions about where to allocate your advertising budget.
  • Automate your sales processes: Automating as much of your sales process as possible will increase efficiency, reduce errors and redundancies, and increase the speed from prospect to sale.
  • Build relationships with your customers: This will help you understand their needs and create advertising campaigns that are relevant to them.
  • Be creative: Don’t be afraid to be creative with your advertising campaigns. This will help you stand out from the competition.

By following these tips, you can increase your chances of success in ad sales.

Matrix Solutions is a forward-thinking technology company that empowers the media ad sales world with intelligence, technology, and expertise. It provides the technology backbone for the end-to-end workflow for sales organizations transacting in the media marketplace. Its flagship solution, Monarch, is the only global ad sales platform built specifically for media, delivering the CRM and business intelligence necessary to optimize inventory. Matrix manages more than $13 billion annually in media ad revenue, has over 10K users, maintains a renewal rate of over 95%, and has founded the annual Media Ad Sales Summit and Media Ad Sales Council (MASC) – both of which bring together industry leaders to advance the future of media ad sales. For more information, please visit matrixformedia.com.

Marquis – 2nd generation digital migration – if it were easy, everyone would do it!

Paul Glasgow, Marquis

Many years ago, digitization offered a panacea: a mechanism to rid the world of analogue and proprietary digital video tape formats and make content more easily accessible and exploitable. Using supposedly non-proprietary encoding schemes, the content became independent of the physical media, so future migrations would be easy. Robotic data libraries and control software automated many processes, removing the need for many staff. Carefully annotated and indexed content using new DAM systems would make assets inherently exploitable, watermarking would offer protection, and early speech-to-text processing would make for the richest set of metadata.

But was this the expected panacea? Well, not entirely. Digitization realized lots of benefits, but it didn’t all work out as anticipated, introducing several unintended consequences and risks for subsequent digital migrations.

Damn vendors…

DAM vendors have become the biggest point of risk – the broadcast media market is not that big and everyone wants something different. The result is DAM systems designed for only a few use cases – perhaps for a single big client with a bespoke workflow. However, a DAM system originally designed for a digital archive library may not lend itself well to use in transmission or production, and vice versa.

Think of all the DAM vendors who have ceased trading or have been acquired by trade or a client and then disappeared. Migrating from a legacy DAM system is unlikely to be trivial; it can throw up issues that look simple but turn out to be complex problems. As an example, let’s say you’ve migrated to a new DAM system. You search for some video items and it produces different search results from the original system. Have assets become orphaned, never to be seen again?

Perhaps the simple answer is: don’t buy a DAM system – build it! Ramp the development team up, capture the requirements, develop a solution, deploy it, then ramp the team back down again. This works well for several years, then new codecs come along, along with new OS versions and security patches, and there’s soon no way to keep up. The original developers have long gone and the few left plan to retire, while holding the keys to the castle! As it’s a self-build, there is no documented API, since the original design never considered migrating away from this proprietary system.

Rise of the PAMs

A PAM is a Production Asset Management system that manages live ‘work-in-progress’ production data (unlike a DAM, which manages finished content). However, a PAM was never intended to become a permanent repository. It doesn’t translate or migrate well to a DAM environment since its data hierarchy is production-data-centric, it may have had data fields added on a per-production basis and is not a carefully managed and structured taxonomy. The result is a PAM that may be many years old and holds business-critical production information, yet the system may have become obsolete. However, there’s no way of migrating away from the system without losing valuable production information, since this can’t be represented appropriately in a DAM system.

Who needs standards anyway?

Standardizing codecs and wrappers has always been an industry ambition but the truth is that everything is a moving target and always will be. Early digital codecs were low quality, inefficient and often required proprietary chips to encode in real time. Some codecs were optimized for acquisition, post-production, transmission and streaming, and many were proprietary to different vendors; often called ‘de facto’ standards.

There’s also a problem with existing standards; different vendors can have different interpretations of a ‘standard’ or can be selective in which parts of the standards they implement. So an archive could contain media that notionally conforms to a standard but that is unsupported in another system with justifiable claims to support the same standard. So, the two systems cannot inter-operate.

Of course, organizations such as the EBU and SMPTE have made valiant efforts to create standards. But it’s become increasingly difficult as change continues to accelerate; manufacturers have differing agendas which makes ‘true’ standardization almost impossible. With some exceptions, wrapper and metadata standardization is still ‘out in the wild’, since people have differing requirements.

Don’t forget audio  

Audio also gets in the way of the perfect standard, with its own set of issues: number of channels, surround sound, Dolby Atmos, etc. Of course, audio will carry its own metadata, but often AAF (Advanced Authoring Format) files are also required to integrate audio post-production tools. What do we do with those production tools – will they also still work in future?

Decline of LTO

Migrating the 1st generation behemoth robotic digital libraries – which are often LTO data tape based – has become a real issue. LTO systems were always scaled for ‘normal’ operation, with enough drives and slots, plus the robotics needed to pick and place data tapes fast enough. Often the software wasn’t conventional HSM (hierarchical storage management) software either. Production video files tend to be large and can span more than one data tape, and partial file restore is often needed – for example, pulling a one-minute section out of a one-hour program – to accelerate transfers by avoiding moving the whole file.

So, let’s say we have an old robotic tape library containing 4PB of data that needs to be migrated. The first thing to note is that it’s most likely still in use, so perhaps only half of the drives are available for the migration. The remaining drives, which are probably coming to the end of their lives, are going to be hammered and in continuous use, so drive failure rates could be higher than anticipated.
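
To put that in perspective, here is a back-of-the-envelope throughput estimate. The drive count and the roughly 160 MB/s sustained read rate are assumptions for illustration, not figures from a specific library; real-world yield will be lower once robot picks, spanned files and retries are factored in.

    # Back-of-the-envelope migration estimate. Drive count and the ~160 MB/s sustained
    # read rate are illustrative assumptions; real yield is lower once robot picks,
    # spanned files and retries are factored in.
    LIBRARY_PB       = 4      # data to migrate
    DRIVES_AVAILABLE = 6      # say, half of a twelve-drive library
    DRIVE_MBPS       = 160    # assumed sustained read rate per drive

    total_mb = LIBRARY_PB * 1_000_000_000          # 1 PB = 10^9 MB in decimal units
    seconds  = total_mb / (DRIVES_AVAILABLE * DRIVE_MBPS)
    print(f"~{seconds / 86_400:.0f} days of continuous, error-free streaming")  # roughly 48 days

Even under these generous assumptions, the job ties up the remaining drives for weeks, which is why drive wear, scheduling and yield planning matter so much.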

Then there is the proprietary robotic library control system, which made complete sense when new but is now a huge bottleneck, since the API may be too slow to poll for data. Also, the original software vendors have all been acquired, and knowledge and technical support have become hard to come by.

There’s often complacency when considering migrating digital archives. Let’s say a legacy production system is going to be replaced with a modern production system; it’s highly unlikely that the new system will be backwardly-compatible with all of the legacy content. So, it’s important to determine the potential migration yield and how much the process can be automated. If the library contains petabytes of data and millions of assets, the migration yield could be fundamental to the success of the project.

So how can Marquis help?

First, we don’t make MAM, PAM or DAM systems, or sell storage systems. What we have is the migration technology and years of experience to enable and de-risk automated migrations, working with vendors, partners, service providers, SIs and clients to make them successful. Our metadata translation capabilities have been used by the biggest media enterprises, and we’re also the only company that has successfully archived a PAM system for a major studio so it can still be queried.

We have a vendor-specific codec interoperability library and API library that goes back 20 years, which no other vendor has. The original vendors may be long gone, but their content may still sit in the library alongside the legacy system – now end of life, yet still in use. These capabilities are fundamental to automating a migration.

The best plan is to bring us in early, since we can analyze content and metadata and work out how best to migrate it, such as whether to re-wrap, transcode, scale or de-interlace. We can test sample files remotely or in our labs and can pre-determine policies on how to automatically migrate everything. We know how to integrate with legacy archive APIs and, if needed, directly access the database if the API is too slow. We can work out how to interoperate legacy content with new vendors, or even come up with a mezzanine framework for interoperability.
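
As a rough illustration of what pre-determined, per-asset policies can look like, the sketch below routes each asset to a re-wrap, de-interlace or transcode step and hands off to ffmpeg. The asset fields and encoder settings are assumptions for the example, not Marquis’ tooling, which covers far more cases (audio layouts, scaling, metadata carry-over and so on).

    # Illustrative policy runner (an assumption, not Marquis' tooling): route each asset
    # to a re-wrap, de-interlace or transcode step and hand off to ffmpeg.
    import subprocess

    def migrate(asset):
        src = asset["path"]
        dst = src.rsplit(".", 1)[0] + "_migrated.mov"
        if asset["codec_supported_by_target"]:
            # container change only, no re-encode and no quality loss
            cmd = ["ffmpeg", "-i", src, "-c", "copy", dst]
        elif asset["interlaced"]:
            # de-interlace, re-encode the video, copy the audio as-is
            cmd = ["ffmpeg", "-i", src, "-vf", "yadif", "-c:v", "libx264", "-crf", "18", "-c:a", "copy", dst]
        else:
            # plain transcode to a codec the target system supports
            cmd = ["ffmpeg", "-i", src, "-c:v", "libx264", "-crf", "18", "-c:a", "copy", dst]
        subprocess.run(cmd, check=True)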

Our technology runs on-prem and in-cloud, so migrations – whatever they may be – are easy. We also license our technology just for the migration period, so there are no sunk costs.

Finally, we can also scope and mitigate risk at the pre-tendering stage. Since we know what to look for, we can ensure risks are identified and fixes are pre-determined. The outcome will always be a more successful migration project, which is much more likely to finish on time and on budget.

MainConcept – Can codecs improve ad-engagement?

Thomas Kramer, VP Strategy and Business Development, MainConcept

With big players in the industry leading the way, ad-supported subscriber growth has become a key strategy for content owners and broadcasters, with many exploring this offering to reach users in new markets and grow subscribers. Global AVOD revenue is forecast to reach $70 billion by 2027, and while the concept of ‘free’ content is not new, ad-tech hasn’t kept pace with the pixel race for video quality. While technology limits the quality of ad delivery, broadcasters continue to miss out on the full potential of ad revenue. So, what is the current state of ad technology? And can codecs help broadcasters meet consumer expectations while also improving ad-engagement?

The rise of ad-supported offerings

Consumption habits are changing yet again, and ad-funded services such as FAST, AVOD and ad-supported tiers are rapidly growing in popularity. Over the last year, many big-hitters within the industry have adapted their business models to offer ad-supported services. Netflix has led the way with its ad-supported tier of subscription, enabling users to access its content for less money. Disney+, Max (formerly HBO Max) and others have followed, and rumor has it Amazon Prime Video may join the mix.

With customers increasingly turning towards ad-supported content, ensuring that the ads are impactful for the right reasons continues to be important. Repeated exposure to the same, irrelevant ad can worsen the viewing experience and create negative outcomes for the brand. Passive consumption is fine, providing viewers are kept engaged. One way to do this is to capture viewers’ attention by delivering ads which align closely with their personal interests. Targeted ads are nothing new – we are well accustomed to them during our daily scrolls through social media and the internet at large. However, broadcasters now want in on the action, with personalization promising to improve engagement and increase return for advertisers, thereby maximizing the value of ad space.

The importance of codecs within broadcast

Codecs are a critical part of the media supply chain. Video files are so large that, without codecs to compress and decompress them, transmission, storage, and distribution of video as we know it would be impossible. Codecs also play a vital role in ad insertion, ensuring that the right ad content is delivered, without issue, at the right time. With the right codecs, the video compression process can be simplified, leading to a streamlined ad-placement experience and enabling broadcasters to deliver both enhanced and targeted ads.

An increasingly popular method of ad insertion is overlaying the ad over the main content. This type of ad insertion can help keep viewers engaged for longer because the content is not interrupted, as it is with traditional ad breaks. Ads can be placed over the content in a number of positions. Choosing where you want your ad to be placed, and how it needs to interact with the main video, influences which codec is required. In general, newer codecs like HEVC/H.265 and VVC/H.266 offer the most flexibility and quality. Tile ad insertion allows a fixed, defined area of the video to host the ad and is best suited to HEVC and VVC. Multilayer insertion uses two bitstreams – one for the content layer and one for the ad layer – for which VVC is ideal. Slice-based insertion creates ad space determined by the number of rows used and is ideal for AVC/H.264.
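
Distilled into code, that guidance amounts to a simple lookup from insertion technique to codec family. The table below is an illustrative summary of the paragraph above, not a MainConcept API.

    # Illustrative defaults distilled from the guidance above, not a MainConcept API.
    RECOMMENDED_CODECS = {
        "tile":       ["HEVC/H.265", "VVC/H.266"],  # fixed, defined region hosts the ad
        "multilayer": ["VVC/H.266"],                # two bitstreams: content layer + ad layer
        "slice":      ["AVC/H.264"],                # ad space defined by rows of slices
    }

    def pick_codec(insertion_method: str) -> str:
        return RECOMMENDED_CODECS[insertion_method][0]

    print(pick_codec("tile"))   # -> HEVC/H.265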

While ad personalization is possible using traditional ad-insertion techniques with ad-breaks interrupting the content, delivering overlaid ads at scale that are also personalized to users presents a huge technical challenge.

Server-side vs client-side insertion

Where the ad is inserted within the workflow can influence the success of ad campaigns. A pain point in the provision of ads is whether the ad is actually seen upon delivery. Ad blockers and legacy devices are both common roadblocks which prevent broadcasters from delivering ads to consumers. Server-side ad insertion can alleviate this pain point.

Client-side insertion is routinely used for ads as it reduces the need to manipulate content on a large scale. It inserts ad content at the point of delivery, which avoids video processing complexities. However, some of the largest challenges around the business case for ad-supported content are caused by this practice. Ad-blocking technologies can identify the ad and prevent the user from seeing the content, while ad delivery to legacy devices can be hampered by technological barriers. The ever-growing number of different consumer devices requires an immense engineering effort to guarantee a seamless ad experience for all possible viewers.

Although more technologically complex, server-side ad insertion delivers significant benefits to broadcasters and advertisers, allowing them to increase the impact of their ad campaigns. These benefits are possible due to the way in which the ads are packaged and delivered to the viewer. To deliver server-side ads, video content must be encoded together with the ad, which becomes most challenging when you consider delivering this at scale. This approach enables the content to bypass ad blockers, ensuring that ad content is displayed correctly for the agreed duration. But beyond the technical benefits of server-side delivery sit creative ones too: it allows the highest level of ad immersion, enabling advertisers to present personalized ad content to viewers.
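
One widely used way to realize server-side insertion for HTTP streaming is manifest manipulation: per-viewer ad segments, encoded to match the content’s profiles, are spliced into the media playlist before it reaches the player. The sketch below illustrates that general idea for HLS; the playlist layout, segment names and break position are assumptions, not a description of MainConcept’s implementation.

    # Assumed playlist layout and ad URIs; illustrative only.
    def stitch_hls(content_segments, ad_segments, break_index, target_duration=6):
        lines = ["#EXTM3U", "#EXT-X-VERSION:3",
                 f"#EXT-X-TARGETDURATION:{target_duration}", "#EXT-X-MEDIA-SEQUENCE:0"]
        for i, seg in enumerate(content_segments):
            if i == break_index:                      # open the ad break for this viewer
                lines.append("#EXT-X-DISCONTINUITY")
                for ad in ad_segments:
                    lines += [f"#EXTINF:{target_duration:.1f},", ad]
                lines.append("#EXT-X-DISCONTINUITY")  # return to the main content
            lines += [f"#EXTINF:{target_duration:.1f},", seg]
        return "\n".join(lines + ["#EXT-X-ENDLIST"])

    playlist = stitch_hls(
        content_segments=[f"content_{n:03d}.ts" for n in range(4)],
        ad_segments=["ad_user123_000.ts", "ad_user123_001.ts"],   # personalized per viewer
        break_index=2,
    )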

However, the challenge lies in the delivery; personalization creates a huge technical challenge, especially during server-side insertion. As with any other content delivery, the content needs to be encoded, in this case alongside a personalized ad for each user. Historically, codec limitations have prevented this from being done at such scale, limiting ad revenue opportunities for broadcasters. With codecs like HEVC and VVC, these limitations are eliminated.

Codecs and immersive advertising

To really maximize monetization opportunities, broadcasters need to look at improving customer engagement with ads. This will involve ensuring ads cannot be skipped, ensuring good levels of personalization, as well as finding new ways to deliver them in a more immersive way. One way to achieve this will be through using overlay technology to keep ads in the main video as the main content continues to play.

LTN – How to secure your IP-based future the simple way

Rick Young, SVP, Global Products

The unbelievable pace with which our industry is changing requires media companies to think ahead and develop robust strategies that help them stay ahead of the curve. As audiences consume content in new and ever-changing ways, there are now many tough challenges and exciting opportunities that all media companies need to be ready for. We are seeing more and more organizations evolving their workforce and workflows to survive and thrive.

Future-proofing a video distribution strategy does not have to be complicated. Here are four simple steps to consider in today’s constantly shifting business environment.

Step 1: Getting comfortable with IP-based video distribution

Every day, Tier 1 organizations harness IP transport for the contribution and distribution of high-value content feeds — both for full-time channels and premium live events.

Fully managed IP-based video distribution has been tested and proven for the highest-value scenarios. Major media organizations, broadcast networks, and consulting firms have run extensive testing, evaluation, and proof-of-concept projects to explore managed IP transport as a reliable, flexible, and scalable alternative to legacy satellite workflows — meeting stringent criteria such as:

  • Reliability
  • Quality
  • Latency
  • Failover and redundancy
  • Disaster recovery

Step 2: Develop a long-term media strategy

Evaluating your organization’s future needs is critical. A robust strategy must incorporate smart technology choices that provide the flexibility to adapt quickly and easily harness emerging opportunities in the future. This approach lays the foundation for unlocking growing revenues, including via versioning of live event content, targeted advertising, and achieving scale.

Live sports is a prime example, as it draws in viewers and keeps them signed up for a service. It’s a critical piece of an overall content strategy that reduces churn while driving value. Forward-thinking content owners should aim for a more intelligent, agile, and flexible distribution model. Leaving complex business rules to basic edge devices like Integrated Receiver Decoders (IRDs) is too limiting and inflexible. With live sports, the stakes couldn’t be higher. Efficiently delivering an ever-increasing amount of live content, with more versions of that content, and enabling 4K and UHD quality, requires reliable, scalable video transport infrastructure.

Step 3: Tailored planning for a seamless migration

As the old saying goes: “Fail to prepare, prepare to fail.” When it comes to planning, this couldn’t be more important. But, there isn’t ‘one way’ or ‘the right way’ to kickstart an IP migration. Different organizations and channel types will require varied processes, and technology providers need to take a tailored approach to ensure a seamless transition.

To achieve this, media companies need to consider the following:

  • Deploying IP as a backup mechanism in initial phases
  • Running a dual illumination period to allow an extended transition for affiliates, endpoints, and partners
  • A faster launch and transition to maximize time and cost efficiencies

Step 4: The right partner for an IP-based future

Media companies shifting to IP need assurances they will have a stress-free changeover and ongoing reliability as new services drive new opportunities. Working alongside a trusted provider with proven expertise, end-to-end management, and always-on support gives content owners the peace of mind to make the full IP transition a simple, frictionless process.

It’s vital not to underestimate the moving parts required for a strong project management organization. It may include internal training, affiliate coordination, hardware deployment, and software configuration, along with the need to deploy or manage internet service provider (ISP) services. On top of that, in today’s hybrid media ecosystem, media customers need interoperable solutions that can slot in seamlessly to existing workflows. They must also integrate with other protocols and standards and scale as necessary – with near-zero impact on operations and minimal CapEx investment.

Executing an IP migration requires a great deal of preparation. Still, technology partners exist that do this every day so that media leaders can stay focused on where they add the most value — content, brand, audience growth, sales, and affiliate deals across all platforms.

A technology services provider will handle all the complicated headaches as they have the experience, knowledge, and resources. Simply put, it makes good sense. In 2023, robust partnerships will be critical for content owners looking to re-imagine global media distribution for high-value live content.

Jigsaw24 Media – Fringe benefits: post houses, soaring power fees and sustainability

How post houses can reduce the impact of energy hikes & improve environmental credentials without compromising creativity.

Judging by the number of trade publication articles and speaking sessions that focus on the topic, you’d think that the entire media and entertainment industry is focused on cutting carbon costs. But is that really the case?  True, broadcasters have set ambitious targets to reach net zero, the streaming giants are following suit, and they’re putting pressure on production companies to reduce their environmental impact and include sustainability messaging in the content they produce. Carbon emissions have even become a critical consideration in planning new studio builds. But not every part of the production chain is putting the environment first.

To date, most of the industry’s sustainability efforts have focused on production, which makes sense because it’s the most carbon intensive stage of the content supply chain. Production companies are also direct suppliers to the broadcasters and streaming companies that must prove they’re making good on their carbon neutral pledges. But broadcasters often don’t deal directly with post-production facilities, and it looks like post processes are yet to undergo significant changes in the name of sustainability.

A question of priorities … and fringe benefits

To their credit, plenty of post facilities have adopted general corporate sustainability initiatives, like switching to green energy providers, recycling waste and introducing electric vehicle incentive schemes. But, when it comes to the actual processes of post-production – designing facilities, choosing technology and making workflow decisions – sustainability doesn’t seem to be a deciding factor for most facilities or production clients.

It’s an attitude reflected in the membership of organizations like Albert, in the number of post houses that hold the DPP’s Committed to Sustainability mark, and in the less-than-enthusiastic responses to our LinkedIn poll asking post houses how important sustainability is in their organization. And we can understand why. Post-production is a high-performance environment and a high-pressure one. Machines need to be ‘always on’ and ready to perform at maximum output – characteristics that are intrinsically at odds with most sustainability advice. As the DPP’s Committed to Sustainability programme manager, Abdul Hakim, points out, the industry’s attitude to sustainability has also changed over time; he says, “When we launched the programme in 2019 it was a first priority for many of our members and this was maintained during the pandemic when a lot of improvements were made. But sustainability just doesn’t have the limelight that it did before because we’re going through an economic crisis and the priority is now about being able to do more with less money.”

But increasing efficiency can have fringe sustainability benefits. Specifically, cutting down on power consumption to combat rising energy costs will automatically reduce your organization’s carbon emissions. So, the question becomes – how can post houses cut power costs without compromising on their creative output?

Cutting power costs in post

The biggest culprits in post house power consumption are undoubtedly the double act: storage and cooling. As Jigsaw24 Media’s head of innovation, Chris Bailey, wryly observes, “Media isn’t insignificant in terms of storage needs,” and hard drive arrays inherently use a lot of power because they’re always spinning. The higher the wattage of the power supply, the more heat is generated, and the more air conditioning is needed – which further increases the power supply requirements. Here are some ways that post houses can break this expensive cycle:

Replace aging infrastructure

While it may seem like sweating old hardware saves money, you could be spending more than you save on increased power costs for legacy technology. EIZO claims that switching to the EIZO FlexScan EV2795 can cut the energy costs of running a simple computer monitor from £150 to just £11 per year. As they say, every little helps and small efficiencies like this can quickly add up to significant power and carbon savings across your organization.
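
Scaled across a facility, the arithmetic is straightforward; in the sketch below only the £150 and £11 figures come from the EIZO claim above, while the 50-monitor fleet is an assumed example.

    # Only the £150 and £11 annual running costs come from the EIZO claim above;
    # the fleet size is an assumption.
    PER_MONITOR_SAVING_GBP = 150 - 11   # annual saving per monitor
    MONITORS = 50                       # assumed fleet size

    print(f"£{PER_MONITOR_SAVING_GBP * MONITORS:,} saved per year across {MONITORS} monitors")
    # -> £6,950 per year, before any knock-on reduction in cooling load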

When it comes to specialist post hardware, the focus seems to be on increasing performance over reducing energy costs – like the latest version of the HP Z8, which retains two processors but provides double the performance of its predecessor in the same box.

Optimize your machine room design

Most machine rooms in Soho (and in post houses across the UK) are in converted basements, repurposed storage areas, or whatever other space is available. Our facilities are often chosen based on form rather than function (we’ll pick a Victorian teahouse over a datacenter every time) and we pay the price in the inflated cooling costs of our inefficient machine rooms. Badly maintained and managed MCRs exacerbate this situation as dirty and dusty machines run hotter and take more power to cool.

Cut down on data

Digital workflows, reality formats and ever-increasing resolutions have combined to create a data explosion in media and entertainment. Facilities are expected to store all this data – and keep it readily available – throughout what can be a lengthy post-production process, adding to the storage and cooling conundrum. Post houses that work with production teams to reduce the amount of footage coming into the facility – and carefully manage the archival and retrieval of high-resolution media – will also benefit from reduced power costs associated with lower storage requirements.

Reduce your reliance on on-prem hardware   

There are some issues with the previous points. Obviously, you can’t replace hardware if you’re mid-term on a repayment plan and, even if you invest in the most efficient tools, with the speed of tech advancements there’s a good chance that any hardware you buy may become outdated before you’ve finished paying for it. Similarly, while new facilities can ensure that efficient machine rooms are factored into their plans, there’s a limit to how much established post facilities can do to improve existing MCRs, and post houses can’t always influence production workflows to reduce data storage and duplication.

Perhaps the best way for post houses to cut power costs without compromising on creativity is to simply reduce the amount of hardware in your facility. One of the ways to do this is by migrating to a hyperconverged infrastructure. Hyperconvergence provides enormous efficiencies in shared power supply and shared GPU – just three nodes with two power supplies can run a total of 15-20 computers or offline machines. And, because you cut down on the amount of hardware in your MCR, you save on both the power to run and cool those machines. Hyperconverged post facilities also need a smaller physical footprint because suites that were previously dedicated to one function now become multipurpose – and a smaller footprint normally means lower power costs for everything from lighting to heating. It’s the smart alternative to the public cloud for running lots of machines that aren’t too process intensive.
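
A very rough sense of the potential saving can be sketched from that node-to-workstation ratio. Every wattage and tariff figure below is an assumption for illustration; the only input taken from the article is the three-nodes-for-15-to-20-machines ratio.

    # Rough consolidation estimate based on the node-to-workstation ratio quoted above.
    # Wattages and tariff are assumptions; substitute measured figures from your own MCR.
    WORKSTATIONS       = 18     # machines replaced (the article quotes 15-20 per 3 nodes)
    WORKSTATION_WATTS  = 400    # assumed average draw per dedicated workstation
    NODES              = 3
    NODE_WATTS         = 800    # assumed draw per hyperconverged node
    TARIFF_GBP_PER_KWH = 0.30   # assumed electricity tariff

    before_kwh = WORKSTATIONS * WORKSTATION_WATTS * 24 * 365 / 1_000
    after_kwh  = NODES * NODE_WATTS * 24 * 365 / 1_000
    print(f"~£{(before_kwh - after_kwh) * TARIFF_GBP_PER_KWH:,.0f} a year, before cooling savings")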

While migrating post to a hyperconverged infrastructure requires an initial investment, it pays off in the long term – both in saved power costs and improved sustainability credentials. But post houses can also reap the benefits of hyperconvergence on an ad-hoc basis through managed virtualized services including our Editorial as a Service solution.

Post houses may currently be flying under the sustainability radar, but attention will inevitably turn to this stage of the content supply chain. Savvy facility owners are already preparing for that eventuality and the first step is to measure and reduce your energy consumption. Whether your motivation for doing so is to improve your green credentials for the greater good or just to cut down on your overheads is your business.

IMSC-Rosetta: A new era for subtitle formats – bridging broadcasting and streaming

Rob Cranfield, Director Media Supply Chain Technology, Warner Bros. Discovery 

In the realm of media, delivering subtitles consistently across various platforms has posed challenges. Warner Bros. Discovery (WBD) identified the pressing need for an innovative subtitle format, one that seamlessly suits both conventional TV broadcasts and contemporary streaming services. Historically, subtitles have been fragmented, existing in diverse proprietary and generalized formats. However, none of these formats proved universally fitting for all content types and languages.

In response, WBD partnered with Yella Umbrella, a company with proven extensive experience in subtitle formats and linguistic support going back over 30 years. Their joint endeavor aimed to create a fresh subtitle file format that resolves these complexities. After evaluating multiple options, they selected IMSC V1.2 as a foundational framework. While IMSC V1.2 provided the framework, its inherent variability posed challenges in achieving consistent and reliable outcomes. The heart of the matter was devising a method to harness TTML’s potential while enforcing a standardized representation of subtitle text, timing, and style information, thereby eliminating the complexity of adapting generic TTML files for diverse purposes.

The outcome of this collaboration culminated in the IMSC-Rosetta subtitle file format. This innovation draws inspiration from IMSC V1.2, while streamlining the structure for clarity and uniformity. This rectifies challenges tied to XML-reliant formats, well known for their intricacy and incompatibility. IMSC-Rosetta champions ease of use, rendering parsing, modification, and creation accessible without necessitating mastery of exhaustive technical minutiae of XML and TTML.
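
To make the parsing point concrete, here is a minimal Python look at generic IMSC 1.2-style TTML: timed <p> elements carrying begin/end attributes and text. The sample markup is illustrative only; it does not show IMSC-Rosetta’s specific constraints, which are defined in the public specification.

    # Generic IMSC 1.2-style TTML, not the IMSC-Rosetta profile itself; see the public
    # specification for the profile's exact constraints.
    import xml.etree.ElementTree as ET

    TT_NS = {"tt": "http://www.w3.org/ns/ttml"}

    sample = """<tt xmlns="http://www.w3.org/ns/ttml" xml:lang="en">
      <body><div>
        <p begin="00:00:01.000" end="00:00:03.500">Hello, world.</p>
        <p begin="00:00:04.000" end="00:00:06.000">A second subtitle event.</p>
      </div></body>
    </tt>"""

    root = ET.fromstring(sample)
    for p in root.findall(".//tt:p", TT_NS):
        text = "".join(p.itertext()).strip()
        print(p.get("begin"), "->", p.get("end"), ":", text)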

IMSC-Rosetta files showing mixed English, Arabic and Japanese including vertical presentation in the Stellar editor from Yella Umbrella.

IMSC-Rosetta retains the full spectrum of features seen in alternative subtitle formats, encompassing color, outlines, boxing, and text placement. Its distinctiveness lies in its definitive construction, facilitating seamless translation across disparate formats and languages via the IMSC-Rosetta standard. For entities adhering to proprietary formats, transitioning to IMSC-Rosetta guarantees minimal feature loss and significantly reduced development efforts, in contrast to the more intricate IMSC or TTML routes.

IMSC-Rosetta’s versatility extends across a range of applications, serving as a solution for authoring, delivery, interim storage, and archiving. It stands out for catering to translation requirements, preserving the nuance created by subtitlers – often compromised during conversion processes.

The necessity for IMSC-Rosetta emerged from observations concerning the slow adoption of TTML. Despite its merits, complexities in implementation spawned an array of TTML-based formats that operated effectively as proprietary standards due to the demanding nature of implementation and the variable interpretations of standards. Existing implementations often overlooked subtleties that elevate the viewer’s experience, as compatibility with distinct media and streaming platforms took precedence.

IMSC-Rosetta emerges as a remedy for these disparities, streamlining development while setting forth a pathway to quality within a comprehensible, standards-compliant subtitle file format. Unlike many TTML-derived formats with limited reusability, IMSC-Rosetta catalyzes change by offering seamless conversion to ‘proprietary’ IMSC formats.

This is especially timely ahead of the roll-out of Max, WBD’s streaming service that is currently live in the U.S. and is launching in European countries early next year with launches also planned for LatAm and Asia-Pacific.

Commencing September 8, 2023, IMSC-Rosetta will be accessible to all. The complete specification, coupled with samples, example source code, and a public wiki, will be accessible at https://github.com/imsc-rosetta. Queries, observations, and contributions are warmly welcomed through the issue-raising channel on the platform.

In an industry perpetually evolving, IMSC-Rosetta provides a method to store and deliver consistent, high-quality subtitling across languages and distribution channels.

By providing a single universal standard format, IMSC-Rosetta helps the whole media industry supply quality localized content while reducing production costs.

IMSC-Rosetta is the result of over 18 months work primarily by Simon Hailes of Yella Umbrella and Robert Cranfield of WBD, with input and review from various industry partners.

Imagine Communications – During the transition from ground to cloud, a hybrid approach to playout offers the best of both worlds

Brendon Mills, General Manager, Playout & Networking, Imagine Communications

While the shift from ground to cloud playout is well under way, there are several roadblocks on the path to a fully cloud-based infrastructure that will leave the broadcast industry in a transitional phase for years to come. Here, we delve into those obstacles and propose a hybrid approach for this interim period that allows broadcasters to continue utilizing their existing on-premises equipment, while reaping the benefits of cloud technology.

A quick primer on traditional on-premises and cloud-based playout

With traditional on-premises video playout, dedicated software and hardware — servers, storage devices, controllers, and other equipment — are installed and maintained within a broadcaster’s facility. These components handle the various aspects of content playout, including ingest, storage, merging multiple video feeds into a single stream, ad insertion, scheduling, encoding, and transmitting media content via terrestrial, satellite, or the internet. The entire playout workflow, from content preparation to transmission, requires an extensive physical infrastructure and a large team of dedicated staff to operate it.

Conversely, with cloud playout, the functions traditionally performed by physical hardware are virtualized and hosted in the cloud on platforms like Amazon Web Services, Microsoft Azure, or Google Cloud. By leveraging the cloud, broadcasters can streamline their playout workflows, dramatically reduce infrastructure costs, and easily scale their operations to accommodate changing demands — all while enjoying the benefits of remote collaboration, enhanced accessibility to content from multiple locations, and exceptional efficiency in content generation.

Cloud-based playout isn’t without its drawbacks

With the introduction of cloud technology — and the considerable excitement it generated — it’s easy to see why many in the broadcast industry assumed that playout would quickly transition to the cloud and render on-premises systems obsolete. However, while cloud-based playout systems offer numerous advantages over their ground counterparts, they aren’t without their own drawbacks.

One area of particular concern is the increased latency associated with cloud-based operations. Transmitting data over networks introduces latency, as does processing at remote cloud servers, encoding and decoding processes, and the monitoring and feedback loop. And for broadcasters that require seamless live feeds, even a delay of a fraction of a second is considered unacceptable.

Another issue is that cloud-based playout systems rely heavily on stable and reliable internet connectivity. Any disruptions or outages in the internet connection can impact the ability to access and control playout operations. Especially in regions with unreliable internet infrastructure, this dependency introduces a level of vulnerability. Also, cloud-based playout systems often come with predefined features and configurations, which limit the level of control and customization that broadcasters have over their playout workflows.

Additionally, while cloud playout reduces infrastructure costs, it introduces other costs — primarily related to cloud service subscriptions and data transfers. Video content, especially in high resolutions like 4K, requires a substantial amount of data for transmission — a typical compressed channel operates at 15Mbps. To meet broadcasters’ standards for delivery without interruptions or glitches, service providers like AWS, Google Cloud, and Azure have to provide a flawless data pipeline, which comes with a hefty price tag.
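
The 15 Mbps figure translates into a substantial ongoing data volume per channel. The quick arithmetic below uses an assumed per-GB egress price purely for illustration; real cloud pricing varies by provider, region and commitment.

    # Quick egress arithmetic for a single channel at the 15 Mbps figure above.
    # The per-GB price is an assumed placeholder, not a quoted cloud tariff.
    MBPS          = 15
    EGRESS_PER_GB = 0.08                      # assumed $/GB; check your provider's pricing

    gb_per_day   = MBPS / 8 * 86_400 / 1_000  # Mbps -> MB/s -> MB/day -> GB/day
    gb_per_month = gb_per_day * 30
    print(f"{gb_per_day:,.0f} GB/day, {gb_per_month / 1_000:.1f} TB/month, "
          f"~${gb_per_month * EGRESS_PER_GB:,.0f}/month egress")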

Finally, most cloud playout offerings are based on OPEX-oriented subscriptions or SaaS cost structures. While it is possible to procure and capitalize the software licenses for cloud infrastructure, the compute, storage, and ingress/egress charges for cloud services are an ongoing and costly expense, especially for UHD content. The commercial terms for on-premises infrastructure are much easier to capitalize, and the software licenses tend to be perpetual as opposed to subscription or SaaS. In times of global economic stress, broadcasters can feel more comfortable capitalizing infrastructure costs and limiting ongoing OPEX charges.

The solution: taking the best from both worlds

For cloud playout, the technology to replicate all the critical requirements of on-premises systems is still evolving. So instead of trying to make the switch to a completely cloud-based infrastructure now, Imagine Communications recommends that broadcasters take a hybrid approach that combines on-premises equipment for essential channels and cloud technology for less complex operations. Enabling a gradual transition from one operational world to the next, this concept allows broadcasters to optimize their existing capital investments in on-premises infrastructure — taking advantage of the low latency, high reliability, and full operational control provided by ground playout.

At the same time, they can leverage unique benefits provided by cloud technology, including lowering the costs of disaster recovery. To ensure seamless continuity in the event of equipment failure, broadcasters operate channels with a backup channel running simultaneously. If a failure occurs, the feed is instantly switched to the backup without any noticeable interruption or loss of video frames. When conducted on-premises, such recovery measures necessitate duplicate investments. By leveraging the cloud, however, broadcasters can establish and operate the back channel at a reduced cost while maintaining the necessary redundancy for uninterrupted broadcasting.

Imagine goes one step beyond traditional disaster recovery by enabling pooled backup channels via the Aviator Orchestrator unified management platform and its multi-site capabilities. The ability to pool backup channels lowers the cost of licenses and hardware requirements typically used by traditional disaster recovery. This pooled backup feature is unique to Imagine and offers a cost-effective way to enable hybrid playout disaster recovery.
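
The pooling concept itself is easy to illustrate: rather than pairing every primary channel with a dedicated backup, a small shared pool is allocated to whichever channel fails first. The sketch below is a generic illustration of that idea, with made-up channel names; it is not the Aviator Orchestrator API.

    # Generic illustration of channel pooling: a small shared pool covers whichever
    # primary channel fails first.
    class BackupPool:
        def __init__(self, pool_channels):
            self.free = list(pool_channels)   # e.g. two cloud channels protecting ten primaries
            self.assigned = {}                # primary channel -> backup currently covering it

        def on_failure(self, primary):
            if primary in self.assigned or not self.free:
                return self.assigned.get(primary)   # already covered, or pool exhausted
            backup = self.free.pop(0)
            self.assigned[primary] = backup
            return backup

        def on_recovery(self, primary):
            backup = self.assigned.pop(primary, None)
            if backup:
                self.free.append(backup)            # return the channel to the shared pool

    pool = BackupPool(["cloud-backup-1", "cloud-backup-2"])
    print(pool.on_failure("channel-7"))             # -> cloud-backup-1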

For free ad-supported streaming television (FAST), which has been the hottest, headline-making acronym of the past couple of years, the cloud offers flexibility in content delivery that far surpasses on-premises systems. With cloud-based systems, all files — including pre-recorded shows and other content — are stored in the cloud, enabling seamless playback directly from the cloud itself. FAST channels can function autonomously with minimal playout monitoring — only requiring sporadic checks every two days or so — and allow broadcasters to extend their reach to a broader audience and increase revenue through additional advertising opportunities.

Cloud-based playout systems also offer broadcasters the opportunity to explore innovative channel concepts and specialized content delivery to provide a more immersive and tailored viewing experience for audiences. For instance, during major events like the Olympics, broadcasters can create pop-up channels that offer alternative camera views and unique perspectives. These channels can be set up quickly in the cloud and have a predetermined lifespan. Once the designated period is over, the channel can be easily deactivated. Additionally, broadcasters can create thematic channels — also known as specialty or niche channels — that focus on specific themes or content genres rather than offering a broad range of programming, providing a dedicated platform for viewers interested in a particular subject.

Looking ahead

Eventually, the broadcast industry will complete its transition to a fully cloud-based playout infrastructure — the pieces are already starting to fall into place. For one, the cost of data transmission is decreasing rapidly and will continue to do so. Furthermore, in recent years, technological advancements like adaptive bitrate streaming, advanced video transcoding and compression algorithms, faster and more stable network connections, and more have made significant progress in meeting the broadcast industry’s requirements for flawless video playback. However, the stage between on-premises and fully cloud-based systems is still likely to persist for the next five to 10 years.

One reason for this is that the on-premises solutions already purchased by broadcasters have lifespans that can range up to 15 years, and the gradual depreciation of these existing investments needs to be taken into consideration. Financially speaking, it isn’t practical for broadcasters to simply abandon this equipment overnight when it still meets their needs. Furthermore, it will take time for cloud-based technology to catch up to and align with the full capabilities offered by today’s on-premises solutions. Latency issues alone will require significant technological advancements to overcome. In the meantime, adopting a hybrid approach allows broadcasters to leverage the benefits of both on-premises and cloud solutions, optimizing their operations and paving the way for the future of television broadcasting.

G&L – Elasticsearch features for incident detection: are they worth considering?

Ivan Drobyshev, Business Data Analyst, G&L

Media content delivery generates a lot of logs. This is a fact well understood at G&L, since we facilitate the distribution of audio and video content, live and on-demand, for some major broadcasters and official bodies to end users. We know well that log data has no lesser commercial value than the content itself. Log misdelivery can lead to short-term profit losses for streaming and broadcasting service providers. These issues can affect advertising exposure assessment, long-term planning, and more. Providing accurate data and analytics alongside our core services is our dedication, our duty, and our bread and butter.

Maintaining log consistency stands as a critical task. With log sizes fluctuating, especially during soccer tournaments or any other major events, distinguishing between normal situations and incidents, such as indexing errors or misdelivery, becomes crucial.

A sizeable portion of the CDN logs generated while distributing our customers’ content is indexed into Elasticsearch 8.6.0, which incorporates two built-in features that hold promise for identifying consistency issues: Anomaly Detection (to identify an incident) and a classification model. We assessed both to determine their suitability for our customers’ needs.

First one out

Anomaly Detection didn’t make the cut due to its purely analytical (not predictive) purpose: to gain insights into the overall past picture. When configured properly, it seems to be “more focused” on sudden drops in figures rather than peaks, which is exactly what we need. Yet, the method lacked sensitivity: it identified discrete data points, not periods.

The Single Metric Viewer displaying the results of an Elastic Anomaly Detection job. Only the central portion is related to an actual incident! 

So, the Anomaly Detection tool from Elasticsearch may hint that some abnormal activity occurred at a specific moment. Whether that is true, and how long the incident lasted, is left to your manual check.

Machine Learning with Elasticsearch

The built-in ML functionality’s documentation doesn’t exactly unfold the red carpet of clarity, but at least it drops the model type hint: a gradient-boosted decision tree ensemble. That’s the only explanation we get. Feature rescaling? Class weighting? Who knows? Ah, the mysteries of proprietary magic! We do get a warning to avoid “highly imbalanced situations”, however. That’s our situation, with an obnoxious 10 to 0.47 class-to-class ratio, so let’s keep this warning in mind.

Now, let’s get down to business – metrics business. We wanted to minimize the number of missed incidents while maximizing correctly detected anomalies; other outcomes are slightly less relevant. So, we kept track of three metrics (a minimal computation sketch follows the list):

  • ROC AUC score – a numerical whiz at quantifying the sensitivity-versus-noise dance.
  • Accuracy – a share of correct predictions among all predictions.
  • Recall – the ratio of correctly detected incidents to all actual incidents (detected plus missed), which answers the question “How many relevant items are retrieved?” (thanks, Wikipedia, for the question phrasing assist).
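
For reference, all three are one-liners with scikit-learn; the labels and scores below are placeholders rather than our production data.

    # Placeholder labels and scores, not our production data.
    from sklearn.metrics import roc_auc_score, accuracy_score, recall_score

    y_true  = [0, 0, 0, 1, 1, 0, 1, 0]                   # 1 = incident (misdelivery / indexing error)
    y_score = [0.1, 0.2, 0.4, 0.9, 0.7, 0.3, 0.6, 0.2]   # predicted probability of "incident"
    y_pred  = [int(p >= 0.5) for p in y_score]           # hard labels at a 0.5 threshold

    print("ROC AUC :", roc_auc_score(y_true, y_score))   # sensitivity-versus-noise trade-off
    print("Accuracy:", accuracy_score(y_true, y_pred))   # share of correct predictions
    print("Recall  :", recall_score(y_true, y_pred))     # detected / (detected + missed) incidents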

The ideal (and rarely obtainable) score of each metric is 1. In Data Science, anything close to the highest score possible raises an eyebrow and urges a scientist to double- and triple-check the results instead of celebrating them immediately.

And that’s what we had to do during our initial tests with the Elastic classification model. All three metrics returned results no lower than 0.995! Okay, the dataset was well labeled. The values of features varied significantly between classes, presumably making the classification task easy. There were some mispredicted classes, yet the overall results seemed TOO impressive.

When results are too promising

Log misdelivery/mis-indexing is a rare event, so no validation subsample was available to us. The only way to validate the findings was to apply other strategies to the current data. Here is a quick overview of what we did to retest them (a minimal sketch of the model comparison follows the list):

  • Removing multicollinearity by deleting correlated features.
  • Undersampling the majority class to match the minority.
  • Upsampling the minority class with synthetic records (the SMOTE technique) to two different values.
  • Running each procedure in Elastic, and then doing the same using Python implementations of Logistic Regression (with feature rescaling), Random Forest classification, and an alternative gradient-boosted decision tree model from the CatBoost library. Class weighting was not forgotten either.
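
A minimal sketch of that comparison is below; the synthetic dataset stands in for our labelled log metadata, and the hyperparameters shown are placeholders rather than the exact configuration we used.

    # Minimal re-test harness with a synthetic, imbalanced stand-in dataset.
    from imblearn.over_sampling import SMOTE
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from catboost import CatBoostClassifier

    # Stand-in for the labelled log-metadata features; heavily imbalanced like our real data.
    X, y = make_classification(n_samples=2_000, n_features=10, weights=[0.95, 0.05], random_state=42)

    X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)   # upsample the minority class

    models = {
        "logreg":   make_pipeline(StandardScaler(), LogisticRegression(class_weight="balanced")),
        "forest":   RandomForestClassifier(class_weight="balanced", random_state=42),
        "catboost": CatBoostClassifier(verbose=0, random_state=42),
    }
    for name, model in models.items():
        auc = cross_val_score(model, X_res, y_res, cv=5, scoring="roc_auc").mean()
        print(f"{name}: mean ROC AUC = {auc:.3f}")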

Metrics and a confusion matrix from the Elastic classification model.
0% errors are not equal to 0 errors!

The results obtained with standalone models were coherent with our prior findings from Elastic and sometimes even exceeded the latter. Some metrics went as low as 0.985, which is still incredible in the Data Science world. Yet we learned from class-balanced tests that it’s possible and isn’t necessarily a sign of the model being overfitted.

Observing worse confusion matrix figures and better metric values from the Elasticsearch tool was a bit confusing. The only plausible explanation is some tricky Elastic metric calculation procedure, whereas independent models were evaluated with tools from the open-source scikit-learn library widely accepted in the Data Science community.

So, is Elastic ML any good for incident detection?

Elasticsearch offers a good entry-level solution for classification problems. Its performance on well-labeled data is enough to identify a continuous series of events – the category that log misdelivery and indexing issues fall under. The user-friendly Kibana interface provides the means to get the model ready (train it) without extensive prior knowledge of ML libraries and environments (being familiar with general concepts might be handy, though). Deploying the model so that it detects incidents in real time requires a more in-depth understanding of Elastic operations. Nevertheless, those operations are managed from the same Kibana interface (or corresponding APIs), using the same stack: no additional software, plugins, scripts, etc.

What’s the catch, then? Isn’t that ideal? Well, let’s start with the fact that ML instruments are available only with “Platinum” and “Enterprise” subscriptions. Free license users must rely solely on standalone models with their own stack and resources.

Also, there’s a trade-off: lower performance. Yes, the metrics are incredible! Yet, confusion matrices show that native Elastic models may be more likely to give false alarms or miss an actual anomaly. Luckily, log misdelivery/mis-indexing is rarely a single discrete event. Its consecutive nature mitigates the risk of missing an incident: at least some of the abnormal values in a series will be detected.

When there’s a need to identify more discrete standalone events that generate a single log message, one might want to rely on other tools independent from the Elastic stack. In our experience, the CatBoostClassifier outperforms any other library, even without hyperparameter tuning. Yet, any open-source library of choice offers more controllability and transparency, which proprietary solutions often lack.

What did we choose for our application? After giving it a long thought, we went down the third road. We recently started a project to improve our log metadata collection pipelines. We know from first-hand experience that log misdelivery can be identified from this metadata. The tests we performed on Elastic built-in instruments and standalone models made it clear that implementing a single computation-costly feature would require some effort. So, we decided to take a whole other approach with log metadata. But that’s a story for another article.

farmerswife – Automating workflows with Cirkus

Carla Molina Whyte, Marketing Executive at farmerswife

In today’s fast-paced business environment, automation has become an indispensable element in project management. The advantages of automation are extensive and diverse, encompassing everything from reducing costs and increasing productivity to optimizing performance and streamlining workflows. When it comes to implementing solutions for SaaS automation, there are numerous advantages in choosing a cloud-based service, not only in controlling CAPEX, but also in the flexibility these systems can offer.

These are some of the benefits that cloud-based automation brings to organizations:

Streamline repetitive tasks, saving time and effort:

Repetitive tasks can often become a drain on time and energy. However, with the power of automation tools, these tasks can be streamlined and made more efficient. From automating email sending to gathering technical requirements, and even scheduling social media updates, automation can save you valuable time and allow you to focus on other crucial aspects of your business.

Enhance workflow efficiency and productivity:

Automation empowers your team to break free from the monotony of mundane tasks and unleash their creativity and innovative thinking. By automating tasks, you can significantly reduce the chances of errors while boosting overall productivity, which ultimately fuels revenue growth and drives improved business outcomes.

Cirkus serves as an excellent example of a tool that harnesses the power of automation in project management. It is a user-friendly and intuitive task collaboration tool designed specifically for teams. Cirkus simplifies the project management process by seamlessly scheduling, assigning, and managing projects and tasks. With Cirkus, teams can easily track project status, report time spent on tasks, and collaborate efficiently with anyone, from anywhere. Data security is also a top priority with Cirkus as it’s a cloud-based task management tool. The platform utilizes an intuitive web-based UI or an iOS app to securely share data.

One of the key advantages of Cirkus is its ability to coordinate resources and share files in a centralized hub. This ensures that all team members have access to the necessary resources and can collaborate seamlessly. Whether it’s sharing media files or deliverables, Cirkus provides a secure platform for effective communication and collaboration.

Furthermore, Cirkus offers an intelligent workflow that maximizes efficiency through automation. By leveraging Cirkus’ task templates feature, you can create a checklist of deliverables customized to your needs, ensuring that the right information and specifications are gathered for each job. Combined with a centralized communication platform, this helps produce the highest-quality deliverables. Adapted to your own workflow, these reusable job templates improve the quality of your deliverables by defining technical specifications at the job or project level.

Able New Zealand, a satisfied Cirkus client, chose the product because they needed a more efficient and automated process to reduce resource requirements. They stated, “Cirkus has improved and simplified our internal processes and made us an easy and efficient organization to work with. It’s easy to navigate and intuitive. It is straightforward to see what tasks are upcoming – especially those that are designated urgent – and to download and upload files.”

To further enhance Cirkus’ automation capabilities, integration with Zapier is available. Zapier, a leading automation platform, offers a vast array of online resources and guides to help users create custom automations and seamlessly connect various systems. By integrating Cirkus with Zapier, users gain the power to create simple yet powerful workflows that streamline their processes and significantly enhance productivity.

The possibilities with Zapier and Cirkus integration are endless. You can connect Cirkus with hundreds of other apps and services, such as Drive, Slack, and more. This allows you to automate data transfer between these platforms, ensuring that relevant information is always up to date and easily accessible.
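As a purely illustrative sketch, any script or system that can make an HTTP request can feed such a workflow through a Zapier “catch hook” trigger. The URL and payload fields below are hypothetical placeholders, not an official Cirkus or Zapier schema:

```python
# Illustrative sketch only: the webhook URL and payload fields are
# hypothetical placeholders, not an official Cirkus or Zapier schema.
import requests

ZAPIER_HOOK_URL = "https://hooks.zapier.com/hooks/catch/XXXX/YYYY/"  # placeholder

payload = {
    "project": "Spring Campaign",
    "task": "Gather technical requirements",
    "due_date": "2025-07-01",
}

# A Zap listening on this hook could, for example, create the matching task
# in Cirkus and post a notification to Slack.
response = requests.post(ZAPIER_HOOK_URL, json=payload, timeout=10)
response.raise_for_status()
```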

In conclusion, automation plays a crucial role in project management, and Cirkus truly showcases the immense power of automation in creating seamless workflows. By harnessing the remarkable features of Cirkus, teams can significantly boost their productivity, optimize resource allocation, and achieve exceptional outcomes. With the unwavering support of platforms like Zapier, the possibilities for automation are truly limitless, making project management more efficient and effective than ever before.

Fabric – Reinventing IFE Sales & Syndication with Fabric Connect

Fabric – Reinventing IFE Sales & Syndication with Fabric Connect

Andrew Holland, Director of Data Services, Fabric

A chasm has opened up in the In-Flight Entertainment (IFE) industry – between the slick technology used by airline passengers and the creaking legacy systems and processes that the IFE industry and their airline clients use for sales and syndication.

The in-flight entertainment experience has improved immeasurably for airline passengers: seat-back touchscreens and massive catalogs of global content – movies, box sets, TV channels, radio, and music – available in multiple languages. Things have really moved on for the end user. Gone are the days of everyone watching a grainy VHS on one overhead screen. There has been a relentless focus on delivering better content to passengers via smarter technology on the planes.

However, the same cannot be said for the systems and processes used behind the scenes. The in-flight entertainment industry has one of the most complicated media supply chains in the world. The complexity begins with hardware: most commercial aircraft fly for around 25 years, going through a retrofit every eight years, which means that IFE suppliers must support 25 years’ worth of audio and video formats, with hundreds of different metadata standards and seatback systems. And it only gets busier from here.

Added to the myriad versions required for all the various types of hardware are the continuously evolving content lists, which all depend on different rights in different territories. Layered on top of this are huge localisation requirements: in-flight entertainment assets and metadata must be available in a multitude of languages on different flight plans.

Given this level of complexity, it seems almost miraculous that the existing paradigm of the IFE industry remains one of manual processes – countless spreadsheets, PDFs, Word documents and phone calls, all requiring continuous updating from week to week as licensing deals change. Not only is this inefficient, it is a business ecosystem riddled with the potential for task duplication, human error and risk.

Adopting a cloud-native digital supply chain would clearly bring enormous benefits and efficiencies to businesses in an industry that faces these challenges, helping to deliver the high standards of entertainment services that air passengers now expect as standard.

Resolving this complexity and bringing harmony to the cacophony of information was the challenge Fabric set out to overcome with the development of the ‘Connect’ platform – an elegant, cutting-edge sales and syndication platform that allows clients to share collections of Fabric titles with their customers for preview, selection and review.

Fabric Connect – The Sales & Syndication platform purpose-built for the IFE industry.

Fabric Connect allows suppliers of in-flight entertainment to manually or automatically create lineups and catalog sets to share with their airline clients, presenting their preview, sales and syndication titles through a beautiful, cutting-edge platform with an intuitive, consumer-style interface. Connect dramatically simplifies the process of sharing content and maximizing the value of your catalog.

Clear information and attractive presentation are key to any sale. When the Media & Entertainment industry spends billions of dollars on high quality bespoke video, copy and imagery to market their content to consumers, it makes sense to utilize that material in a B2B context as well. The comparison between a dry spreadsheet and the rich media displayed in the Connect UI clearly represents a paradigm shift for Sales and Syndication.

Videos are an invaluable part of the monetisation process, and Connect allows for the inclusion of full-length screeners, trailers, supplemental content, promos and more, alongside the crucial selection of programme information that is relevant to sales – e.g. Origination, Synopsis, Ratings, Measurement, Language Availability and Rights.

How does it work?

The configuration of the Connect platform is managed through dedicated tabs and pages in the Fabric content metadata management platform. Clients can be assigned to user groups, with countries and rights platforms assigned to them, which allows bespoke collections and lineups of content to be shared with them. Built-in, continuous rights verification ensures that lineups remain valid.
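To make that model concrete, here is a purely hypothetical sketch of the kind of check such a configuration implies; the field names and logic are illustrative assumptions of ours, not Fabric’s actual schema or code:

```python
# Purely hypothetical sketch of lineup rights verification; field names and
# logic are illustrative assumptions, not Fabric's actual schema or code.
from dataclasses import dataclass
from datetime import date

@dataclass
class Title:
    name: str
    rights_territories: set[str]   # territories where rights are held
    rights_expiry: date

@dataclass
class UserGroup:
    airline: str
    countries: set[str]            # countries assigned to this client group

def valid_lineup(lineup: list[Title], group: UserGroup, today: date) -> list[Title]:
    """Keep only titles whose rights cover the group's countries and have not expired."""
    return [
        t for t in lineup
        if group.countries <= t.rights_territories and t.rights_expiry >= today
    ]

lineup = [
    Title("Sample Movie", {"US", "GB", "DE"}, date(2026, 1, 1)),
    Title("Sample Series", {"US"}, date(2024, 1, 1)),
]
group = UserGroup("Example Air", {"US", "GB"})
print([t.name for t in valid_lineup(lineup, group, date(2025, 6, 1))])  # -> ['Sample Movie']
```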

Customers can browse selections of media and group them into orders, or select them for review, budgeting, or further discussion. They can also choose from alternate-language audio tracks and subtitles, or filter by distributor or rights platform.

Fabric’s advanced supply chain integration even extends to the process of configuring ordering and delivery, enabling – where business allows – Fabric to take orders directly from Connect, validate against pre-set rules, and thus automate the instructions for scheduling and content delivery.

Why Change?

Connect creates an attractive piece of virtual real-estate for your business to monetize and market your entire media back catalog. Instead of focusing on squeezing every last drop from a tiny percentage of your titles, why not open up your entire back-catalog, beautifully displayed in a crisp new online shop-front, complete with all the promotional media, relevant box-office performance data, ratings and reviews, and language information that inform your customers and help to push sales over the line?

In all, this is a game-changer in the field of media sales and syndication, developed with close feedback from our existing clients and over-delivering to create a best-in-class sales portal. Fabric is used by the world’s leading film studios, platform owners and distributors, including Paramount Global, Amazon Studios, Sinclair Broadcast, MGM, Warner Bros. Discovery, HBO and Anuvu.

For a demonstration of Connect’s core functionality, or to find out how Fabric could help transform your titles & metadata, visit www.fabricdata.com.