TAG Video Systems – Unlock business success in the cloud: gain unmatched agility and efficiency

Paul Schiller, Product Marketing Manager, TAG Video Systems

In the fast-paced landscape of the broadcast and media industries, staying ahead of the technology curve requires adaptability and agility. To overcome the limitations of hardware-specific devices and embrace the future, broadcasters, content providers and distribution and delivery service providers are turning to cloud-based solutions. By transitioning to the cloud, they can unlock new levels of flexibility, efficiency, and scalability. In this article, we will explore the advantages of cloud-based solutions, the challenges of migration, the importance of becoming more agile, cost efficiency and scalability, security and regulatory considerations, and the rapid adoption of IP workflows in the industry.

Embracing cloud for business agility

The transition to cloud infrastructure offers a multitude of advantages for companies committed to achieving greater business agility. By reducing their on-premises footprint and leveraging the power of the cloud, organizations gain the ability to scale resources dynamically, respond quickly to changing demands and technologies, and drive operational efficiency. The cloud promises greater performance, increased availability, and operational cost savings through improved resource allocation and streamlined workflows.

Advantages of cloud-based solutions

Cloud-based solutions bring a host of benefits to media companies. Firstly, they enable increased efficiency and simplified processes. By leveraging cloud technologies, companies can automate manual tasks, leverage artificial intelligence and machine learning capabilities, and optimize their content production and distribution workflows. This results in improved productivity and reduced operational costs.

Moreover, adopting cloud-based solutions leads to the potential for lower administrative costs and centralized data security. With the cloud, businesses can eliminate the need for extensive hardware infrastructure and associated maintenance costs. Additionally, cloud providers offer robust security measures, including data encryption, access controls, and built-in redundancy, ensuring the protection and integrity of valuable content and customer data.

Overcoming challenges in cloud migration

Cloud migration presents its own challenges that need to be addressed for a successful transition. Engaging the right people and acquiring the proper resources and tools to ensure a smooth migration process are significant considerations. This includes training employees on new systems and technologies and potentially re-architecting existing applications for the cloud environment.

Additionally, businesses must consider potential performance issues such as latency and interoperability when moving their workflows and data to the cloud. Ensuring seamless integration of existing systems with cloud-based solutions requires careful planning and testing to avoid downtime and disruptions.

Despite these challenges, the benefits of cloud migration far outweigh the obstacles. Cloud-based solutions’ scalability, cost-efficiency, and enhanced agility justify the effort and investment required to overcome these challenges.

Cost efficiency and scalability

One of the key advantages of cloud-based solutions is cost efficiency. With traditional on-premises infrastructure, businesses often overprovision or underutilize resources. Cloud computing allows organizations to pay only for the resources they use, avoiding unnecessary costs. The ability to scale resources up or down based on demand ensures optimal resource allocation, cost optimization, and increased operational efficiency.

Additionally, cloud infrastructure provides unparalleled scalability. Broadcast companies can easily scale their operations to accommodate growing needs or handle sudden spikes in demand. This scalability allows businesses to deliver content more effectively, improve viewer experiences, and quickly respond to market trends or changes in audience preferences.

Security and regulatory considerations

Data security is a top priority and potential security concerns must be addressed when moving operations to the cloud. Cloud providers offer comprehensive security measures, including robust data encryption, advanced access controls, and regular security audits. It is crucial for organizations to select reputable cloud service providers that comply with industry regulations and standards to ensure data privacy and protection.

Rapid adoption of IP workflows

In addition to cloud migration, the rapid adoption of IP workflows is transforming the media and entertainment industry. Traditional SDI workflows are being replaced by IP-based solutions, which offer greater reliability, scalability, and flexibility. IP workflows allow for seamless integration of different systems and devices, enabling efficient content creation, management, and delivery across multiple platforms. This transition to IP workflows further enhances the agility of broadcast companies, enabling them to respond quickly to changing market demands and reach wider audiences.

Integration of various systems and devices may seem ‘seamless’ or a non-issue to the viewer, but it’s important to consider the multitude of vendors typically involved in IP-based workflows. In some cases, 30-40 or more solutions are stitched together from capture to customer. This is where the TAG Realtime Media Performance platform comes into play: it is designed specifically to reduce complexity in IP workflows and give broadcasters a simplicity that minimizes the challenges of this intricate landscape. Each of these vendors needs a proven track record of interoperability with all the other solutions in the chain. TAG Video Systems does just that, ensuring that its 100% software, IP-native solution continually supports new formats, codecs, and feature requests, keeping pace with the ever-increasing rate of industry change. Proper integration of multiple vendors and solutions is crucial in today’s environment, where maintaining stable Quality of Service (QoS) is essential for providing the best Quality of Experience (QoE) to viewing audiences, especially with younger demographics where seconds matter.

That’s a wrap

In the face of a rapidly evolving industry, media companies must prioritize business agility to succeed. Embracing cloud-based solutions, overcoming migration challenges, and adopting IP workflows enable organizations to break free from the limitations of hardware-specific devices and traditional on-premises infrastructure. By transitioning to the cloud and embracing IP workflows, companies can unlock unparalleled flexibility, operational efficiency, scalability, and responsiveness.

The advantages of cloud migration and the rapid adoption of IP workflows empower companies to adapt swiftly to changing market demands, optimize content production and distribution workflows, and remain competitive in the dynamic media landscape.

In conclusion, the path to business agility in the media industry lies in embracing the cloud, overcoming migration challenges, and leveraging IP workflows. By doing so, organizations can drive innovation, improve operational effectiveness, and position themselves at the forefront of the industry. Embrace the cloud, IP workflows, and the cost efficiency and scalability they provide to unlock the full potential of your media business.

Simplestream – Needle in a haystack: the challenge of normalizing metadata

Ashley Reynolds-Horne, Technical Director

The world of video content moves quickly. It’s in ceaseless motion, and this goes hand in hand with technological advancement. In this scenario, it becomes paramount for operators and distributors in the streaming space to create seamlessly functioning architectures: tech stacks that normalize workflows and bring together data from multiple existing services. Of course, this is far easier said than done, as content owners wish to enhance their offering while keeping up with the growing requirements that platform operators have for their own streaming services. Progress is perpetual: think of ratings for movies and series, specific categories for niche programming, or even broadcast identifiers.

As platform operators look to solve the existing challenges of ‘going OTT’, key items still need plenty of attention from technology providers to allow for a transition, or in many cases dual running between the old and new. You don’t want to throw the baby out with the bathwater. An export from a playout service, for instance, can include 20 or 30 different values, and even then a single provider may not have all the information. While a content owner might know exactly which actors are in a specific film, they are unlikely to know when it will play out. The data from an EPG provider may be needed to supply this information, and again, content can need normalizing to remove adverts or enhance information, adding a third or fourth source of data.

This is only the beginning; once the content metadata has been obtained, services then need the videos or thumbnails, which require more integrations. Media Asset Management (MAM) is a specialized form of Content Management System that serves platforms as a large repository for any type of media file. From large video items to imagery, the archive of content that powers a streaming service is a backbone that content providers usually avoid replacing. Migrating these often hefty backlogs of content, along with the essential storage services behind them, is expensive. Needless to say, the ability to operate in a totally agnostic way – when it comes to existing online video platforms or MAMs – is integral to functioning deployments for tech providers.

What does the future look like?

To date, no best practice has been established to deal with what can be considered one of the biggest challenges operating in the streaming space presents. Specifications have been outlined by DVB, Google, the DTG and various technical working groups, yet tech providers are still dealing with a set of requirements that are often quirky, and certainly unique to clients.

The way to go – for companies like Simplestream – is to devise workflows that convert data and values into a standardized format, work seamlessly with APIs, and support custom metadata values using a common format; for example, outputting metadata as part of JSON API payloads that are then passed on to downstream applications or transformed again into other output standards. This way, not only does a better understanding of the customer’s requirements become possible, but the addressable market also becomes much clearer. Does a customer need to target ads to a user based on a specific genre, or do operators on a specific platform wish to include service information for downstream providers? These are all valid points, to be managed accordingly, with technology that can adapt to the needs of customers operating in numerous verticals. Think of news broadcasting channels needing presenter data to be always available, or teleshopping brands needing to showcase updated pricing, especially when discounted.
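
As a minimal sketch of what such normalization can look like (in Python, with purely hypothetical field names rather than any real provider's schema), two source records are merged into one common structure and emitted as JSON:

    import json

    # Hypothetical source records; field names are illustrative only.
    playout_export = {"assetId": "F123", "Title": "Example Film ", "Cert": "15"}
    epg_record = {"asset_ref": "F123", "start": "2024-05-01T21:00:00Z", "genres": ["Drama"]}

    def normalize(playout: dict, epg: dict) -> dict:
        """Merge two source records into one standardized metadata format."""
        return {
            "id": playout["assetId"],
            "title": playout["Title"].strip(),
            "rating": playout.get("Cert"),                       # e.g. a certification value
            "genres": [g.lower() for g in epg.get("genres", [])],
            "broadcast_start": epg.get("start"),                 # ISO 8601 from the EPG feed
        }

    # The normalized record becomes part of a JSON API payload for downstream applications.
    print(json.dumps(normalize(playout_export, epg_record), indent=2))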

The normalization of metadata is an ongoing challenge. Reorganizing data that comes as part of the ‘video’ package is integral to understanding business needs. It can be like finding a needle in a haystack, so removing any unstructured or redundant data to enable a more logical way of consolidating that data (where possible) is a must. In today’s streaming and OTT landscape, the aggregation of services and technology sits on the shoulders of metadata.

Signiant – Bridging the gaps between files and objects for media

Ian Hamilton, Chief Technology Officer, Signiant

File and object storage are both common technologies used for persistently storing digital media. While files and objects have many similarities, there are some notable differences.

A key difference is that while the contents of a file can be changed at any time, the content of an object can only be set when it’s originally created. An object can be replaced with an entirely new object with the same name (or “key”), but parts of an object cannot be changed independently. Simplifications like this facilitate improved scaling, reliability and data durability when all that’s needed is a mechanism to reliably store and retrieve fixed blobs of data.

Files and objects are also identified differently. A file is part of a file system with a hierarchical folder structure. Files can be placed in folders at any point in this hierarchy. Files can be renamed and moved efficiently within the file system folder structure without copying data. Objects, on the other hand, are referenced by a key, which remains constant over the object’s lifetime. Object keys can be created to mimic file paths, but changing the key associated with a blob of data can only be accomplished through adding a new object and deleting the old object.
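
For example, with an S3-style object store the only way to "rename" an object is to copy the data to a new key and then delete the original. A minimal sketch using boto3 (bucket and key names are placeholders):

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "example-media-bucket"  # placeholder bucket name

    def rename_object(old_key: str, new_key: str) -> None:
        """Emulate a rename: copy the object's data to the new key, then delete the old object."""
        s3.copy_object(
            Bucket=BUCKET,
            Key=new_key,
            CopySource={"Bucket": BUCKET, "Key": old_key},
        )
        s3.delete_object(Bucket=BUCKET, Key=old_key)
        # Note: objects larger than 5 GB require a multipart copy rather than a single copy_object call.

    rename_object("masters/clip_v1.mxf", "masters/clip_final.mxf")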

These naming differences tie into differences in how files and objects are accessed. File systems are first “mounted” on a computer system. Applications then use local operating system APIs (with operations like open, read, write and close) to access files on mounted file systems. Object storage is accessed like a web server over a network (with operations like put and get). There are standards for both methods of interacting, including protocols for accessing file systems over Internet Protocol (IP) networks, but accessing objects over IP networks is native to object storage.

Of course, most people interact with files and objects through applications. Applications can abstract many of these differences, but applications can also be rigid with the types of storage they support. Many web applications bundle storage and only allow access to the storage through the application. This provides minimal choice over how and where data is stored … an important consideration when working with large media files.

Most desktop media tools, like editors, can’t natively interact with object storage. Media stored as objects must first be copied into working file storage. If only part of an asset is required, the entire object must be copied into working file storage before trimming the asset. This wastes time, bandwidth and working storage space.

Signiant Media Engine facilitates user-friendly search and preview of media assets stored using objects or files and allows assets to be clipped before they are moved, all while retaining the original professional media format. This type of functionality helps bridge the gap between object storage and desktop applications like editing software.

While accessing object storage over IP networks is integral to the design, long-distance networks still present challenges. The HTTP-based object storage access protocol performs much better over long distances than protocols for accessing files over a network like NFS, but we can do better.

Signiant’s patented intelligent transport technology accelerates access to storage by optimizing the choice of transport protocol (Signiant-UDP or TCP) and other esoteric transfer parameters like concurrent parts and part size based on the data set and operating environment. This optimization is done using a machine learning model trained on anonymized information from millions of transfers performed by Signiant under widely varying conditions.
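
As a crude illustration of the kind of parameters involved (and nothing more: Signiant's actual optimization is driven by a trained model, not fixed thresholds), a naive heuristic might pick a part size and concurrency from object size and round-trip time:

    def pick_transfer_params(size_bytes: int, rtt_ms: float) -> dict:
        """Naive illustrative heuristic only; the thresholds here are invented for the example."""
        part_size = 8 * 1024 * 1024 if size_bytes < 1 << 30 else 64 * 1024 * 1024
        concurrency = 4 if rtt_ms < 50 else 16           # keep more parts in flight on long paths
        protocol = "tcp" if rtt_ms < 50 else "accelerated-udp"
        return {"part_size": part_size, "concurrency": concurrency, "protocol": protocol}

    # e.g. a 5 GB asset over a 180 ms intercontinental path
    print(pick_transfer_params(5 * 1024**3, rtt_ms=180.0))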

When media is stored as files, built-in tools like the Mac Finder or Windows File Explorer can be used to browse files, view thumbnails, search and sometimes perform basic media operations like playing and trimming video. Folder structures can also be used to organize and group files. However, the file system containing these files must be mounted on the user’s computer. Tools for browsing object storage may present a file-system-like hierarchical view, assuming keys have been formatted to mimic file paths, but richer interaction with media stored as objects requires better tooling.
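
Assuming keys have been written with "/" separators, an S3-style listing can already present one level of that pseudo-hierarchy. A small boto3 sketch (bucket and prefix are placeholders):

    import boto3

    s3 = boto3.client("s3")

    # List pseudo-subfolders and objects one level below a prefix.
    resp = s3.list_objects_v2(
        Bucket="example-media-bucket",      # placeholder
        Prefix="projects/spring-promo/",
        Delimiter="/",
    )

    for folder in resp.get("CommonPrefixes", []):   # keys grouped by the next "/"
        print("dir :", folder["Prefix"])
    for obj in resp.get("Contents", []):            # objects directly under the prefix
        print("file:", obj["Key"], obj["Size"])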

The Signiant Platform abstracts differences between public and private cloud and file and object storage. Signiant does this without having to take over ownership of the storage or the media. We simply connect to the storage and present stored media in a secure, intuitive and performant manner. Other applications can continue to interact with storage as they always have and Signiant stays in sync with any changes.

The Signiant Platform also enables normalized powerful search across all these types of storage using metadata discovered in files and objects either embedded with the media or in common sidecar formats like Adobe XMP. With support for a broad range of professional media formats, Media Engine expands what’s traditionally possible when using both file and object storage.

Qvest – Touchdown for American football in Europe: reaching more sports fans with FAST channel playout

The roar of American football is echoing across the European landscape, igniting the passions of fans far beyond stadium stands. With the kick-off of the new season in early summer this year, the European League of Football (ELF) embarked on a groundbreaking journey to expand its reach, both digitally and economically. In a strategic alliance, the ELF has partnered with Qvest, a leading technology provider, to leverage their innovative FAST Channel playout solution makalu. This partnership is redefining how the sport engages fans, propelling the league to new heights of success.

Playbook for innovation: expanding beyond the arena

In a world where digital realms dictate modern engagement, traditional broadcasting methods are no longer sufficient to captivate global sports aficionados. This realization prompted the ELF to pivot toward innovation, joining forces with the Qvest Stream team and deploying makalu. This all-in-one cloud playout automation ushers in seamless content delivery and amplified viewer engagement. Frank Mistol, Managing Director of Qvest Stream, draws a parallel between the teamwork integral to sports and software development: “Team spirit and experience are key pillars for success both in sports and software development. As a technology partner, Qvest offers the European League of Football a holistic solution, from technical implementation to distribution on various platforms”.

A thrilling shift: the ripple effect of FAST channel playout

The league has a fast-growing fanbase to reach. The adaptive nature of makalu empowers fans to relish games on their preferred digital devices, creating not just viewers but participants. This expansion of engagement unveils novel revenue streams, making it very attractive for sponsors and advertisers.

Unveiling the power of a cloud-based solution

makalu emerges as a technological champion, tailored to meet the demands of modern broadcasting while keeping an eye on the future. Its user-friendly interface positions it as the ELF’s ideal choice for delivering high-caliber FAST channel content effectively. The solution encompasses:

  • Streamlined workflow: makalu optimizes the entire broadcasting workflow, from ingest to distribution, ensuring the ELF can focus on the heart of the game while Qvest handles the technical intricacies.
  • Multi-platform delivery: The playout solution facilitates simultaneous distribution across various digital platforms, opening doors to a wider audience and heightened engagement.
  • Dynamic graphics and branding: With makalu, the ELF can deliver additional on-screen elements, enhancing the overall viewing experience.
  • Flexibility and scalability: The FAST channel solution accommodates the surge in demand for digital content, ensuring seamless streaming even during peak times.
  • Insights and revenue generation: By analyzing viewer data, makalu provides ELF with insights into fan behavior and preferences. Armed with these insights, the league can tailor content, ads, and partnerships for maximum revenue potential.

The partnership paints a vivid future for American Football in Europe, combining the Qvest expertise with ELF’s electrifying passion, all woven into the fabric of cutting-edge sports broadcasting. The stage is set, and the game has just begun.

Norsk – Build vs. buy: the best of both worlds

Adrian Roe, CEO, Norsk

Build vs. buy might not be the oldest dilemma in the streaming technology book, but it’s close. And when it comes to complex live streaming, the horns of that dilemma are particularly pointed.

The streaming technology market is typified by off-the-shelf, line-of-business applications that do a few things very well, but are extremely difficult or impossible to extend if they don’t do exactly what you want. That lack of customization can be a dealbreaker.

On the other hand, for a broadcaster (or large enterprise, or betting company, or …) to build its own streaming platform from scratch requires a daunting investment of time and resources—resources that would be much better spent on their core business proposition.

So let’s dig a little deeper into both buying and building, as well as look at a middle path that offers media companies the best of both worlds.

When you have a hammer, everything looks like a nail

Off-the-shelf solutions are great, if they do exactly what you need. The problem is, “what you need” is often not what you think you need. Here’s a case in point: About 12 years ago, Nasdaq was in the market for a tech refresh of their live event video capability for a simple “source in, ABR ladder out” workflow to deliver their financial fair disclosure webcasts. So they implemented source in, ladder out, along with a very clever field encoder with hundreds of settings and very clever central encoders that build the ladders, publish the live streams, and generate on-demand copies of everything. Very clever.

The simplicity illusion

Here’s the problem: Source in, ladder out really isn’t what financial fair disclosure is about. Source in, ladder out is just what everything looks like when source in, ladder out is the “hammer” at your disposal.

So what is financial fair disclosure all about for Nasdaq customers?

  1. Fulfilling a legal obligation not to unfairly disadvantage potential investors.
  2. Maintaining a relationship with customers and the wider market.
  3. Understanding how the event was received by viewers and the wider market.
  4. Making sure the CEO looked their sparkling best!

And that’s exactly what is important to Nasdaq. Delivering the above to their customers, with great quality, around 100,000 times a year. Here the sheer volume of events is a major contributor to the simplicity illusion. Doing something that frequently is a fundamentally different challenge than doing so a few times a week.

So what started life seemingly as a “source in, ladder out” slam dunk, really wasn’t. Instead it was a case of “Use video source 1 if you can. If not, use source 2 (sent over a completely different network). And let’s make a phone call to the venue so that even if the internet is down entirely we can still broadcast the audio over a customer-specified fallback video. And let’s be able to patch in trailers and promotional videos. And let’s add timecodes into the signal so we can sync accurately with the slides and data release points. And … And …”

And let’s do that flawlessly 100k times a year. With minimal human intervention. So let’s just pop to the shops and pick one of those up.

Except, of course, there isn’t “one of those”. And even when there is a solution in your space, it’s likely to come with quite the price tag and quite the integration project. We spoke to the CTO of a large regional sports broadcaster recently, and they bemoaned the amount of time and money they were having to spend trying to customize off-the-shelf solutions to meet their needs.

When the fit you want doesn’t exist, what do you do?

The roll your own trap

So you roll your own solution from scratch, writing hundreds of thousands or even millions of lines of code. Of course, that diverts even more resources—time, money, staff—from your core business.

When you’re done, congratulations! You used to be a stock exchange (or a sports broadcaster or a fitness studio or …), and now you’re a media technology company. For every line of code describing your business process—your amazing viewer experience—you probably have hundreds or even thousands of lines of media technology code.

And the problem is that nobody outside the streaming media industry gives a damn!

Public companies care about meeting legal obligations and delivering clear communications to investors and the markets. Sports broadcasters care about compelling viewing experiences, rights management, and the communities they build with fans.

Not a single consumer cares whether you use HLS or DASH. They care how entertaining it is, what it looks like, whether they can interact with their peer group, how much it costs, how reliable it is, how interactive and informative it is, etc.

So what did Nasdaq do? They chose id3as, a company foolish enough to have our own “roll your own” tech stack. We implemented their business rules, and they saw the number of errors in their webcasts plummet. At the same time, the number of events that their existing call center could monitor increased immensely, thanks to far greater automation of QoE monitoring. id3as is able to keep tailoring our solution so that Nasdaq keeps a sustained market advantage.

So Nasdaq lived happily ever after, right? The End?

A better way

While the above is all true, it is not and should not be the end of the story. id3as has some exceptional technology, and we are very unusual in being in the business of building custom media workflows. We create compelling viewer experiences and implement our customers’ core business processes for them.

And therein lies the problem. Why is core customer IP gated behind id3as’ (or indeed anyone’s?) technology and our ability to react to client needs?

It doesn’t have to be like that!

We believe that media companies—or systems integrators, or large enterprises—should control their own IP while being able to build live streaming experiences that satisfy their needs and their customers’ demands. We have poured tens of developer-years into building Norsk, a low-code live streaming SDK that allows you to describe, implement, and update even the most complex live streaming workflows—all in just a few lines of code, using the programming languages your teams are already familiar with.
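
To make the idea concrete, here is a purely hypothetical sketch in Python (not Norsk's actual SDK or API) of what a declarative, low-code description of a live workflow could look like in a handful of lines:

    from dataclasses import dataclass, field

    @dataclass
    class Workflow:
        """Hypothetical illustration of a declarative workflow description; not a real SDK."""
        steps: list = field(default_factory=list)

        def add(self, kind: str, **params) -> "Workflow":
            self.steps.append({"kind": kind, **params})
            return self

    wf = (Workflow()
          .add("srt_input", port=5001)                                       # primary contribution feed
          .add("failover", backup="rtmp://backup.example/live", timeout_ms=500)
          .add("overlay", image="lower_third.png")
          .add("abr_ladder", renditions=["1080p", "720p", "480p"])
          .add("hls_output", path="/live/event.m3u8"))

    print(wf.steps)   # the business logic stays readable, and stays yours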

Under the covers there is immense complexity, but you can leverage the value customization brings without having to wrestle with this complexity.

Norsk frees you from the limitations of the simplicity illusion and removes the tyranny of “roll your own” complexity. It opens a world in which you can build amazing live streaming experiences with a few dozen lines of simple code, and where the focus is not on technical details but on value to your customers.

It is your control over this business logic that means you end up with a live streaming workflow that embodies your processes and KPIs. That’s what enables consistent quality and a compelling user experience for your audience.

Norsk is the media technology expert so you don’t have to be.

Net Insight – How IP turns the page on sports video production and distribution

Kristian Mets, Head of Sales Business Development at Net Insight

The landscape of sports streaming is evolving rapidly. Recent studies show that a staggering 71% of US sports enthusiasts now opt for live viewing, underscoring a significant opportunity for the media industry and rights holders alike. As viewers expand their preferences across platforms like OTT, digital channels, and FAST, the media industry must move toward cloud-driven production and distribution processes to serve the burgeoning demand for real-time sports content.

Addressing the surge in feed demand

The financial incentive that comes with providing fans with high-quality live content, combined with a worsening economic climate, forces rights holders to get creative and look for new ways to diversify their income. It’s no longer just the major leagues dominating the media landscape; smaller niche sports, youth leagues, and lower divisions are realizing the economic potential of broadcasting. However, it is not as simple as just distributing – preserving the high quality of this content is vital for maintaining its value. Scalability or availability challenges should no longer inhibit growth. Content owners can leverage IP technology to ensure quality control without being slowed down by complex tools that hinder the process.

Decentralizing production with IP

The shift toward IP technology signals a new age for media companies. IP provides the pathways to explore new business models and achieve cost and resource efficiencies while gaining crucial capabilities. Remote production, powered by IP, has revolutionized production workflows across borders. Remote production optimizes the utilization of time and resources by diminishing complex logistics and allowing production teams to cover multiple events. This IP-driven remote approach allows media companies to get creative, leading to improved viewer experiences through cost-effective advancements like remotely-controlled robotic camera heads.

The cloud has also played a role in creating a decentralized production environment, empowering production units and teams to work in tandem across geographies. This framework eradicates the challenges posed by distance and time zones, broadening the talent base available to media houses.

Maximizing live content value with IP

Once the live video is produced, IP is pivotal in ensuring the content’s worth is fully realized and channeled to audiences over the right platforms. Traditional distribution models are witnessing a shift because of IP. Instead of a “one for all” approach of sending a uniform feed to all outlets, content delivery is being tailored to resonate with distinct consumer needs. With this audience-centric model, media entities can upscale their premium offerings while crafting region-specific experiences. Customized feeds, local commentary, and culturally-tuned viewing experiences allow media companies to truly provide relevant content to a global audience, amplifying the value of high-profile sports content.

Preparing for the future

In the competitive sports streaming landscape, where new entrants continually emerge, media companies need to carve a niche. In this setting, IP emerges as a linchpin of innovation. The capability to upscale premier content, coupled with ease of control, feed management, and delivering super high-quality content adapted for varied audiences, enables rights owners to harness market demand, setting them on a path of sustained growth.

With the sports markets continuing to expand, efficient management of live feeds from source to screen becomes essential. By integrating IP and cloud solutions, media entities are well-equipped to serve diverse platforms, tap into new revenue sources, and deliver superior, bespoke content to audiences worldwide. IP technology and a well-defined content strategy are crucial for media organizations wishing to remain at the forefront of sports streaming, and future-proof their business.

MediaKind – Using the latest ‘green’ video encoding tech can help broadcasters slash their CAPEX, OPEX, & energy rates

Tony Jones, Principal Technologist, MediaKind

Adoption of real-time streaming experiences such as live events, interactive video, cloud gaming, video communications, and virtual worlds is soaring. Meeting this demand with CPU-based codecs can often be expensive and inefficient, unnecessarily boosting CAPEX, OPEX, and carbon emissions. In a breakthrough for the video processing sector, Tony speaks to us about how organizations can tap into GPU-based solutions that substantially trim down operating costs, capital expenditure, and energy usage.

How does video encoding innovation ensure the delivery of top-notch picture quality and optimization of rack density while offering a user-friendly experience?

Video encoding is a highly resource-intensive function, and because of its complexity and the sheer number of data calculations and permutations of options involved, it has a near-limitless appetite for compute power. Of course, in the real world, it is necessary to draw the line at some point, either because it is simply infeasible to have more compute power in the right place, or because the cost of additional processing does not translate into sufficiently valuable savings in delivery costs.

Different applications can have different places where this balance may sit.

Within this overall framework, top-tier video compression specialists apply video algorithm research to make optimal use of the available compute resources: each time an optimization is found, it translates into a CPU processing power saving that can either become a cost saving or be recycled into allowing the algorithm to drive down video bitrates further while retaining the desired visual quality level.

Why does this matter? Costs and environmental impacts are linked via the overall energy used to deliver a video service. There are two components to this: the video processing to compress the video and the energy used to deliver it to the consumer. Reducing bitrate means lowering the amount of data that needs to be delivered, reducing costs and energy required, whether that’s satellite transponder or cable bandwidth utilization or the Content Delivery Network (CDN) needed to deliver streaming.
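
A back-of-the-envelope illustration, using assumed figures rather than measured ones, shows how directly a bitrate saving translates into less data to deliver:

    # Assumed figures for illustration only.
    bitrate_mbps = 5.0        # average delivered bitrate
    viewers = 100_000         # concurrent audience
    hours = 2                 # event duration

    delivered_tb = bitrate_mbps * 1e6 / 8 * 3600 * hours * viewers / 1e12
    saved_tb = delivered_tb * 0.20    # a hypothetical 20% bitrate reduction

    print(f"Delivered: {delivered_tb:.0f} TB; saved by a 20% lower bitrate: {saved_tb:.0f} TB")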

Committing to the most stringent environmental, social, and governance (ESG) benchmarks and helping everyone in the M&E industry reduce their carbon footprint is important. Why?

We all share this one planet, and the well-being of our current and future generations is a common concern. Preserving it is essential and is an obligation that we must accept. It is, therefore, important to strive for energy efficiency to reduce our carbon footprint. Many companies and customers share this commitment, and it is now common for businesses to attach mandatory ESG requirements to drive environmental sustainability.

While the media and entertainment industry isn’t the largest emitter of carbon, it is still important to do our part and take our planet’s future seriously. Every sector within it is responsible for examining its practices closely and minimizing its impact. This involves efficiency improvements and active measures to mitigate our carbon footprint.

How can GPU-based video encoding technology reduce the carbon footprint of video streaming and enable significant cost savings for content providers?

As noted previously, video processing is extremely resource-intensive for compute. Many of these operations are relatively simple calculations, but they need to be performed at a massive scale. Graphics processing units (GPUs) are an incredibly useful addition to this space, as they are optimized for this kind of parallel processing structure – and more so with the advent of GPUs that have considered the specific needs of video processing.

By using an accelerator to execute these large-scale calculations (rather than a general-purpose CPU), the energy and silicon footprint required can be reduced dramatically. That’s because calculations are implemented in silicon dedicated to that type of calculation.

This means that a large proportion of the most compute-intensive operations can be offloaded to the accelerator, leaving the CPU with much less to process. Video processing also needs a significant level of sequential processing, which does map well to a CPU. In combination, it is a powerful architecture.

The offloaded calculations are more power efficient, so the net result is lower CPU compute power needed, higher density per physical server and lower power per channel. This means both lower costs and lower power consumption (as well as less building space and lower requirements for cooling).
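
As a simple illustration with assumed figures (not vendor measurements), the effect on power per channel is easy to see:

    # Assumed figures for illustration only.
    cpu_server_w, cpu_channels = 600, 8     # CPU-only encoding server
    gpu_server_w, gpu_channels = 900, 30    # server with GPU acceleration, higher channel density

    print(f"CPU only : {cpu_server_w / cpu_channels:.0f} W per channel")
    print(f"With GPU : {gpu_server_w / gpu_channels:.0f} W per channel")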

There can, however, be some disadvantages to such approaches. GPUs can only ever perform the calculations and data flows provided for in their silicon design, so it may become difficult or even impossible to achieve new capabilities: a new codec (such as VVC) might not map well to an existing GPU, or the use of GPUs might prevent certain algorithmic flows from being implemented. There may therefore remain cases where a CPU-based approach is more appropriate. A CPU-based approach offers the greatest flexibility in which algorithms can be implemented, so it remains the optimal choice when outright video bitrate efficiency is the most important need: for example, the massive-audience case, where the CDN’s cost and energy might be a bigger consideration than the compute needed for encoding.

Are there cases where GPU-based encoding is not the most energy- or cost-efficient choice?

GPUs can significantly reduce the carbon footprint by virtue of their energy-efficient nature, especially when their strengths align well with the task at hand.

When a GPU’s capabilities match the specific use case, it often becomes a favorable solution due to its potential for energy efficiency. However, it’s important to note that there are exceptions. Sometimes, the broader context of energy conservation requires a different optimization strategy.

Zooming out, the larger picture comes into focus. For instance, while a GPU might outshine a CPU in terms of efficiency on a microscopic level, the entire ecosystem needs consideration. CDNs come into play here. In certain scenarios, allocating more resources to video processing in order to achieve lower bit rates could be more prudent, even if it results in a temporary increase in the carbon footprint. This consideration is made for the greater purpose of minimizing data traffic and its corresponding carbon impact, particularly considering the scale of distribution via CDNs.

In essence, a comprehensive approach is essential. Focusing on optimizing a single component might not yield the most environmentally efficient result. The bigger picture, encompassing the entire system, warrants attention when determining the most effective strategy for environmental reasons. The goal is achieving the most significant overall energy and cost reduction, which may involve optimizing different components in diverse ways, depending on the use case.

Matrox Video – How to communicate the value of technology solutions to buyers

Francesco Scartozzi, Vice President of Sales and Business Development at Matrox Video

Why is it so important to communicate the value of technology solutions to potential buyers? How will this improve their decision-making or outcomes? Because beyond the ever-present sales and marketing imperative is a more important driver: media businesses can’t fully benefit from the ecosystem of the future without understanding its significance.

That future ecosystem is being shaped by the “IT-ification,” or “computerization,” of the broadcast industry, be it across on-prem and private data centers, hybrid models, or the cloud. It’s an open, non-proprietary, computer-based environment that delivers all the advantages that IT has finally opened up for our industry. The promise made by virtualization is being kept.

One reason for confusion across the industry has been the pervasive “move to the cloud” message. Most people hear the word “cloud,” and they understand that it signals remote capabilities with scalability and flexibility. They get that the cloud gives them a dial they can spin to turn up or down the computing resources they’re using. But the reality, at this point, is that the cloud is just table stakes — just part of doing business in the modern media landscape. One way or another, some part of media operations will rely on the cloud. The cloud is more about the “where” than the “what” and the “how.”

And the “what” and the “how” are what potential buyers want to know. To some degree, it is important to distinguish between “lift and shift” solutions in the cloud and cloud-native solutions. After all, bolting existing products onto the cloud (re-platforming), rather than engineering them to exist and function optimally within it (refactoring), can only get a technology supplier so far. As vendors continue their incremental introduction of cloud capabilities, investment in traditional workflows may not make so much sense. What buyers need to know is how to differentiate bolt-on functionality from true innovation. Is a faster horse really the solution if a flying car, or even teleportation, is an option instead?

If you can communicate how technology will eliminate boundaries, remove constraints, and make it cheaper and easier for buyers to reach specific goals, then you’re effectively translating for them the value of that technology in a way that makes sense and relates to their business. Maybe they can seize the opportunity to start thinking bigger and demanding more from the world of IT.

More than just a technical shift, this is also a shift in mindset — and that’s what makes effective, compelling communication so important. For many buyers, the technologies and solutions they’re being asked to consider today are nothing like anything they’ve ever seen (much less imagined) before. The rate of technological innovation has proceeded at such a rapid pace that even in-the-know analysts are stunned by solutions coming to market. “How is this happening already?” they wonder as they see new solutions on the trade show floor and learn about the latest proofs of concept in play within early adopters’ test workflows.

In the case of IT-ification, or virtualization, the idea is not completely foreign; people take advantage of it every day in other domains of life. They’re exposed to it when they use their iPhone, M365, Google Drive, gaming, or whatever it may be. They are already aware of how the barriers can fall away. Yet they work in a world where they’ve grown accustomed to constraints they need to work around, in the form of synchronized video: all machines must be genlocked and synchronized, and very careful timing work must be performed before one machine can talk to another. For so long, the thinking has been that there isn’t any other way.

But that’s no longer true, and no amount of one-on-one technical reasoning will do the work of practical communication to help buyers see that. While engineers and other technologists certainly (and reasonably) want to understand how it’s possible, a much larger audience simply wants to know that they no longer need to connect this machine to that one, or genlock the graphics system, or pre-render all their bumpers. They use their smartphones and the apps on them to complete specific tasks and to make life easier or better, and knowing that it works is enough. For a large number of prospective buyers, the same is true for the technologies that will bring them into the IT-ified television ecosystem of the future.

This shift will be transformational (not unlike the advent of the smartphone, in fact), removing all the conventional rules that have applied to broadcast over decades. As OEMs, manufacturers, early adopters, and other influencers awaken to the possibilities of virtualization, the industry as a whole will move toward a new reality without traditional limitations. And the sooner the practical implications of this newfound freedom are clearly and simply illustrated for potential buyers, the better equipped broadcasters will be to remain relevant, delight audiences, and make money.