MediaKind – Using the latest ‘green’ video encoding tech can help broadcasters slash their CAPEX, OPEX, & energy rates


Wednesday, 4 October 2023


Tony Jones, Principal Technologist, MediaKind

Adoption of real-time streaming experiences such as live events, interactive video, cloud gaming, video communications, and virtual worlds is soaring. Meeting this demand with CPU-based codecs can be expensive and inefficient, needlessly inflating CAPEX, OPEX, and carbon emissions. In a breakthrough for the video processing sector, Tony speaks to us about how organizations can tap into GPU-based solutions that substantially trim operating costs, capital expenditure, and energy usage.

How does video encoding innovation deliver top-notch picture quality and optimize rack density while offering a user-friendly experience?

Video encoding is a highly resource-intensive function, and because of its complexity, the sheer number of calculations involved, and the permutations of encoding options, it has a near-limitless appetite for compute power. Of course, in the real world, it is necessary to draw the line at some point, either because it is simply infeasible to place more compute power where it is needed, or because the cost of additional processing does not translate into sufficiently valuable savings in delivery costs.

Different applications strike this balance in different places.

Within this overall framework, top-tier video compression specialists apply video algorithm research to make optimal use of the available compute resources: each time an optimization is found, it translates into a processing power saving that can either be taken as a cost saving or be recycled into driving video bitrates down further while retaining the desired visual quality level.
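As a rough illustration of that trade-off, the sketch below uses entirely hypothetical preset names, compute costs, and bitrates (they are not measurements from any particular encoder) to show how each extra increment of encoder effort tends to buy a smaller bitrate saving at the same target quality.

# Illustrative sketch only: hypothetical presets, compute costs, and bitrates,
# not measurements from any real encoder. Each step up in encoder effort buys
# a smaller bitrate saving at the same target quality, so at some point the
# extra compute stops paying for itself.
presets = [
    # (name, relative CPU cost, Mbps needed to hold the same visual quality)
    ("fast",      1.0, 8.0),
    ("medium",    2.0, 6.8),
    ("slow",      4.0, 6.2),
    ("very slow", 8.0, 5.9),
]

prev_cost, prev_rate = None, None
for name, cost, rate in presets:
    if prev_cost is None:
        print(f"{name:>9}: compute x{cost:.0f}, {rate} Mbps")
    else:
        print(f"{name:>9}: compute x{cost:.0f}, {rate} Mbps "
              f"(+{cost - prev_cost:.0f} compute for -{prev_rate - rate:.1f} Mbps)")
    prev_cost, prev_rate = cost, rate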

Why does this matter? Costs and environmental impacts are linked through the overall energy used to deliver a video service. There are two components to this: the energy used by the video processing that compresses the video, and the energy used to deliver it to the consumer. Reducing bitrate lowers the amount of data that needs to be delivered, cutting both cost and energy, whether that is satellite transponder or cable bandwidth utilization or the Content Delivery Network (CDN) capacity needed for streaming.
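A back-of-the-envelope sketch of those two components is shown below. The encoder wattage, the assumed 0.05 kWh per delivered gigabyte, and the audience figures are illustrative assumptions only, but they make clear why the delivery side usually dominates once audiences are large, and why every saved megabit per second matters.

# Back-of-the-envelope split of the two energy components: the encode itself
# and the delivery of the resulting bits. All figures are assumptions chosen
# only to make the arithmetic concrete.
ENCODER_WATTS = 400          # assumed draw of one encoding channel
KWH_PER_GB_DELIVERED = 0.05  # assumed network/CDN energy intensity
VIEWERS, HOURS = 50_000, 24

def energy_split_kwh(bitrate_mbps):
    encode_kwh = ENCODER_WATTS * HOURS / 1000
    gb_delivered = bitrate_mbps / 8 * 3600 * HOURS * VIEWERS / 1000  # MB -> GB
    return encode_kwh, gb_delivered * KWH_PER_GB_DELIVERED

for bitrate in (8.0, 6.0):  # before / after a bitrate-saving optimisation
    encode_kwh, delivery_kwh = energy_split_kwh(bitrate)
    print(f"{bitrate} Mbps: encode ~{encode_kwh:.0f} kWh, "
          f"delivery ~{delivery_kwh:,.0f} kWh per day")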

Committing to the most stringent environmental, social, and governance (ESG) benchmarks, and helping everyone in the M&E industry reduce their carbon footprint, is important. Why?

We all share this one planet, and the well-being of our current and future generations is a common concern. Preserving it is essential and is an obligation that we must accept. It is, therefore, important to strive for energy efficiency to reduce our carbon footprint. Many companies and customers share this commitment, and it is now common for businesses to impose mandatory ESG requirements in order to drive environmental sustainability.

While the media and entertainment industry isn’t the largest emitter of carbon, it is still important to do our part and take our planet’s future seriously. Every sector within it is responsible for examining its practices closely and minimizing its impact. This involves efficiency improvements and active measures to mitigate our carbon footprint.

How can GPU-based video encoding technology reduce the carbon footprint of video streaming and enable significant cost savings for content providers?

As noted previously, video processing is extremely compute-intensive. Many of the operations are relatively simple calculations, but they need to be performed at massive scale. Graphics processing units (GPUs) are an incredibly useful addition here, as they are optimized for exactly this kind of parallel processing – all the more so with the advent of GPUs designed with the specific needs of video processing in mind.

By using an accelerator to execute these large-scale calculations (rather than a general-purpose CPU), the energy and silicon footprint required can be reduced dramatically. That's because the calculations are implemented in silicon dedicated to that type of operation.

This means that a large proportion of the most compute-intensive operations can be offloaded to the accelerator, leaving the CPU with much less to process. Video processing also needs a significant level of sequential processing, which maps well to a CPU. In combination, it is a powerful architecture.

The offloaded calculations are more power-efficient, so the net result is less CPU compute power needed, higher density per physical server, and lower power per channel. This means both lower costs and lower power consumption (as well as less building space and lower cooling requirements).
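The sizing sketch below illustrates the density and power-per-channel point with purely assumed figures (the channel counts and server wattages are not vendor numbers); the intent is only to show how offloading the heavy stages changes the ratios.

# Rough sizing sketch with purely hypothetical numbers (not vendor figures):
# offloading the heavy stages lets one server carry more channels, so the
# power drawn per channel falls even though the server itself draws more.
def watts_per_channel(server_watts, channels_per_server):
    return server_watts / channels_per_server

cpu_only     = watts_per_channel(server_watts=450, channels_per_server=4)
gpu_assisted = watts_per_channel(server_watts=650, channels_per_server=16)

print(f"CPU-only     : ~{cpu_only:.0f} W per channel, 4 channels per server (assumed)")
print(f"GPU-assisted : ~{gpu_assisted:.0f} W per channel, 16 channels per server (assumed)")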

There can, however, be disadvantages to such approaches. GPUs can only ever perform the calculations and data flows provided for in their silicon design, so it may become difficult or even impossible to achieve new capabilities: a new codec (such as VVC) might not map well to an existing GPU, or the use of GPUs might prevent certain algorithmic flows from being implemented. There may therefore remain cases where a CPU-based approach is more appropriate. A CPU-based approach offers the greatest flexibility in which algorithms can be implemented, so it remains the optimal choice when outright video bitrate efficiency is the most important need: for example, the massive-audience case, where CDN cost and energy can be a bigger consideration than the compute needed for encoding.


GPUs can significantly reduce the carbon footprint by virtue of their energy-efficient nature, especially when their strengths align well with the task at hand.

When a GPU’s capabilities match the specific use case, it often becomes a favorable solution due to its potential for energy efficiency. However, it’s important to note that there are exceptions. Sometimes, the broader context of energy conservation requires a different optimization strategy.

Zooming out, the larger picture comes into focus. While a GPU might outshine a CPU in terms of efficiency at the level of a single encode, the entire ecosystem needs consideration, and this is where CDNs come into play. In certain scenarios, allocating more resources to video processing in order to achieve lower bitrates can be more prudent, even if it increases the carbon footprint of the encoding stage itself, because it minimizes data traffic and its corresponding carbon impact, particularly at the scale of distribution via CDNs.

In short, a comprehensive approach is essential. Optimizing a single component in isolation might not yield the most environmentally efficient result; the bigger picture, encompassing the entire system, must be considered when determining the most effective strategy. The goal is the greatest overall energy and cost reduction, which may mean optimizing different components in different ways, depending on the use case.
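To make that whole-system view concrete, the sketch below compares two hypothetical encoding options (a frugal accelerated encode versus a heavier CPU encode that squeezes the bitrate further) and picks whichever minimises total energy, encode plus delivery, for a given audience size. All of the figures are assumptions for illustration, not real product or network data.

# Whole-system sketch: choose the option with the lowest total energy
# (encode + delivery) rather than the lowest encoder draw. Every number is
# an assumption for illustration only.
KWH_PER_GB_DELIVERED = 0.05  # assumed network/CDN energy intensity

def total_kwh(encode_watts, bitrate_mbps, viewers, hours=1):
    encode_kwh = encode_watts * hours / 1000
    gb_delivered = bitrate_mbps / 8 * 3600 * hours * viewers / 1000
    return encode_kwh + gb_delivered * KWH_PER_GB_DELIVERED

# Two hypothetical options: a frugal accelerated encode, and a heavier
# CPU encode that squeezes the bitrate further.
options = {
    "GPU-assisted encode, 6.0 Mbps": (150, 6.0),
    "CPU high-effort encode, 5.2 Mbps": (600, 5.2),
}

for viewers in (10, 500_000):  # a niche feed vs a massive live audience
    best = min(options, key=lambda name: total_kwh(*options[name], viewers))
    print(f"{viewers:>7} viewers -> lowest total energy: {best}")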

 
