Witbe – Greening the streaming: how real device testing can drive sustainability in the media industry

Yoann Hinard, COO, Witbe

As millions of users consume streaming video content across various platforms daily, video app providers and mobile network operators face immense pressure to manage data usage and bandwidth efficiently. This challenge presents an opportunity to make the industry more sustainable. Leveraging data analytics captured through real device testing, streaming video service providers can not only optimize data transmission but also reduce energy consumption and promote sustainable user behavior. Vodafone, Telefonica, and Meta have already reported encouraging results in this field.

The increasing demand for high-quality video streaming

The demand for video streaming services has skyrocketed in recent years, with platforms like YouTube, Netflix, and TikTok becoming central to how people consume media. However, this growth comes with significant resource spend and environmental challenges. Video streaming requires substantial data transmission, which in turn leads to increased energy consumption. This energy is often derived from non-renewable sources, contributing to the industry’s carbon footprint. With video now consuming a large portion of network bandwidth, mobile operators and video service providers must take measures to manage this surge in data usage.

The challenges of streaming with limited resources

The biggest challenge for video service providers and mobile network operators is finding the sweet spot between data consumption, video quality, and viewer expectations. Some of the techniques used today to improve video streaming quality or reduce data consumption can work against each other.

HTTP adaptive bitrate streaming, in its default implementation, will try to use as much data as possible to reach the highest available video quality level. “Pre-fetching”, where the first few seconds of the next video set to play are automatically loaded in advance so users can scroll between videos without any buffering, can lead to “over-fetching”, where more data is downloaded than necessary. Throttling, where mobile operators intentionally slow down or limit the data transfer rate, can result in lower resolution, longer buffering times, and an overall degraded user experience.
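These trade-offs can be made concrete with a small sketch. The following is a hypothetical, simplified client-side rate-selection heuristic (the ladder values, `pick_bitrate` helper, and safety margin are all invented for illustration), showing how a per-session data cap changes the rung a default ABR algorithm would otherwise choose:

```python
# Hypothetical sketch: a client-side ABR step that picks the highest
# rendition the measured throughput supports, optionally capped by a
# per-session data budget to avoid "over-fetching".

LADDER_KBPS = [400, 1200, 2500, 4500, 8000]  # example bitrate ladder

def pick_bitrate(throughput_kbps, budget_cap_kbps=None, safety=0.8):
    """Return the highest ladder rung sustainable at `safety` * throughput,
    never exceeding an optional data-budget cap."""
    usable = throughput_kbps * safety
    if budget_cap_kbps is not None:
        usable = min(usable, budget_cap_kbps)
    affordable = [b for b in LADDER_KBPS if b <= usable]
    return affordable[-1] if affordable else LADDER_KBPS[0]

print(pick_bitrate(6000))                        # 4500: default ABR maximizes quality
print(pick_bitrate(6000, budget_cap_kbps=3000))  # 2500: capped to save data
```

The point of the sketch is that the cap does not have to come from network-side throttling: a cooperative client can trade one rung of quality for a large data saving before the operator ever intervenes.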

While mobile network operators try to keep their services running smoothly without building costly new infrastructure to handle the expanded load, video service providers face a choice: limit their app’s data usage themselves, or have it throttled for them and risk losing frustrated viewers.

Data analytics can lead to sustainable streaming

Data analytics can play a crucial role in addressing these challenges by enabling more efficient data transmission and energy use while boosting streaming quality of experience (QoE). One of the key areas where data analytics can contribute is in optimizing video encoding and adaptive bitrate streaming. For instance, video service providers can optimize their bitrate ladders — a process that determines the different quality levels a video can be streamed at based on network conditions. By fine-tuning this process using data analytics, service providers can ensure that they are not “over-encoding” content, which would consume more data and energy than necessary. This approach not only conserves bandwidth but also reduces the load on servers, leading to lower energy use​. By striking the right balance between video quality and data consumption, providers can significantly reduce their environmental footprint.
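As a rough illustration of bitrate-ladder tuning, the sketch below drops an “over-encoded” rung whose quality gain falls below a perceptibility threshold. The quality scores and the `prune_ladder` helper are invented; a real pipeline would use measured metrics such as VMAF gathered from device testing:

```python
# Hypothetical sketch: prune a bitrate ladder so no rung spends extra
# bits for a quality gain viewers cannot perceive (scores are invented).

ladder = [  # (bitrate_kbps, quality_score)
    (400, 55), (1200, 78), (2500, 90), (4500, 94), (8000, 95),
]

def prune_ladder(rungs, min_gain=3):
    """Drop a rung if it improves quality by less than `min_gain`
    over the previous kept rung -- i.e. it is "over-encoded"."""
    kept = [rungs[0]]
    for bitrate, score in rungs[1:]:
        if score - kept[-1][1] >= min_gain:
            kept.append((bitrate, score))
    return kept

print(prune_ladder(ladder))  # the 8000 kbps rung adds only 1 point -> dropped
```

Dropping the top rung here would save bandwidth, storage, and encoding energy for every viewer who would otherwise have received it, at a quality cost the data suggests is imperceptible.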

Why quality of experience and data usage benchmarking is essential

Quality of experience and data usage benchmarking technology can help mobile network operators and video service providers strike an ideal balance between streaming video quality and bandwidth usage. Using QoE and data usage benchmarking technology enables operators and providers to gain accurate measurements of an app’s mobile data usage, responsiveness, and video quality across different builds, devices, and networks. With this valuable data, teams can fine-tune their video app or mobile network settings to avoid using more bandwidth than necessary or building entirely new infrastructure.

Teams can also leverage this technology to compare their mobile network or video app with local and global competition. Leveraging data from real devices, they can see how much data a video app consumes on one mobile network versus another, as well as how their video quality offerings compare with other streaming apps. With this valuable information, teams can optimize their bandwidth usage to stay competitive, reduce energy consumption, and offer their viewers the best QoE possible.

Successful deployment of data usage benchmarking technology

Vodafone and Meta are excellent examples of a mobile network operator and a video app provider working together to improve streaming quality while decreasing egress, resulting in improved efficiency. Vodafone and Meta recently announced that they are collaborating on a new way to free up network capacity for all mobile customers, including allowing them to view more high-quality short-form videos.

In an initial three-week test conducted in the UK in April 2024, the companies recorded a meaningful reduction in network traffic for Meta applications across Vodafone’s mobile network. Vodafone freed up additional network resources on some of its most popular 4G/5G sites so that all mobile customers at these busy locations, such as shopping centers and transport hubs, benefitted. This was achieved leveraging state-of-the-art QoE and data usage benchmarking technology, and has already been applied in eleven European markets since June 2024. In July, Telefonica and Meta similarly announced in a joint press release that “the tested configuration significantly reduced video traffic in real network conditions while maintaining the user experience.”

Collaborating on a better future

These accurate measurements can only be obtained through testing and benchmarking on the same real devices and networks that customers are using. With proper real device monitoring, teams can identify areas where quality is being overdelivered (wasting resources) and adjust it to hit the sweet spot between received streaming quality and mobile data consumption. Since both mobile network operators and video service providers want to reduce their own resource spend on streaming video, collaborating to find an optimal balance using real device monitoring technology is essential. With the exponential growth of data consumption, largely driven by streaming video, now is the time to optimize network bandwidth management while delivering great video quality.

Projective – Breaking free from the hardware cycle: a new era in postproduction

Derek Barrilleaux, CEO, Projective

When considering traditional technology deployments within post-production and broadcast companies, the term “cycle of hardware” highlights a common situation for media technology buyers. Every 2-5 years, companies find themselves entangled in a relentless cycle of sourcing, implementing, powering, and retiring physical hardware – servers, storage systems, backup solutions.

The cycle of hardware is often an arduous and time-consuming process that can span years, to the extent that a new cycle often begins as soon as the last one concludes. It reminds me of window cleaners on skyscrapers, who, just as they finish cleaning the last pane of glass on the bottom floor, must start again from the top. Similarly, the investment cycle on hardware (CapEx) and on ancillary costs such as power, cooling, migration, and backup (OpEx) continues in a seemingly endless loop.

This recurrent hardware migration is not just labor-intensive; it’s inefficient and costly. It drains valuable resources from your IT team, preventing them from focusing on higher-value tasks.

However, there’s a better way to break free from this purchasing and deployment cycle. In this article, we explore the inefficiencies of traditional hardware cycles in post-production and discuss a more sustainable and efficient solution through cloud-based technology.

The inefficiency of the hardware cycle

The hardware cycle is a significant operational burden for businesses in the media and entertainment space. Here’s why:

  1. Resource intensive: Implementing new hardware requires extensive time and effort, from evaluating platform needs, sourcing, negotiating prices, and redesigning infrastructure to migrating applications, data, backups, and archives.
  2. Continuous maintenance: Physical hardware needs constant monitoring, updates, and troubleshooting, which diverts your IT team from strategic projects.
  3. High costs: Frequent hardware upgrades escalate capital expenditures (CapEx) and operational expenditures (OpEx), impacting the bottom line.
  4. Environmental impact: Powering and cooling physical servers contribute to high energy consumption and carbon footprints, contradicting many companies’ sustainability goals.

The case for cloud-based solutions

Transitioning to a cloud-based approach can break the hardware cycle, but true value cannot be realized if it doesn’t also enhance working practices. Numerous cloud tools can assist with specific areas in post-production, such as review and approval, file sharing, or serving as an asset library. However, creative project workflows are still left to their own devices—both figuratively and literally. Seeing our customers ship hard drives to deliver projects to their teams in 2024 highlights the immaturity of current cloud technology implementations. Merely having cloud storage or platforms on the edges of post-production workflows fails to address their core: the projects themselves.

These were the principles that guided us at Projective in creating Strawberry Skies, a cloud-based platform that addresses the challenges of hardware cycles. As a well-established post-production collaboration solution, Strawberry has always delivered intelligent project management, sharing, permissions, as well as comprehensive search capabilities and automation for our customers worldwide.

With Skies we wanted to enhance those capabilities by adding secure cloud access, remote flexibility and unparalleled scalability for post-production workflows. The fully cloud-hosted platform delivers a seamless, scalable solution for post-production environments. It provides a creative project framework that allows teams to collaborate, share resources, and manage projects efficiently, all without the limitations of physical hardware.

Post-production in the cloud

Transitioning from on-premises hardware to cloud-based solutions streamlines operations and offers numerous workflow-specific advantages for post-production teams:

  • Global access: Teams can collaborate seamlessly from anywhere, reducing travel needs and costs.
  • Security and compliance: Enhanced access control and automated project structures keep your data secure, with tailored permissions and quota settings.
  • Flexibility and scalability: Cloud solutions like Strawberry Skies allow businesses to easily adjust infrastructure to meet demands, transitioning from CapEx-heavy investments to manageable OpEx models. This mitigates the need for ongoing hardware investment and maintenance, enabling IT teams to focus on innovative projects.
  • Increased efficiency: Cloud-hosted tools reduce administrative tasks, allowing teams in the media and entertainment sector to concentrate on creating and distributing compelling content.
  • Enhanced sustainability: Moving to cloud-based infrastructure lowers the environmental impact by reducing power consumption from physical servers, aligning with corporate eco-friendly strategies.

This strategic shift enhances creative output and productivity while optimizing costs and promoting sustainability.

Real-world example: TUI’s success

Let’s look at a real-world example of how cloud adoption has revolutionized post-production operations. TUI, a global leader in the leisure, travel, and tourism industry, produces a substantial amount of video content for its international portfolio of hotels, cruises, aircraft, and travel agencies. The business was facing inefficiencies with its on-premises production systems. In addition, its geographically dispersed team struggled with collaboration, slow content creation, and high energy consumption.

Through the implementation of Strawberry Skies powered by Skies Drive by LucidLink, TUI’s Western Region team transformed their content production process, enabling real-time collaboration across geographically dispersed teams, minimizing inefficiencies and costs. Creative projects can now be completed in a fraction of the time, while the need to physically move crews and equipment has been significantly reduced.

TUI achieved significant improvements including cost-effective scalability, predictable pricing, operational efficiency and energy savings from reduced server reliance – all contributing to better financial planning and more sustainable operations.

Bart Saerens, TUI’s Solution Architect, states: “The Projective solution has enabled TUI to be very cost efficient and have a more aggressive marketing strategy.”


Ateme – Reducing streaming’s carbon footprint through innovation

Daniel Patton, Vice President, Product – Origination and Delivery, Ateme

The video streaming industry, which now accounts for 60-80% of global internet traffic, is facing increasing scrutiny due to its significant contribution to carbon emissions. According to a report from The Shift Project, internet activity is responsible for approximately 4% of global greenhouse gas emissions, a figure that is expected to rise as demand for streaming services grows. This surge in video consumption has driven the expansion of data centers, network infrastructure, and consumer devices, all of which add to the industry’s environmental impact. In response, companies like Ateme and other video processing vendors are focusing on innovations such as advanced video codecs, efficient compute platforms, and AI-driven optimizations to reduce data size, energy consumption, and overall carbon footprint in the streaming ecosystem.

How codecs reduce data size and energy consumption

A key area of innovation in video streaming is the implementation of highly efficient codecs designed to reduce the data required for an optimized user experience. Codecs such as H.264 (AVC), H.265 (HEVC), and newer ones like AV1 and VVC (Versatile Video Coding) achieve this by compressing video files and eliminating redundancies. HEVC, for example, can reduce bitrates by up to 50% compared to its predecessor, H.264, while newer codecs like AV1 and VVC offer additional efficiencies of 30-50% over HEVC.
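The compounding effect of those generation-over-generation figures is easy to miss, so here is the arithmetic spelled out. The 8000 kbps H.264 baseline is an assumed example; the percentage reductions are the generation-level rules of thumb cited above, not measurements:

```python
# Rough arithmetic on the compounded savings cited above:
# HEVC ~50% below H.264, and AV1/VVC a further 30-50% below HEVC.

h264_kbps = 8000                      # assumed example H.264 stream
hevc_kbps = h264_kbps * (1 - 0.50)    # ~50% reduction vs H.264
vvc_low   = hevc_kbps * (1 - 0.30)    # further 30% reduction vs HEVC
vvc_high  = hevc_kbps * (1 - 0.50)    # further 50% reduction vs HEVC

print(hevc_kbps)        # 4000.0
print(vvc_high)         # 2000.0
print(round(vvc_low))   # 2800
```

In other words, the newest codecs can deliver the same content at roughly 65-75% below the H.264 baseline, and every downstream stage (storage, CDN, last mile, device) inherits that reduction.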

These improvements in video processing efficiency lead to cascading energy reductions across storage, delivery, and network infrastructure. By compressing video files more effectively, advanced codecs minimize the storage space required on servers and data centers. This increased efficiency also reduces the overall throughput needed to deliver streams to large audiences, thereby decreasing the capacity demands on Content Delivery Networks (CDNs).

The impact of bitrate reduction varies depending on the last-mile connectivity. For mobile networks, lower bitrate content requires less power to deliver, creating a direct correlation between bitrate and energy consumption. While this power reduction doesn’t apply to static network components of a fixed network, adopting advanced codecs with lower network requirements for both mobile and fixed networks can curb infrastructure growth and its associated power demands, even as the streaming video market expands.

Finally, lower bitrates can also decrease the power consumer devices require to play back content. However, broader adoption of newer codecs necessitates devices that support hardware decoding, which is significantly more resource-efficient (4-10x) than software decoding. For example, Apple recently added support for AV1 hardware decoding in the iPhone 15, and Qualcomm has indicated that upcoming Snapdragon processors will support VVC decoding.

Reduce compute requirements

While software itself does not consume power, the compute platforms it runs on certainly do. Hence the challenge to technology vendors: do more with less. ARM and x86 processor manufacturers continue to innovate by developing CPUs that reduce energy consumption both during operation and, more critically, when idle. Additionally, cloud compute providers are actively lowering their carbon footprints by transitioning to renewable energy sources.

Software vendors are also evolving their solutions to contribute to carbon footprint reduction. Video processing vendors, for example, are enhancing video encoding efficiency to enable more encodes on the same server platform. Video delivery vendors are implementing just-in-time packaging to process only the content being requested, which reduces compute and CDN storage requirements. Finally, by bolstering security and anti-piracy measures, content owners can diminish the usage—and associated carbon footprint—of unauthorized viewers.
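The just-in-time packaging idea mentioned above can be sketched in a few lines. This is a hypothetical toy model (the `package`/`serve` helpers and the `.mpd` naming are invented), but it captures the mechanism: packaging work happens only for renditions a viewer actually requests, and only once per rendition:

```python
# Hypothetical sketch of just-in-time packaging: a rendition is packaged
# on first request and cached, instead of pre-packaging every variant of
# every title up front.

packaged_cache = {}  # (title, rendition) -> packaged asset
package_calls = 0    # counts expensive packaging operations

def package(title, rendition):
    global package_calls
    package_calls += 1
    return f"{title}@{rendition}.mpd"  # placeholder for real packaging work

def serve(title, rendition):
    key = (title, rendition)
    if key not in packaged_cache:           # package lazily, on demand
        packaged_cache[key] = package(title, rendition)
    return packaged_cache[key]

serve("movie-a", "1080p")
serve("movie-a", "1080p")   # cache hit: no second packaging run
serve("movie-a", "720p")
print(package_calls)        # 2 -- only the requested renditions were packaged
```

For a large catalog where most titles are rarely watched in most formats, this lazy approach is where the compute and storage savings come from.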

Leveraging viewer data and AI

In traditional broadcasting, linear content was transmitted over-the-air, with viewership estimates made after the fact. In contrast, the streaming model provides detailed data on every viewer—tracking when they started watching, how long they watched, the devices they used, and the profiles delivered to them. One way to leverage this new wealth of data is to assess whether enough viewers are using devices with advanced codecs. If the energy savings from these devices, using a more efficient codec, outweigh the energy cost of processing the video in an additional format, this can significantly reduce the carbon footprint of the audience watching the content. Leveraging other data sources and training AI models to make these dynamic decisions offers a glimpse into the future.
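The codec-versus-encode trade-off described above can be written as a simple back-of-envelope decision rule. Every number below is invented for illustration, including the `worth_extra_codec` helper and the per-gigabyte delivery energy figure; a real system would draw these from viewer data:

```python
# Hypothetical decision rule: add an extra AV1 variant only if the
# delivery energy it saves (for viewers on AV1-capable devices) exceeds
# the extra energy spent producing the encode. All numbers are invented.

def worth_extra_codec(viewers, av1_share, gb_saved_per_viewer,
                      kwh_per_gb, encode_kwh):
    saved = viewers * av1_share * gb_saved_per_viewer * kwh_per_gb
    return saved > encode_kwh

# 100k viewers, 40% on AV1 devices, 1 GB saved each, 0.1 kWh/GB delivery,
# 500 kWh to produce the extra encode -> 4000 kWh saved, clearly worth it:
print(worth_extra_codec(100_000, 0.40, 1.0, 0.1, 500))   # True
# With almost no AV1-capable audience, the extra encode is a net loss:
print(worth_extra_codec(100_000, 0.001, 1.0, 0.1, 500))  # False
```

The same structure generalizes: plug in live device-share data per region or per title, and the decision can be made dynamically rather than once for the whole catalog.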

Data can also be used to enhance decisions during the encoding process. By analyzing a vast history of video-quality test encodings, which have been evaluated by skilled video quality engineers, Machine Learning (ML) techniques can train algorithms to match outputs to the visual quality assessments of experts. Combined with other video processing algorithms, this not only improves the quality of the viewing experience but also, from a carbon footprint perspective, can recommend using fewer variants or lower bitrates for each variant. This results in lower storage space requirements (up to 50%) and reduced network bandwidth needed to deliver the video experience.

Consumers play an integral role

While technology vendors will continue to innovate to reduce the carbon footprint, consumers also play a crucial role. According to a recent Parks Associates study, the average US household now watches over 43 hours of video per week. As consumers make daily viewing choices, they can have a direct impact on the power required for video playback by selecting the quality level of their viewing experience. By empowering consumers to choose the quality of their individual experience, each person can actively contribute to reducing the carbon footprint associated with streaming video.
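To give a feel for the scale of that consumer choice, here is an illustrative calculation using the 43 hours/week figure above. The bitrates are assumed round numbers for a 4K and a 720p stream, not measurements from any particular service:

```python
# Illustrative arithmetic (assumed bitrates): data a household watching
# 43 hours/week moves when streaming 4K vs 720p.

HOURS_PER_WEEK = 43
KBPS = {"4k": 16000, "720p": 3000}  # assumed typical streaming bitrates

def weekly_gb(kbps, hours=HOURS_PER_WEEK):
    # kbps -> bytes/s -> total bytes over `hours` -> GB
    return kbps * 1000 / 8 * hours * 3600 / 1e9

saving = weekly_gb(KBPS["4k"]) - weekly_gb(KBPS["720p"])
print(round(weekly_gb(KBPS["4k"]), 1), round(saving, 1))
```

Under these assumptions a household streams roughly 310 GB/week at 4K, and choosing 720p where the screen size warrants it avoids moving some 250 GB/week, with the corresponding delivery energy.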

Summary

As video streaming continues to dominate global internet traffic, the industry’s carbon footprint is increasingly under scrutiny. Advanced video codecs, efficient compute platforms, and AI-driven optimizations offer promising solutions to reduce data size, energy consumption, and the environmental impact of streaming. However, the collective effort of technology vendors and informed consumer choices is crucial for achieving meaningful sustainability in the streaming ecosystem. By adopting more efficient technologies and making conscious decisions about viewing quality, both industry players and consumers can significantly contribute to reducing the carbon footprint associated with video streaming.

Appear – How to be an immersive and green broadcaster

Matthew Williams-Neale, VP of Marketing, Appear

Delivering immersive live events, whether it’s the thrill of the Olympics’ 100-meter sprint or the suspense of an awards ceremony, while simultaneously meeting sustainability goals, is no easy feat. Broadcasters are rising to this challenge by leveraging cutting-edge technologies and innovative production methods. Today’s audiences expect nothing less than best-in-class coverage, and the recent summer of sports presented broadcasters with the dual challenge of delivering high-quality live content, while adhering to stringent sustainability standards.

ITV Sports’ coverage of Euro 2024 exemplifies this approach, with its adoption of a ‘reverse remote’ production workflow focused on sustainability. This setup included multiple remote galleries in Berlin for live shots, a centralized MCR for connectivity, and a remote gallery transmitting signals back to London. Similarly, the Paris Olympics set a goal to halve its emissions compared to previous games by integrating greener technologies and practices, such as electric vehicles, energy-efficient systems, and carbon credits. These shifts in broadcasting priorities underscore how major live events are increasingly embracing technological innovation to achieve carbon neutrality.

Embracing cloud-based technology

As we move further into 2024, broadcasters are increasingly turning to cloud-based solutions to boost operational efficiency and significantly cut carbon emissions. By reducing the need for physical infrastructure and on-site staff, these remote production platforms are leading the charge in minimizing the broadcast footprint. This shift not only lessens the need for on-site crews and equipment but also dramatically cuts the energy consumption typically associated with large-scale live events. Embracing flexible, high-density solutions that support low-latency coverage allows broadcasters to deliver superior content while simultaneously advancing sustainability efforts.

Sustainable practices are also targeting waste reduction and resource optimization, from using eco-friendly materials to optimizing energy consumption and pursuing zero-waste goals through recycling initiatives. For instance, the Paris Olympics adopted cloud-based technology to lower the event’s carbon footprint and reduce power requirements. Olympic Broadcast Services (OBS) has highlighted that cloud solutions enable them to “do more with less,” showcasing the value of efficient technologies and strategic partnerships in broadcasting. Cloud-based solutions are increasingly understood to be pivotal in reducing the broadcast footprint and energy demands, enabling broadcasters to deliver the excitement of live sports in an environmentally responsible way.

Innovative solutions for high-quality, sustainable broadcasting

Technology solution providers like Appear are at the forefront of transforming the broadcasting landscape by developing sustainable technologies. These advances not only reduce the environmental impact of media production, distribution, and consumption but also allow broadcasters to offer compelling new viewing experiences that bring audiences closer to the action.

In an industry traditionally associated with high carbon emissions, broadcasters are increasingly adopting high-density solutions to produce high-quality live content. Appear’s advanced live production solutions, such as the X Platform, have been employed in most of this summer’s major international sporting events due to their spatial density, functional versatility, and power efficiency. This technology is crucial in ensuring that live feeds from sports venues are sustainably delivered to broadcasting hubs with minimal latency and high quality.

The X Platform offers high-capacity video processing, supporting multiple formats and resolutions required for a diverse range of live events. These solutions minimize rack space in OB trucks and production control rooms, reduce on-site power consumption, and simplify shipping logistics. By embracing innovative technology, broadcasters set a new gold standard for the industry while ensuring that fans can enjoy live events sustainably.

The importance of transparency and collaboration

Transparency and collaboration are essential for advancing sustainability within the media and broadcast industry. Leading companies are making bold climate pledges, emphasizing transparency, energy efficiency, and the reduction of operational impact. As the industry confronts environmental challenges, setting ambitious targets for lowering carbon footprints and increasing renewable energy use is imperative. Given the media’s influence on public perception, integrating sustainable practices has the potential to inspire broader societal shifts towards greener lifestyles.

Collaboration among broadcasters, technology providers, and environmental groups is key to establishing industry-wide standards and best practices. Companies that welcome third-party scrutiny and maintain transparency lay the foundation for continuous improvement. Incorporating these principles allows firms to effectively utilize innovation and engineering expertise, positioning themselves as leaders in the drive toward a more sustainable future.

As the media and broadcast industry strives to balance delivering exceptional viewer experiences with achieving sustainability goals, embracing innovation and collaboration is crucial. Advanced technologies like cloud-based solutions and high-density production systems are instrumental in reducing carbon footprints and optimizing resource use. By working together, broadcasters, technology providers, and environmental groups can establish and adhere to standards that support both outstanding content and environmental responsibility. This approach not only ensures engaging live events but also contributes to a more sustainable future.

Amagi – Embracing cloud technology for a sustainable future

Arpit Malani, Director of Product Development, Amagi

Like many others, the broadcast industry is facing increasing pressure to reduce its environmental impact. While essential for delivering high-quality content, traditional on-premises infrastructure often consumes significant energy and resources. This is due to several factors, a few of which I discuss below.

Inefficient power usage: On-premises data centers can be energy-intensive, especially in regions with high energy costs. This is often due to inadequate cooling systems, outdated hardware, and inefficient power distribution. For example, traditional air-cooled data centers may require large amounts of energy to maintain optimal operating temperatures. Additionally, obsolete hardware may be less energy-efficient than newer models.

Excess capacity: On-premises infrastructure often requires overprovisioning to accommodate peak workloads, leading to wasted resources. This means that data centers may need to purchase and maintain more hardware than is necessary to meet average workloads. This excess capacity can consume unnecessary energy and increase operational costs.

Carbon emissions: The energy consumption of on-premises data centers contributes to greenhouse gas emissions, exacerbating climate change. Burning fossil fuels to generate electricity releases carbon dioxide and other greenhouse gases into the atmosphere. These emissions contribute to global warming and climate change, which can have severe consequences for the environment and human health.

The benefits of cloud adoption

  • Greener regions: Cloud providers often offer regions that prioritize sustainability, such as areas with abundant renewable energy resources. By choosing these regions, broadcasters can further reduce their carbon footprint.
  • ARM-based instances: ARM-based instances, such as Graviton from AWS and Ampere from GCP, are known for their energy efficiency. These instances can offer significant performance gains while consuming less power, making them an attractive option for broadcasters seeking to optimize their sustainability efforts.
  • Reduced energy consumption: Cloud providers operate large-scale data centers that can achieve significant energy efficiency through economies of scale and advanced cooling technologies. By leveraging these facilities, broadcasters can significantly reduce their overall energy consumption.
  • Optimized resource utilization: Cloud platforms offer flexible resource allocation, allowing broadcasters to scale their infrastructure up or down based on demand. This eliminates the need for excess capacity, reducing energy waste.
  • Sustainable data centers: Many cloud providers are committed to sustainability and invest in renewable energy sources to power their data centers. By partnering with these providers, broadcasters can support their efforts to reduce their environmental impact.
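The "optimized resource utilization" point above is worth quantifying. The sketch below compares a fixed fleet provisioned for peak load with a fleet scaled hourly to demand; the demand curve and per-server wattage are invented for illustration:

```python
# Hypothetical comparison: energy for a fixed fleet sized to peak demand
# vs an autoscaled fleet sized hourly. All numbers are invented.

hourly_demand = [2, 2, 1, 1, 3, 6, 9, 9, 7, 5, 4, 3]  # servers needed per hour
WATTS_PER_SERVER = 300

peak = max(hourly_demand)
fixed_kwh  = peak * WATTS_PER_SERVER * len(hourly_demand) / 1000
scaled_kwh = sum(hourly_demand) * WATTS_PER_SERVER / 1000

print(fixed_kwh, scaled_kwh)  # the fixed fleet burns energy for idle headroom
```

With this demand curve, the peak-provisioned fleet uses roughly twice the energy of the demand-matched one; the gap grows with spikier workloads, which is exactly the profile of live broadcast events.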

Leveraging modern infrastructure

In addition to cloud adoption, broadcasters can further enhance their sustainability efforts by adopting modern infrastructure. There are processors on the market today that offer improved energy efficiency and performance, enabling broadcasters to optimize their workloads and reduce their carbon footprint. For example, these processors often feature advanced power management technologies that can dynamically adjust power consumption based on workload demands. Additionally, they may support virtualization and containerization technologies, which can further improve resource utilization and reduce energy consumption.

By adopting these modern processors and leveraging their energy-saving features, broadcasters can significantly reduce their environmental impact.

A call to action

The broadcast industry has a unique opportunity to lead the way in sustainable practices. By embracing cloud technology and adopting modern infrastructure, broadcasters can reduce their environmental impact and improve their operational efficiency and competitiveness. To maximize these benefits, broadcasters should consider the following strategies:

  • Standardization of bit rate and resolution: Adopting consistent bit rate and resolution standards across all platforms can significantly reduce bandwidth consumption and energy usage. By optimizing content delivery, broadcasters can minimize unnecessary data transfer and reduce their operations’ overall energy footprint.
  • Dynamic adaptive streaming: Implementing dynamic adaptive streaming technologies can further optimize bandwidth usage by adjusting the bit rate and resolution of the content based on the viewer’s network conditions. This ensures that viewers receive the best possible quality without consuming excessive bandwidth.
  • Content caching: Utilizing content-caching mechanisms can reduce the need for repeated data transfers, especially for frequently accessed content. Broadcasters can minimize latency and reduce network traffic by caching content closer to the viewer, leading to lower energy consumption.
  • Intelligent content management: Implementing intelligent content management systems can help to identify and optimize content that is frequently accessed or has high viewer engagement. This allows broadcasters to prioritize content delivery and reduce unnecessary bandwidth usage.
  • Network optimization: Optimizing network infrastructure and routing can also reduce energy consumption. By minimizing network latency and improving efficiency, broadcasters can reduce the amount of data that needs to be transferred, leading to lower energy usage.
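The content-caching idea in the list above can be sketched as a tiny edge cache. This is a hypothetical toy model (the `EdgeCache` class is invented), but it shows the mechanism: repeat requests are served locally, so only misses generate origin traffic and its associated energy cost:

```python
# Hypothetical sketch of an edge cache: repeat requests are served
# locally, cutting transfers from the origin.

from collections import OrderedDict

class EdgeCache:
    def __init__(self, capacity=2):
        self.store = OrderedDict()
        self.capacity = capacity
        self.origin_fetches = 0

    def get(self, key):
        if key in self.store:               # cache hit: no origin traffic
            self.store.move_to_end(key)
            return self.store[key]
        self.origin_fetches += 1            # cache miss: fetch from origin
        self.store[key] = f"content:{key}"
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least-recently-used item
        return self.store[key]

cache = EdgeCache()
for title in ["news", "match", "news", "news", "match"]:
    cache.get(title)
print(cache.origin_fetches)  # 2 -- three repeat requests never hit the origin
```

For popular content the hit rate climbs quickly, which is why caching close to viewers reduces both latency and upstream network load at the same time.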

By implementing these strategies with cloud adoption and modern infrastructure, broadcasters can create a more sustainable and efficient broadcast ecosystem. It’s time for the industry to embrace a more sustainable future.

HPE – Is sustainability compatible with AI in the media and entertainment industry?

Matt Quirk, Director, WW HPE Channel & Partner Ecosystem, HPE OEM Solutions

When none other than Tyler Perry halts an $800 million studio expansion after seeing a text-to-video AI demo, you know something major is happening in media and entertainment. AI isn’t new to the industry—Netflix has used machine learning (ML) to serve up recommendations since the early 2000s—but generative artificial intelligence (GenAI) is changing more than distribution and marketing. GenAI is primed to change how film, television, and music are imagined and produced.

The risk to the artists, creators, and craftspeople who make the shows and songs we love is worrying, though its full extent isn’t yet clear. What is clear: training AI models and deploying AI services consume staggering amounts of energy and generate tons of CO2 emissions. Here are some data points.

  • Training OpenAI’s GPT-3 model produced an estimated 552 tons of CO2 and consumed an estimated 1287 MWh of electricity,[1] enough energy to power a US household for 120 years.[2]
  • Training just one AI model can emit nearly five times the lifetime emissions of an average American car.[3]
  • Researchers estimate that ChatGPT needs 1 GWh of electricity to answer the queries it receives in one day. 1 GWh is the daily energy consumption of 33,000 US households.[4]
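These figures can be sanity-checked with back-of-envelope arithmetic. The average US household consumption of roughly 10,700 kWh per year used below is an assumed typical figure, not a number from the footnotes:

```python
# Sanity-checking the footnoted figures with simple arithmetic.
# The ~10,700 kWh/year average US household figure is an assumption
# (a typical published estimate), not stated in the article.

GPT3_TRAINING_MWH = 1287
HOUSEHOLD_MWH_PER_YEAR = 10.7          # assumed average, ~10,700 kWh/year

years_powered = GPT3_TRAINING_MWH / HOUSEHOLD_MWH_PER_YEAR
print(round(years_powered))            # -> 120, matching the text

household_kwh_per_day = 10_700 / 365   # ~29 kWh per household per day
households = 1_000_000 / household_kwh_per_day  # 1 GWh/day spread out
print(round(households, -3))           # ~34,000 households, close to 33,000
```

Both figures line up with the cited estimates to within rounding, which suggests the footnotes are internally consistent.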

Every human-powered function that GenAI assists, improves, or replaces adds to the electricity bill. The question for media and entertainment is whether the efficiencies GenAI creates outweigh the energy it consumes. If they do, GenAI may be a net positive for sustainability. If not, it’s a warning the whole industry needs to heed.

What roles will GenAI play in media and entertainment?

AI is already hard at work throughout the media and entertainment industry. AI algorithms compress video, optimize streaming, and save energy every day. With the advent of GenAI, machine intelligence will contribute even more, and at every stage of media creation and distribution.

Creative – It may sound like fantasy, but GenAI is already helping film producers analyze scripts, predict box office potential, and even generate scenes and entire scripts. The same holds true in music, where singers and songwriters are using AI to generate lyrics, melodies, and finished songs.

Production – Synthetic, virtual worlds have always been part of filmmaking. Major motion pictures and television productions already shoot on virtual sets made of 360° LED walls that can display any location imaginable. With GenAI, set designers will be able to conjure worlds with minimal effort by typing a few words.

Post-production – Animation, editing, and sound design use AI today to automate tasks and generate sequences. GenAI allows editors to remove objects from a scene, turn a can of soda into a glass of wine, de-age actors, and create composite performances stitched from multiple takes.

Digital asset management – Using GenAI to search footage is another emerging use case that can save hours of manual searching. Because GenAI can understand the action, performances, and cinematography of a scene, editors can search conversationally for virtually any attribute. This super-search workflow speeds up editing whether an editor is scrubbing through dailies on a feature film or searching stock libraries for the perfect shot.

Distribution and streaming – With rare exceptions, like 70 mm IMAX, movies and television shows are delivered digitally to cinemas, TVs, and phones all over the world. AI plays a major role in video compression and data optimization from the cloud to packet processing and bit rates as video streams across wired, Wi-Fi, and 4G/5G networks.

Captioning, translation, and localization – GenAI services can combine automated speech-to-text with response generation to produce closed captioning, transcripts, and translations on the fly. AI also assists in reformatting to match local broadcast specifications, frame rates, and device aspect ratios.

Marketing – AI helps identify potential hits, trending songs, and hot shows, then places them in front of the right audience, at the right time, on the right device. Thanks to AI, streaming platforms’ recommendation engines combine a nuanced understanding of audience behaviors and preferences with real-time analysis of what’s hot.

General business intelligence – AI and GenAI services create value in both directions. Users of AI services get assistance, more efficient workflows, and labor savings, while the service provider receives intelligence. Wherever AI supports a function or role, the business gathers data about that role that GenAI can turn into knowledge and action.

The same GenAI service that helps a line producer optimize shooting schedules can review fleet data and map more efficient routes for studio trucks. It can scout locations to minimize travel, track carbon emissions, and calculate offsets all while optimizing cloud computing resources to the enterprise’s real-time needs.

As GenAI spreads in media and entertainment, will the net effect be good or bad for sustainability?

At this stage, the tradeoffs are difficult to calculate. We only have estimates of GenAI energy use and a limited view into how rapidly studios, streamers, and infrastructure providers are adopting GenAI. However, we can make some educated guesses.

Replacing location shoots with virtual, AI-generated sets should reduce net energy use. Traveling to a location means moving talent, crew, and equipment. Powering a set usually means diesel generators, although there are greener alternatives that run on propane or natural gas. Even with green shooting practices in place, a location shoot is an energy- and time-consuming proposition.

Using AI to improve file compression, encoding, and transmission should make file transfers and streaming faster, less expensive, and less energy-intensive. Compared to printing and shipping thousands of reels of film or Blu-rays, digital delivery is clearly more sustainable. GenAI promises to make it even more efficient.

General operational efficiency—from faster post-production to insight into a project’s carbon footprint—should improve steadily as GenAI takes on more roles and functions. Over time, automated, continuous improvements in processes should add up to significant energy savings.

Whether these hypothetical efficiency gains outweigh the energy cost of GenAI is an open question. One way to make sure GenAI does have a positive impact is to improve the energy efficiency of model training and deployment.

How to make LLMs and GenAI more sustainable

The energy costs of GenAI are built into the cost of the service, which makes GenAI’s energy impact opaque to the end user. Changing the energy equation for AI rests with the companies providing AI services and with manufacturers like HPE that build the supercomputers and data centers powering GenAI. There are levers we can pull to reduce the energy overhead of GenAI, and HPE is working to create new ways to make GenAI sustainable for every industry.

Model training takes massive amounts of time, computing power, and electricity. Every improvement we make in the training process pays immense dividends. HPE has developed the HPE Machine Learning Development Environment, which includes tools for tuning training workloads so that they use hardware more efficiently. The environment runs on machine-learning-specific accelerators that are up to 5x more efficient than off-the-shelf systems.[2]

Even though model training is energy-intensive, AI services consume 90% of the energy used to deliver AI.[2] Where those services run can be a major factor in energy consumption and carbon footprint. For example, the average private data center can be half as efficient as a cloud data center.[2] Newer, more energy-efficient servers like the HPE ProLiant Gen11 come equipped with workload-specific accelerators that can deliver up to 10x better performance per watt.[5]

The physical location of a data center matters, too. It takes far more energy to move electricity than it takes to move data. Locating data centers near power generation facilities minimizes energy lost to power transmission. If those facilities use solar, wind, or hydropower, the data center’s carbon footprint will be significantly reduced. This is why many hyperscalers and service providers, including HPE, are shifting data center infrastructure to regions with abundant hydropower and colder climates that can reuse the data center’s waste heat.

Optimizing AI models to run more efficiently can also drastically improve energy performance, regardless of the hardware those models run on. Sparsely activated deep neural networks can consume less than one-tenth the energy of comparably sized dense deep neural networks (DNNs) without sacrificing accuracy.[2]
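The saving from sparse activation can be illustrated with a toy FLOP count. The mixture-of-experts-style top-k gating and the numbers below are illustrative assumptions, not a description of any specific model:

```python
# Toy FLOP comparison illustrating why sparsely activated networks save
# energy: a mixture-of-experts-style layer runs only k of its n expert
# blocks per input, while a dense layer of the same total size runs all
# of them. All numbers are made up for illustration.

def dense_flops(n_experts: int, flops_per_expert: int) -> int:
    return n_experts * flops_per_expert   # every expert computes

def sparse_flops(n_experts: int, flops_per_expert: int, k: int) -> int:
    return k * flops_per_expert           # only the gated top-k compute

N, PER_EXPERT, K = 64, 1_000_000, 4
ratio = sparse_flops(N, PER_EXPERT, K) / dense_flops(N, PER_EXPERT)
print(ratio)  # -> 0.0625, well under one-tenth of the dense compute
```

Since energy scales roughly with compute, routing each input through a small fraction of the network is where the order-of-magnitude saving comes from.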

Taken together, supply-side efficiencies from model optimization, more efficient hardware, and renewable energy can reduce AI carbon emissions by as much as 1000x.[2] With efficiency gains like that, GenAI in media and entertainment, and in other industries, may prove to be a gain for sustainability. At HPE, we are working to ensure that this is the case.

Learn more about how we can help you build solutions that are both sustainable and AI-driven at hpe.com/solutions/oem.

 

Footnotes

1. David Patterson et al., Carbon Emissions and Large Neural Network Training, arxiv.org.

2. Zachary Champion, Optimization could cut the carbon footprint of AI training by up to 75%, Michigan News, University of Michigan.

3. Bernard Marr, Green Intelligence: Why Data and AI Must Become More Sustainable, Forbes.

4. Sarah McQuate, Q&A: UW researcher discusses just how much energy ChatGPT uses, University of Washington News.

5. Based on MLPerf v3.0 inference results for Offline Throughput/Average Watts against comparable accelerators (with similar TDP)

 

NStarX – Can GenAI help with better visibility on the outcome of film making?

Yes, it can!

Suman Kalyan, Chief AI Officer, NStarX

Introduction and problem statement

Financing a movie requires investors, bankers, and several financial institutions to come together. The entire movie-making process is complex across the lifecycle of pre-production, production, post-production, and distribution.

A producer’s intent is to ensure the success of the content (the movie) and to make a financial profit. The entire movie-making process generates a great deal of data, from scripts, marketing assets, actors, posters, trailers, and props to exchanges of information and ideas across the lifecycle.

Can AI or GenAI help find patterns across the breadth of data generated over the movie-making lifecycle? Can it help predict the success of movies, allowing producers, directors, and financiers to make informed decisions? NStarX data scientists have been looking at this problem for a while now!

Past original work

In SMPTE papers published in 2020 and 2021, a deep learning framework was proposed to predict the success of movies by analyzing inputs such as movie plots, posters, and metadata related to the cast and crew. This framework utilized a hybrid neural network architecture, combining RNNs, LSTMs, and CNNs, to process different data streams and produce predictions of movie ratings and revenue. The original approach demonstrated the feasibility of using deep learning techniques to provide actionable insights during the pre-production phase of movie-making, helping content creators make informed decisions to enhance the likelihood of success.

While this approach showed promising results, advancements in artificial intelligence, particularly the advent of Generative AI (GenAI), present new opportunities to further enhance the prediction capabilities of this framework. GenAI can not only refine the input data but also generate new data, simulate various scenarios, and offer more nuanced insights, leading to more accurate predictions and better decision-making.

Enhancing content success prediction with Generative AI

The integration of Generative AI into the existing deep learning framework can significantly enhance the predictive accuracy and provide richer, more actionable insights for content creators. Here’s how GenAI can be applied to improve the prediction of content success:

DATA AUGMENTATION AND SYNTHESIS

Enhanced training data

GenAI can be used to generate synthetic data, including movie plots, posters, and even simulated audience reactions. This augmented data can significantly increase the diversity and volume of the training dataset, leading to more robust and generalizable models.

Scenario simulation

By generating various hypothetical scenarios—such as different plot twists, alternate casting choices, or varying marketing strategies—GenAI can help content creators explore a wide range of possibilities. These simulations can provide insights into how different factors might influence the success of the content, enabling more informed decision-making during the pre-production phase.

ADVANCED NATURAL LANGUAGE PROCESSING (NLP)

Plot analysis

GenAI models like GPT-4 and beyond have significantly improved natural language understanding and generation capabilities. These models can analyze movie plots with greater nuance, capturing subtleties in language, theme, and narrative structure that earlier models might have missed. This enhanced understanding can lead to more accurate predictions of how a plot will resonate with audiences.

Dialogue and script generation

GenAI can also assist in generating or refining dialogues and scripts, ensuring they align with audience preferences and trends. By predicting the potential impact of specific lines or scenes, content creators can optimize scripts for better audience engagement.

IMAGE AND VIDEO ANALYSIS

Poster and trailer optimization

GenAI models can generate and analyze variations of movie posters and trailers, identifying the most compelling visual elements that are likely to attract viewers. This includes analyzing color schemes, compositions, and other aesthetic elements that resonate with target demographics.

Automated content creation

Beyond analysis, GenAI can generate promotional materials, such as alternative trailers or teaser videos, which can be tested for their potential impact on audience engagement. This capability can streamline the marketing process and ensure that the most effective content is used.

AUDIENCE SENTIMENT AND BEHAVIOR PREDICTION

Sentiment analysis

GenAI can be employed to analyze large volumes of social media data, reviews, and other sources of audience feedback. By understanding current trends and sentiments, the model can predict how future audiences might react to similar content. This real-time feedback loop can be invaluable for making adjustments during production.
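As a much-simplified stand-in for that kind of analysis, a lexicon-based scorer shows the basic mechanics. The word lists below are tiny illustrations; a production system would use a trained model rather than hand-picked vocabularies:

```python
# Minimal lexicon-based sentiment scorer, a stand-in for the large-scale
# sentiment analysis described above. The word lists are illustrative
# assumptions, not a real sentiment lexicon.

POSITIVE = {"gripping", "stunning", "masterful", "loved", "brilliant"}
NEGATIVE = {"boring", "predictable", "flat", "hated", "dull"}

def sentiment_score(review: str) -> int:
    """Positive score => favorable review; negative => unfavorable."""
    words = review.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

reviews = [
    "A gripping and stunning debut, loved it",
    "Predictable plot and flat performances",
]
print([sentiment_score(r) for r in reviews])  # -> [3, -2]
```

Aggregating scores like these across thousands of social posts is what turns raw audience chatter into the real-time feedback loop the section describes.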

Behavioral modeling

GenAI can create detailed profiles of audience segments, predicting how different groups might respond to various elements of a movie. This includes analyzing factors like cultural trends, regional preferences, and even psychological triggers, allowing for highly targeted content creation.

EXPLAINABILITY AND DECISION SUPPORT

Enhanced explainability

While the original framework proposed the development of an explainability layer, GenAI can take this further by providing more detailed and transparent explanations of how different factors contribute to the predicted success of content. This can help content creators understand the “why” behind the predictions and make more confident decisions.

Interactive decision tools

GenAI can be integrated into interactive tools that allow content creators to explore “what-if” scenarios. For example, creators could adjust certain variables (like changing the lead actor or altering the plot) and immediately see how these changes might impact the predicted success of the movie. This interactivity can lead to more informed and agile decision-making.
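A hypothetical sketch of such a what-if tool: a linear scoring model whose inputs a producer could toggle interactively. The features and weights below are invented for illustration; a real system would learn them from data rather than hard-code them:

```python
# Hypothetical "what-if" decision sketch: adjust one production variable
# and compare the predicted success score. Feature names and weights are
# invented assumptions, not a real prediction model.

WEIGHTS = {"star_power": 0.4, "genre_fit": 0.3, "marketing_budget": 0.3}

def predicted_success(features: dict) -> float:
    """Weighted score in [0, 1] from feature values in [0, 1]."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

baseline = {"star_power": 0.5, "genre_fit": 0.8, "marketing_budget": 0.6}
recast   = dict(baseline, star_power=0.9)   # what if we change the lead?

print(round(predicted_success(baseline), 2))  # -> 0.62
print(round(predicted_success(recast), 2))    # -> 0.78
```

The interactive tools the article envisions would wrap a far richer GenAI model in exactly this toggle-and-compare loop.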

FUTURE DIRECTIONS AND POTENTIAL APPLICATIONS

Cross-media applications

The techniques developed for movie content can be extended to other forms of media, such as TV shows, video games, and even digital marketing campaigns. GenAI’s ability to analyze and generate content across different media types can create a unified framework for predicting and enhancing the success of various forms of entertainment.

Real-time adaptation

As content is released, GenAI can provide real-time feedback on audience reactions, allowing for adjustments to marketing strategies, distribution plans, and even content itself. This adaptability ensures that content creators can respond quickly to market dynamics, maximizing their chances of success.

Conclusion

The application of Generative AI represents a significant leap forward in the ability to predict the success of content. By augmenting and enhancing the original deep learning framework with GenAI, content creators can gain deeper insights, explore a wider range of scenarios, and make more informed decisions throughout the content creation process.

Unveiling the future: dive deep into AI at IBC2024

The media and entertainment landscape is undergoing a seismic shift. Artificial intelligence (AI) is shaping every aspect of content creation, production, and delivery, streamlining workflows, adding efficiencies and delivering better experiences for viewers. At IBC2024, the all-new AI Tech Zone in Hall 14, powered by the EBU, promises to be the place to cut through the hype and discover the impact AI can have now and in the future.

A Gateway to innovation:

The AI Tech Zone will be home to a vibrant community including tech leaders such as AWS and NVIDIA, established producers, content creators, and forward-thinking innovators. Offering a whole host of hands-on product demos as well as learning opportunities on the AI Tech Stage, the zone will explore the possibilities of AI and push the boundaries of what’s possible.

Witness the power of AI across the media spectrum:

  • Media and content enhancement: Discover how AI is revolutionising production workflows. From intelligent metadata tagging with VionLabs to automated video editing with Magnifi, AI is streamlining processes and enhancing content. The AI Tech Zone will also showcase solutions for library management with Imaginaro.ai and explore music and audio separation with AudioShake. AI4ME will showcase how it is breaking new ground with modular content creation using object-based media while broadcasters including Swiss Radio and Television SRF, who are already implementing custom AI solutions, will share their insights.
  • Data and business optimisation: One clear way AI is adding value is its ability to unlock insights and drive efficiencies. This will be demonstrated across the AI Tech Zone with Dot Group and MobiusLabs showcasing the power of advanced analytics, Media Monks pushing the boundaries of digital marketing and creative content, and Zaibr highlighting its cutting-edge solutions for optimising business with data. Attendees will also be able to see how public-funded projects like XReco are paving the way for new AI-powered business models across Europe.
  • Advanced technology and infrastructure: AI is not just about content however; it has the power to transform infrastructure too. Solutions on show here will include improved weather forecasting with The Weather Company, Video.Taxi’s innovative and secure video hosting solutions, and Wasabi’s cloud storage powered by AI for fast, secure data management.

Beyond innovation: trust and transparency

Of course, no conversation about AI is complete without discussing the ethical considerations surrounding the technology. The AI Tech Zone will provide a space to learn more about emerging initiatives in this area, such as C2PA, designed to counter the misuse of AI, explore content provenance tracking, and address critical issues like user privacy, data security, and responsible practices. Attendees will also be able to learn more about how the vera.ai project is tackling disinformation and leveraging AI-supported verification tools.

Join the AI Revolution – register today!

Don’t miss out on this transformative experience. Be part of the conversation and witness firsthand how AI is shaping the future of media. Get inspired, network with industry leaders, and discover how AI can empower your business.

Here’s what awaits you at IBC2024:

  • The AI Tech Zone: Explore the latest AI solutions from leading exhibitors in Hall 14.
  • Accelerator Innovation Zone: Discover cutting-edge projects like the AI Media Production Lab and the Evolution of the Control Room in Hall 3.
  • Conference Highlights: Attend key sessions on AI integration in content creation, ethics of generative AI, and the future of machine learning models.

Limited-Time Offer: Register today for a visitor pass and avoid the €150 fee, which comes into effect soon!

 

IBC2024: Where the Future of Media Takes Centre Stage

We look forward to welcoming you to IBC2024 as we delve into the transformative power of AI. Together, let’s re-imagine the future of media and unlock its boundless potential.

China (Beijing) Pavilion at IBC2024

The China (Beijing) Pavilion, hosted by the Beijing Municipal Radio and Television Bureau, has organized 10 audio-visual technology companies to participate in IBC2024 (Stand No. 3.A27). As the managing institution for the development of Beijing’s radio, television, and online audio-visual industries, the Bureau has brought together companies to showcase innovative technical products and solutions such as XR virtual shooting solutions, UHD intelligent shooting systems, UHD encoders and decoders, cloud-based production systems, and portable telescopic cranes.


List of enterprises participating in the China (Beijing) Pavilion:

1. China Television Information Technology (Beijing) Co., Ltd.
CTVIT is a high-tech enterprise wholly owned by China International Television Corporation. It focuses on the research, development, and industrial application of digital TV and broadband network technology, smart media, and digital home services.

2. BOE MLED Technology Co., Ltd.
BOE MLED is a wholly owned subsidiary of BOE Technology Group that specializes in the design, production, and delivery of Mini/Micro LED displays and solutions. As one of the high-potential tracks in BOE Group’s “1+4+N+Eco-chain” business structure, BOE MLED carries forward BOE’s display technology, advanced management model, and professional production capacity.

3. Arcvideo Immersion (Beijing) Audiovisual Technology Co., Ltd.
Arcvideo Immersion is a wholly-owned subsidiary of Arcvideo Technology (stock code 688039) listed on the Science and Technology Innovation Board. Arcvideo specializes in the research and development of the BlackEye multi-modal model. This advanced model integrates reasoning and generative capabilities across video, audio, image, text, and 3D models. Our technology is applied in four major areas: Audio-Visual media, Spatial Video Computing, Industrial Vision, and Intelligent Cockpit. We provide powerful and intelligent audiovisual processing engines for all video processing and professional applications.

4. Incam Systems Co., Ltd.
Incam was founded in 2014 and is headquartered in Tianjin, PRC. It provides products and services for the broadcast and media industries. All Incam product lines are fully designed and manufactured in-house, including robotic camera systems, remote production systems, wireless control systems, optical transmission products, and customized products.

5. Kinefinity Inc.
Kinefinity was founded in 2012 in Beijing and entered the motion picture industry with the release of its first cinema camera, the KineRAW-S35, that same year. Today, Kinefinity offers a range of high-quality cinema and broadcast cameras, including the MAVO Edge 6K/8K, MAVO mark2 S35/LF, and MC8020 EFP system.

6. Beijing Zooxer Filming Technology Co., Ltd.
Zooxer is a dynamic, innovative R&D company focused on studio filming robotics. Zooxer delivers remotely controlled, unmanned picture-shooting equipment, consistently providing cutting-edge technologies to the broadcast and film industry.

7. Yukuan Technology Ltd.
Yukuan adopts cutting-edge technologies to develop and manufacture professional video and audio encoders, decoders, IRDs, multiplexers, modulators, transcoders, video/audio fiber optic extenders, splitters, converters, and switchers for DTV, IPTV, OTT, broadcast, and other applications.

8. TERIS (Beijing) Tech Trade Co., Ltd.
Teris is a leading manufacturer producing a wide range of fluid head and tripod kits, jibs, and related accessories. Founded in 2009 and based in Beijing, China, the company is staffed by experienced engineers in the broadcasting and video field. Growing fast in the Chinese domestic market, Teris holds an 80% share of the professional market.

9. Beijing SanWarm Technology Co., Ltd.
SanWarm is a high-tech enterprise specializing in the manufacture of professional video monitoring equipment. The company is committed to the research and production of professional broadcasting and television equipment, including 4K/8K UHD monitors, cinema-grade color-grading monitors, lighting, post-production, and stage color-adjustment monitors, director monitors, portable box-mounted monitoring systems, and signal processors.

10. Beijing KXWELL Technology Co., Ltd.
KXWELL is a pioneering company authorized by the Ministry of Industry and Information Technology in China. The company has played a significant role in shaping the field of intelligent shooting, particularly with its involvement in formulating the first industry standard for intelligent PTZ (Pan-Tilt-Zoom) cameras.