Technology & Trends Roadmap
Roadmap: April 2025
The IABM Technology and Trends Roadmap isn’t just for industry technologists to use as a reference.
IABM has discovered industry execs using it as a starting point for their keynote speeches; product line managers are using it to plot their own products; and corporate board members get a better understanding of where their companies’ products sit on the adoption curve, hence a better grasp of risks vs gross margins.
It also assists marketing activities by giving an indication of how best to promote products within M&E as well as in adjacent/vertical market areas.


Recent update
This year’s update has seen some major changes in the major technology and trends groupings, which the IABM Roadmap working group felt best portray the state of the various aspects of the industry. As always, this activity draws on strong industry collaboration between end-users, vendors and competitors alike; it generated plenty of discussion, debate and controversy, yet the final outcome is a remarkable example of teamwork.
Getting into the details, I always like to start with security, as it is super important and still far too often neglected, not because of the technology but because of implementation and budget issues. On the positive side, IABM research shows that cybersecurity is now emerging as an investment priority amidst cloud operations and AI growth. So, this year we are focusing more on end-to-end workflows within Security Architectures. Content security itself is certainly well understood.
Last year the IABM Roadmap group decided to focus on GenAI/ML, as it became evident that these were having a strong influence on the industry. Many use cases have clearly stabilized and been in use for some time, giving us the opportunity to place the more common usages directly into the relevant categories, just as we would with any other product or service. Areas that are not generative still needed to be highlighted, hence the focus returns to AI/ML in general.
Provenance is now better understood when it comes to LLMs (Large Language Models), so rather than being a separate category, it is incorporated into AI/ML. While discussing provenance, the group observed that it highlighted the importance of data, and that increasingly bad data is turning up, so Data Integrity became a new category.
The term “cloud” itself raises so many questions: is on-prem, off-prem or perhaps even virtualization meant? IABM has made it clear that the word “cloud” in general means “cloud services”. Whether we are discussing cloud services off-prem (public or private), cloud infrastructure and virtualization (public, private, hybrid), or microservices, these areas are now a regular part of our industry and more product- or service-focused. So there is no longer a requirement for a general cloud category going forward.
The same applies to edge computing, since it avoids moving huge media content into and between cloud services, processing it there and then moving it back again. Of course, when a sudden burst of compute power is required, a cloud service with appropriate infrastructure will be used; otherwise local infrastructure will do. The same goes for storage.
With so much greenwashing going on around sustainability, we decided to move towards Tangible Sustainability, which can cover specifics (either happening now or planned) within each area of Create, Produce, Manage, Publish and Monetize. Making it tangible brings to light how difficult it is to determine the actual energy used.
With technology becoming readily available and less specialized for each industry, more market areas are cross-sharing products and services. The number of creators using commoditized MediaTech is huge; they are now commonly referred to as the Creator Community. The M&E community needs to welcome them, as they are the future employees and entrepreneurs of the M&E industry. This Roadmap therefore reflects more information to help merge these areas and embrace the Creator Community on a two-way basis, harmonizing efforts together.
IABM research clearly shows that technology is required more and more on the business side of the industry. To better signal this need, Business Technology has been added as a category to help advance new business concepts and improve profitability.
Production – (Remote/Hybrid/LiDAR)
The Global Live Cloud Production by OBS at the 2024 Paris Games is now mature, having been introduced at the Tokyo 2020 Games. 11,000 hours of content were produced using AI-driven tools for storytelling, highlight generation, and analysis. It is becoming increasingly unclear to viewers which productions are remote, local or hybrid; this has been driven by lower latency.
The Creator Community isn’t bound by industry workflows, technology standards or even revision-control processes, and hence is a much larger market than current episodic and feature productions. Game engines are at the center of many consumer, prosumer and VR productions.
Many productions are embracing AI, not so much on the generative side but more for improving operations. We see story-centric workflows and AI-driven automation transforming news production. AI-driven video editing, real-time collaboration and automated metadata management are reshaping various media production pipelines. AI used for content ideation always keeps a human in the loop and serves as a brainstorming tool.
With the help of AI, the on-camera talent effectively becomes part of the production team, with voice control of graphics and virtual sets.
Single-unit multi-cams with an AI lead have matured and are producing thousands of lower-tier automated sports productions.
Camera direct-to-cloud capabilities continue to make production turn-around faster.
In-studio volumetric productions continue to grow, yet often still require a learning curve to match cameras with LED walls. New software addresses color matching and moiré avoidance.
3D laser scanners (LiDAR) are used in many areas now. LiDAR captures metadata on location along with the video image. Once ingested, this LiDAR metadata saves on modeling worktime (“sampling” rather than “synthesizing”). LiDAR also aids in making a complete digital replica (digital twin) of a studio or movie set, which improves both the speed of production changes and safety.
Services – (XaaS/Microservices/QC)
Cloud operations used to be considered the home for services. This is no longer realistic, as services can be local or even hybrid, so the focus in these categories is on what the services actually produce against a specific requirement. Playout services have been mature for some time, even considered a commodity unless low latency is required. We are seeing more innovative quality control (QC) technology approaches for services, beyond standard in-range level verification. Frame-by-frame content validation across the whole media distribution chain is a key example: by comparing multiple feeds, it allows ad insertions to be verified and confirms that audio is synchronized with the video.
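One way to approach that kind of frame-by-frame comparison is with perceptual hashes. The minimal Python sketch below assumes two feeds whose frames have already been decoded to image files; the libraries used (Pillow and imagehash), the paths and the threshold are illustrative choices rather than anything specified in the Roadmap.

```python
# Minimal sketch: compare a reference feed against a downstream (e.g. CDN
# return) feed with perceptual hashes to flag ad-insertion points or corrupted
# segments. Assumes frames are already decoded to image files; paths, the
# imagehash/Pillow choice and the threshold are illustrative.
from PIL import Image
import imagehash

def frame_signatures(frame_paths):
    # Perceptual hash per frame: robust to re-encoding, sensitive to content swaps.
    return [imagehash.phash(Image.open(p)) for p in frame_paths]

def compare_feeds(reference_paths, downstream_paths, threshold=10):
    # Yield frame indexes where the downstream feed no longer matches the reference.
    ref_sigs = frame_signatures(reference_paths)
    down_sigs = frame_signatures(downstream_paths)
    for i, (ref, down) in enumerate(zip(ref_sigs, down_sigs)):
        if ref - down > threshold:   # Hamming distance between the two hashes
            yield i                  # e.g. an inserted ad or a corrupted segment
```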
Standardized control of microservices is starting to be tested at the early adopter stage. This is a must for true interoperability within the MediaTech ecosystem, whether running locally or remotely.
AI services are mature for applications such as speech-to-text, subtitling, translation across several languages, storage analysis/de-duplication, etc. GenAI services do still suffer from hallucinations; it is best to remember that “generative AI” is really “predictive AI”.
Infrastructure – (Storage/Edge/Compute/Networks)
Flash storage density continues to increase, with 128 TB QLC flash NVMe drives now shipping. HAMR and MAMR technologies, along with graphene overcoats, are used to increase areal density in HDDs.
Edge compute is now targeting AI inferencing at the time of data creation. CPU core counts continue to increase (currently 192), with variable clock speeds used for thermal management (direct liquid cooling is required above 400 watts).
Faster DDR5 SDRAM (64 GB/s per DIMM) and PCIe 6 (twice the throughput of PCIe 5) are arriving. CXL 3, built on PCIe 6, allows cache-coherent CPU access to FPGA and GPU shared memory. DPUs enable new protocols (e.g. ST 2110) via FPGA acceleration on the NIC. Accelerators allow real-time use of low-latency codecs such as JPEG XS. Lower protocol latency than TCP/IP is achieved via GPUDirect and NFS over RDMA.
All-IP Global Connectivity and Infrastructure enables seamless content acquisition and distribution. Field trials have successfully transported 6 x 100GE and 1 x 400GE high-speed services over 1Tb/s single wavelength across 850km using DWDM.
The cost of getting data out of public cloud operations or moving between cloud services (to get the data closer to the service location) is a limiting factor. On-prem or co-located object stores encourage repatriation of assets from hyperscalers. Object stores provide a single source of truth for global collaboration, improved garbage collection and native metadata search – but few M&E applications are object native so POSIX gateways are still common for object-to-file workflows.
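As an illustration of the object-native versus POSIX-gateway distinction above, here is a minimal Python sketch using the S3 API via boto3; the endpoint, bucket, key and local gateway path are hypothetical.

```python
# Minimal sketch of object-native access via the S3 API (boto3). The endpoint,
# bucket, key and gateway path are hypothetical. Applications that are not
# object native would instead see the same asset through a POSIX gateway as an
# ordinary file.
import boto3

s3 = boto3.client("s3", endpoint_url="https://objects.example.internal")

# Object-native access: metadata travels with the asset and is searchable.
head = s3.head_object(Bucket="media-masters", Key="promo/clip_0001.mxf")
print(head["ContentLength"], head.get("Metadata", {}))

# Hand-off to a file-based (POSIX) tool that cannot speak S3 directly.
s3.download_file("media-masters", "promo/clip_0001.mxf",
                 "/mnt/posix-gateway/clip_0001.mxf")
```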
AI/ML – (Responsible AI/Machine2Machine/Provenance)
We are seeing more and more AI solutions that streamline workflows in an accountable manner, typically by automating repetitive tasks. For example, by analyzing newsroom stories, AI can select appropriate templates and automatically populate them with text, images and video clips from the broadcaster’s content repository. This instantly streamlines graphics workflows and provides for localization.
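A purely illustrative Python sketch of this kind of template selection follows; the template names, keywords and scoring rule are hypothetical stand-ins for what would really be an ML classifier working against the broadcaster’s own repository.

```python
# Purely illustrative sketch of automated graphics-template selection from a
# newsroom story. Template names, keywords and the scoring rule are
# hypothetical; a production system would use an ML classifier and the
# broadcaster's own content repository.
TEMPLATE_KEYWORDS = {
    "election_results": {"election", "poll", "vote", "ballot"},
    "severe_weather": {"storm", "hurricane", "flood", "warning"},
    "sports_recap": {"score", "match", "league", "final"},
}

def pick_template(story_text: str) -> str:
    words = set(story_text.lower().split())
    scores = {name: len(words & kws) for name, kws in TEMPLATE_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "generic_news"

print(pick_template("Storm warning issued as flood waters rise"))  # severe_weather
```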
Other examples are Smart Trimming and scaling to improve VOD and FAST distribution workflows.
Using AI to quickly find the exact assets required with automated licensing is one example of Machine to Machine (M2M) AI.
AI is also being used to break down silos, for example through a unified AI-powered platform that incorporates content monetization, contextual advertising, content curation and content bundling for video service providers.
Since approaches to defining AI copyright vary drastically from region to region globally, this will affect technology rollouts far more than issues such as out-of-country cloud storage, which was much easier to control. Note that GenAI does hallucinate, as well as making errors due to unscrupulous training data; hence the responsibilities of using GenAI cannot be taken lightly. Knowing the provenance of the training data is paramount.
The use of AI with text-to-video drastically reduces animation time; however, clips longer than 4 or 5 seconds aren’t currently usable.
Data Integrity – (LLMs/Metadata/C2PA/Hallucinations)
Data often comes from a myriad of historical documents, which makes it difficult to know the authenticity of content. This is paramount for brand protection as well as for the monetization of content.
Generative AI algorithms sort through existing data to create content; the importance of knowing precisely what data the Large Language Model (LLM) was trained on cannot be overstated, as feeding LLMs unreal or LLM-generated content increases hallucinations and degrades output. This is also affected by model size, as hallucinations drop by 3 percentage points for each 10x increase in model size. Assuming a clean LLM, it is predicted that hallucinations will hit zero in 2027, lining up with the next-generation AI often referred to as “AGI” (artificial general intelligence). Understanding and logging when AI is used to generate data must become a standard part of the data input process.
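The extrapolation behind that prediction can be made explicit. Below is a small worked calculation in Python applying the stated rule of thumb of 3 percentage points per 10x increase in model size; the starting hallucination rate and parameter counts are illustrative only.

```python
# Worked example of the rule of thumb quoted above: hallucination rate falls by
# 3 percentage points per 10x increase in model size. The starting rate and
# parameter counts are illustrative, not figures from the Roadmap.
import math

def projected_hallucination_rate(start_rate_pct, start_params, target_params,
                                 drop_per_decade_pct=3.0):
    decades = math.log10(target_params / start_params)   # number of 10x steps
    return max(0.0, start_rate_pct - drop_per_decade_pct * decades)

# e.g. a model at 9% hallucination with 10B parameters, scaled to 1T parameters:
print(projected_hallucination_rate(9.0, 10e9, 1e12))   # 9 - 3*2 = 3.0 (%)
```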
To even begin to distinguish real from AI-generated content requires analysis or trusted provenance of the data. Analysis struggles to keep pace with evolving synthetic content. The Coalition for Content Provenance and Authenticity (C2PA.org) binds cryptographically signed provenance manifests to media assets to counter misleading media and assure trusted provenance. This technique is more stable but slower to implement.
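To show the principle (not the actual C2PA wire format), here is a conceptual Python sketch of verifying a signed provenance manifest: check that the asset still matches the hash recorded in the manifest, then check the publisher’s signature over the manifest. The manifest fields and key handling are hypothetical.

```python
# Conceptual sketch of signed-manifest provenance checking (NOT the C2PA wire
# format): the manifest carries a hash of the asset plus provenance claims and
# is signed by the publisher. Manifest fields and key handling are hypothetical.
import hashlib, json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_asset(asset_bytes: bytes, manifest: dict, signature: bytes,
                 publisher_key: Ed25519PublicKey) -> bool:
    # 1. Does the asset still match the hash recorded in the manifest?
    if hashlib.sha256(asset_bytes).hexdigest() != manifest["asset_sha256"]:
        return False
    # 2. Was the manifest really signed by the claimed publisher?
    try:
        publisher_key.verify(signature, json.dumps(manifest, sort_keys=True).encode())
    except InvalidSignature:
        return False
    return True
```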
Ethical data sourcing is required due to legal issues with data provenance. Some LLMs now provide “clean data” guarantees, but “trust washing” is a problem, as few are able to verify the provenance of training data.
Immersive & Imaging – (8K/Audio/XR/MicroLED)
Cameras for 8K imaging are readily available for a range of applications, from stills photography to cinema production. 8K is now in widespread use for image capture in 4K projects. Ultra HD and HD workflows for live production with HDR are mature and well proven.
Higher-end mobile devices are being used for much more than social media. Rigs are being put together with two phones, one for image capture and the other for data capture. Mobile devices are not designed for long shoots, so fan cooling is added to the rig to keep the sensors cool. In another example, where higher-end cameras would be too expensive, 14 mobile devices are “locked together” during a shoot to capture a “4DGS” (4D Gaussian Splatting, i.e. temporal 3D Gaussian Splatting).
Solutions for creation of spatial content continue to evolve for generation of real-world spaces and objects for XR environments.
Newer MicroLED display technology uses a high-density array of over 24.88 million microscopic LEDs (a 4K UHD panel of 3840 × 2160 pixels with three sub-pixel LEDs per pixel gives 24,883,200 emitters). Since each pixel is its own light source, composed of independent red, green and blue LEDs, the need for a traditional backlight is removed.
Secure Architectures – (Cyber, Zero Trust, End-to-End Workflows)
IABM research shows cybersecurity is becoming an investment priority amidst cloud operations and AI growth. Compute and networking architectures have clearly migrated to on- and off-prem virtual infrastructure, generating constant security concerns. We see security frameworks and standards (such as Zero Trust) in use, with NMOS IS-10 and OAuth2 becoming common elements of RFP processes.
Security must extend beyond physical protection and into the security and integrity of the code that defines these infrastructures. End-users are suspicious of the code they are using and are now taking it upon themselves to review it for malware. This builds more confidence in the architectural foundation on which secure applications and workflows run. It also covers dependence on open-source repositories (GitHub, Bitbucket or similar), which can contain undetected flaws, test ports that were never removed, etc., resulting in possible breaches.
Contribution feeds for streaming are typically secured with various transport protocols; however, once content is decrypted at the CDN, piracy can occur. This is known as CDN leeching, a serious issue with many CDN architectures.
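A common mitigation, independent of any particular CDN, is tokenized (signed, expiring) segment URLs so that leeched links quickly stop working. A minimal Python sketch follows; the shared secret, URL format and TTL are illustrative, and real CDNs each have their own token scheme.

```python
# Minimal sketch of tokenized (signed, expiring) segment URLs, a common
# mitigation for CDN leeching. The shared secret, path format and TTL are
# illustrative; each CDN has its own token scheme.
import hashlib, hmac, time

SECRET = b"rotate-me-regularly"   # shared between origin and CDN edge

def sign_url(path: str, ttl_seconds: int = 30) -> str:
    expires = int(time.time()) + ttl_seconds
    token = hmac.new(SECRET, f"{path}{expires}".encode(), hashlib.sha256).hexdigest()
    return f"{path}?exp={expires}&tok={token}"

def validate(path: str, expires: int, token: str) -> bool:
    if time.time() > expires:
        return False   # link has expired
    expected = hmac.new(SECRET, f"{path}{expires}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)

# print(sign_url("/live/ch7/seg_001.m4s"))
```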
Zero-trust security has replaced perimeter or firewall defenses in enterprise and studio best practices. Software-defined broadcast applications are being platformed with M&E-specific AI microservices.
Contribution/Delivery – (Transport/5G/CDN/Public Data/BPS)
Ubiquitous, seamless live video transport with ultra-low-latency streaming and end-to-end security, all backed by AI-driven analytics to automate delivery, has matured rapidly.
Managing contribution streams is incredibly challenging, as they are highly customized and usually different for each CDN; hence “best practices” are required to help the streaming industry improve profitability. The same goes for multiple camera feeds from the same event sent to streamers, which are becoming more popular.
We are seeing very low bit rate versions of FAST channels using terrestrial distribution. This is clearly more sustainable than streaming via the Internet.
ATSC 3.0 transmitters have the capability for “distribution of data as a service”, distributing public data and files. Although not firmly in use yet, provisioning of the TX multiplex to handle data is underway, since this type of datacasting will drive new sources of revenue. The Broadcast Positioning System (BPS) uses ATSC 3.0 for geolocation services as an alternative (back-up) to GPS.
Orchestration & Scalability – (Automation, Provisioning)
Orchestration and automation designs are growing ever more complex as efficiencies are sought in cloud and hybrid options. Operations that scale, yet remain cost-effective, and secure, are increasingly difficult to manage.
Best of breed is desired, hence multi-vendor design partnerships continue, which in turn requires clear responsibility between partners. The same applies to major vendor consolidation. Standardization and “best practices” help here, and our efforts are gaining scale.
Provisioning of FAST/AVOD is growing much easier while traffic-and-ad-server resources are moving toward integration.
Difficulty in identifying and solving problems in software and networks is clearly slowing adoption, demonstrating that budgets are needed for additional training. The same applies to understanding how automation can simplify repetitive processes, for example how metadata management can free up internal teams for creative and strategic initiatives.
Orchestration tools are scalable, less error-prone, and help with sustainability by using more power-efficient resources when applicable, even powering down systems when not in use.
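As an illustration of that power-down behaviour, here is a short sketch of an idle-aware reaper; the metrics source and power-control calls are hypothetical placeholders for whatever the orchestration platform actually exposes.

```python
# Illustrative sketch of idle-aware orchestration: power down nodes that have
# stayed below an idle threshold. get_utilization() and power_down() are
# hypothetical placeholders for the orchestration platform's own APIs.
IDLE_THRESHOLD = 0.05   # 5% utilization
IDLE_MINUTES = 30       # sustained idle window before acting

def reap_idle_nodes(nodes, get_utilization, power_down):
    for node in nodes:
        samples = get_utilization(node, minutes=IDLE_MINUTES)
        if samples and max(samples) < IDLE_THRESHOLD:
            power_down(node)   # capacity headroom is kept on other nodes
```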
Tangible Sustainability – (Energy Consumption/DataCenters/Remotes)
MediaTech buyers who prioritize AI and machine learning in their technology roadmaps place a stronger emphasis on sustainability when making purchasing decisions. This varies region by region.
Terrestrial distribution of FAST channels is more sustainable than the streaming alternative. The same can be said for remote productions.
Global operators are aiming for carbon footprint neutrality, and their commitments are increasingly influencing the entire supply chain (GHG Protocol Scope 3). Vendors may not be subject to Scope 3 within their country; however, they need to be prepared when shipping product globally.
It is uncertain whether moving to public clouds generally means an improvement in sustainability. GreeningofStreaming looked at data centers, for example: Power Factor is a huge concern, and by actively managing Power Factor one data center reduced its total power draw by nearly 15% while maintaining the same computing capacity. These improvements required significant investment in both equipment and expertise. One network router showed an excellent Power Factor (0.95 or better) at full capacity but dropped to much lower values (0.6 or worse) under light loads. This means a router rated for 500 watts might require nearly twice that much power generation during low-traffic periods due to poor Power Factor. A Power Factor of 1.0 indicates perfect efficiency, whereas 0.5 requires twice as much power generation as its measured consumption suggests.
On the consumer side, television sets demonstrated even more dramatic variations in Power Factor, ranging from 0.9 when displaying bright, dynamic content to as low as 0.1 in standby mode.
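The arithmetic behind these figures is simple: apparent power (in VA), which generation and distribution must be sized for, equals real power (in watts) divided by Power Factor. A short worked example in Python, using the router figures quoted above as illustrative inputs:

```python
# Worked Power Factor example: apparent power (VA) = real power (W) / Power Factor.
# The 500 W figure is the illustrative router rating quoted above.
def apparent_power_va(real_power_w: float, power_factor: float) -> float:
    return real_power_w / power_factor

print(apparent_power_va(500, 0.95))  # full load, good PF:   ~526 VA
print(apparent_power_va(500, 0.6))   # light load, poor PF:  ~833 VA
print(apparent_power_va(500, 0.5))   # PF 0.5: 1000 VA, twice the measured consumption
```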
“Right-to-repair” laws are encouraging repair of electronic goods instead of replacement, leading to a reduction in electronic waste.
Creator Community – (Corporate/Medical/Pro-AV/Social)
With the technology gap shrinking between high-end prosumer and consumer products, the creator community is huge. CapCut has 300M monthly active users, making it the most popular editing application today. This industry movement away from specialist tools to generalist tools has enabled the creator economy by building a low-friction sphere of influence around Pro-AV protocols, 5G connectivity to public cloud, and mobile social media apps.
Corporate use of social media for marketing & branding has increased the boardroom market for IP displays and cameras, tied in with AR and training uses.
Physical rig and drone manufacturers can deliver inexpensive rigs for multiple mobile devices (as in Imaging above) with the same agility as software, and medical/dental 3D imaging wands allow custom prostheses to be delivered the next day.
“Video podcasts” didn’t exist 10 years ago, yet 84% of podcasts are viewed on video platforms today – 33% of all media is consumed on mobile. Manufacturers need to build a technology migration path towards this low-friction sphere of influence and avoid one-offs in favor of scalable systems.
In areas where in-home production is not possible due to small or cramped living spaces, micro-studios are increasingly being used on an as-needed basis.
Business Technology – (Content Performance/Adtech/Churn)
Technology on the business side must be considered as much as on the technical side of operations. AI and ML are quickly augmenting standard analysis with deeper answers.
Granular audience metrics are being combined with content data, providing detailed analysis of how specific content performs for a specific audience. This in turn is used to train models that determine the best-matching distribution outlet for specific content. With so many publishing options available now, expect to see this become fully automated.
The evolution of Ad Tech continually opens new avenues for monetization, allowing broadcasters to offer more targeted and dynamic advertising.
AI engines are being used to combat Ad Spoofing.
Fan engagement isn’t necessarily new; however, with more one-to-one personalization options, fan engagement is turning more generally into audience engagement. Newer AI platforms focused on content monetization and contextual advertising take advantage of understanding how best to bundle content for a specific video service provider.
Analytics used to combat OTT subscriber churn now use AI with multiple data sources to provide more complete user profiling, in some cases predicting churn with up to 95% accuracy and lowering churn rates by 30% in three months.
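As an illustration of the kind of multi-source churn model described, here is a minimal scikit-learn sketch; the feature names and data are entirely illustrative.

```python
# Minimal sketch of a multi-source churn model using scikit-learn.
# Feature names and data are illustrative; real deployments combine billing,
# viewing, support and engagement data at much larger scale.
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: watch_hours_last_30d, days_since_last_login, support_tickets, plan_price
X = np.array([[42, 1, 0, 9.99], [3, 25, 2, 14.99], [18, 4, 0, 9.99],
              [1, 40, 3, 19.99], [55, 2, 1, 14.99], [2, 30, 0, 9.99]])
y = np.array([0, 1, 0, 1, 0, 1])   # 1 = churned

model = LogisticRegression(max_iter=1000).fit(X, y)

# probability that a new subscriber profile will churn
print(model.predict_proba([[5, 21, 1, 14.99]])[0, 1])
```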
Ad insertion for traditional linear is mature; however, Dynamic Ad Substitution (DAS) is still at the early adopter stage and quite complex.
The cost of getting data in and out of public cloud operations is changing some CFOs’ opinions about using public cloud services. The same applies to recognizing and tackling the financial management of “pay as you go” cloud service costs.
View the Standards Activities page here.