Tedial – Composable NoCode UI Design

Emilio L. Zapata, Founder, Tedial

 

Over the years, Gartner has developed the concept of the composable enterprise to facilitate digital transformation. Modularity is at the heart of composability. Software-based business innovation becomes viable when organizations master the capability to assemble, reassemble and extend applications from an ecosystem of ready-made components. The application architecture must manage the interaction of functional components; done well, software composition speeds up innovation while keeping the quality and manageability of the application landscape under control.

Composable UX & UI

In digital product design, a common mistake is overloading interfaces with features that can overwhelm users and lower their satisfaction. Composable UX addresses this by structuring experiences as modular, reusable, well-scoped UI components (cards, lists, form controls, search widgets, etc.) that can be assembled to meet different user needs and contexts.

The goal of user interface (UI) design is to visually guide the user through a product’s interface. It’s about creating an intuitive experience that doesn’t require excessive mental effort. This involves removing unnecessary elements, simplifying complex interactions, and prioritizing content based on user needs, while also enhancing the overall aesthetic appeal and facilitating navigation and interaction with the interface.

NoCode platforms, which are themselves built around visual components, templates, and pre-wired integrations, accelerate this process by exposing those components in visual builders, letting product teams assemble interfaces quickly without hand-coding. Why composable UX/UI fits NoCode and how microservices complete the picture:

  • Microservices alignment: The modularity of UI components maps naturally to microservices: independent services expose focused APIs that match a component’s data and behavior needs.
  • Integration-first UX: NoCode tools usually include connectors and data bindings that make it straightforward to wire UI components to back-end sources (APIs, spreadsheets, databases, automation flows).
  • Speed of assembly: Visual builders let non-developers compose interfaces from reusable components, shortening prototyping and release cycles.
  • Consistency at scale: Shared component libraries and templates ensure consistent interaction patterns and visual language across multiple pages or products.
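As a sketch of this integration-first pattern, the snippet below shows how a well-scoped UI component might be wired to a back-end source through a declarative binding and assembled into a screen. The `Component` and `DataBinding` types, the endpoint path, and the `renderScreen` helper are all illustrative assumptions, not any specific NoCode platform's API.

```typescript
// Hypothetical sketch: a reusable component bound to a back-end source.
interface DataBinding {
  source: string;               // e.g. a REST endpoint or spreadsheet id
  fetch: () => Promise<unknown>; // connector supplied by the platform
}

interface Component {
  name: string;
  binding?: DataBinding;        // optional: not every component needs data
  render: (data: unknown) => string;
}

// A screen is just an ordered composition of components.
async function renderScreen(components: Component[]): Promise<string> {
  const parts: string[] = [];
  for (const c of components) {
    const data = c.binding ? await c.binding.fetch() : null;
    parts.push(c.render(data));
  }
  return parts.join("\n");
}

// Assemble a screen from two ready-made components.
const assetList: Component = {
  name: "asset-list",
  binding: { source: "/api/assets", fetch: async () => ["clip-01", "clip-02"] },
  render: (data) => `AssetList: ${(data as string[]).join(", ")}`,
};
const searchBox: Component = {
  name: "search",
  render: () => "SearchBox",
};

renderScreen([searchBox, assetList]).then(console.log);
```

The point of the sketch is that the component owns its rendering while the binding owns its data source, so either side can be swapped without touching the other.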

UX and UI are closely related, and the product’s interface design has a significant impact on the overall user experience. Some UX design principles for creating intuitive, meaningful, and effective interfaces are: user-centered design, usability and accessibility, simplicity and consistency.

 

NoCode WYSIWYG App Builder

Modern no-code UI app builders use a component-based architecture, built on popular frontend frameworks such as React, Vue, and Angular. WYSIWYG (what you see is what you get) editing is the most widely used approach in user interface design because it provides multiple advantages:

  • NoCode editing: A WYSIWYG editor, combined with a form builder, provides a user-friendly interface that doesn’t require coding knowledge. This simplicity makes it accessible to a wide range of users, including those without technical backgrounds.
  • Real-time visualization: One of the key advantages of a WYSIWYG editor is its ability to show users an immediate preview of how the content will appear. This instant visual feedback helps users make informed design and content decisions without needing to toggle between editing and preview modes.
  • Ease of use: The editor is easy to understand. New users don’t need time-consuming, complex training and can start creating content right away without learning a programming language, making the content creation process faster and more efficient.
  • Time-saving: The immediate visual feedback and simplified interface can save a significant amount of time compared to writing code manually or using more complex design tools.
  • Creativity: The editor allows you to optimize your UI for desktop and mobile, using separate sets of the component library. Because users can see the results of their changes in real time, they can quickly experiment with different designs, layouts, and formatting options, pushing creativity to its maximum.
  • Decoupled architecture: Composable UX requires a decoupled, microservices-based architecture that lets users work without limitations or restrictions imposed by the backend.

Typically, a WYSIWYG app builder includes a wide selection of customizable user interface elements (component library) that allows precise control over the appearance of each screen, on each device. Furthermore, it is complemented by a form builder because most applications (CMS, DAM, MAM, PAM) include forms to present the metadata that the user needs in an omnichannel context. This architecture enables rapid, screen-by-screen customization, giving users the flexibility to adapt their UI design.
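A minimal sketch of what such screen-by-screen customization could look like once persisted as data follows; the `ScreenDefinition` shape, component ids, and form fields are invented for illustration, not an actual builder schema.

```typescript
// Hypothetical shapes a WYSIWYG builder might persist per screen.
interface FormField {
  name: string;
  type: "text" | "date" | "select";
  required: boolean;
}

interface ScreenDefinition {
  device: "desktop" | "mobile";
  components: string[];           // ids drawn from the component library
  form?: { fields: FormField[] }; // metadata form, from the form builder
}

// The same app, customized screen-by-screen for each device.
const desktopScreen: ScreenDefinition = {
  device: "desktop",
  components: ["search", "asset-grid", "metadata-panel"],
  form: {
    fields: [
      { name: "title", type: "text", required: true },
      { name: "airDate", type: "date", required: false },
    ],
  },
};

const mobileScreen: ScreenDefinition = {
  device: "mobile",
  components: ["search", "asset-list"], // a smaller set from the same library
};

console.log(desktopScreen.components.length, mobileScreen.components.length);
```

Because the screen is plain data, swapping a component or a metadata field is a configuration change rather than a code change, which is what makes the builder accessible to non-developers.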

Component Library

A WYSIWYG component is a reusable, independent, self-contained functional element that enables designers to assemble full interfaces visually, without writing code. Each component encapsulates a specific functionality — for example: video players, metadata viewers, timeline annotation tools, or media file managers — and can be freely combined to create highly customized screens tailored to user needs. Key characteristics include:

  • Automatic interaction between components: Although each component is independent, it can automatically interact with others placed on the same screen. When multiple components are dragged onto the canvas, each one “discovers” the others present and activates predefined interaction logic. For example, placing a player next to a comment viewer or timeline results in automatic synchronization, requiring no configuration. This built-in intelligence ensures seamless composition: simply place components together, and they start cooperating immediately.
  • Flexible configuration per component: Every component can be adapted to its specific context. A single component may display different levels of detail, allow editing or restrict interactions, or expose a tailored set of actions depending on the requirements of the screen. This flexibility avoids duplicating logic and ensures reusability across diverse scenarios.
  • Reusable global configurations: Components can inherit global application settings — shared actions, form templates, search filters, display layouts, etc. — reducing repetitive setup and reinforcing consistency. These defaults can be overridden when necessary, balancing standardization with customization.
  • Design for multiple contexts and environments: The platform supports creating customized screens for various usage scenarios: general browser access, mobile and tablet layouts, integrations with third-party tools (such as Adobe Premiere Pro via UXP), or simplified external-access screens with limited permissions. This adaptability enables designing experiences tailored to each user role or environment.
  • Continuous evolution of the component library: The component ecosystem grows over time as new use cases emerge. By identifying evolving needs in the media and entertainment market, the library continually expands with new components that deliver real, tangible value. This ensures the platform stays up-to-date and anticipates future requirements.
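The automatic-interaction idea can be sketched with a shared bus that components join when placed on the same canvas: each peer that understands an event reacts to it, with no wiring by the user. `CanvasBus`, `Player`, and `Timeline` are illustrative names under that assumption, not a real API.

```typescript
// Hypothetical sketch of components discovering each other via a shared bus.
type Handler = (payload: number) => void;

class CanvasBus {
  private handlers = new Map<string, Handler[]>();
  on(event: string, h: Handler): void {
    const list = this.handlers.get(event) ?? [];
    list.push(h);
    this.handlers.set(event, list);
  }
  emit(event: string, payload: number): void {
    for (const h of this.handlers.get(event) ?? []) h(payload);
  }
}

class Player {
  constructor(private bus: CanvasBus) {}
  seek(seconds: number): void {
    // Broadcasting position lets any peer that understands "timecode" follow.
    this.bus.emit("timecode", seconds);
  }
}

class Timeline {
  cursor = 0;
  constructor(bus: CanvasBus) {
    // "Discovery": on placement, the timeline subscribes to position updates.
    bus.on("timecode", (t) => { this.cursor = t; });
  }
}

// Drop both components onto the same canvas: they cooperate immediately.
const bus = new CanvasBus();
const player = new Player(bus);
const timeline = new Timeline(bus);
player.seek(42);
console.log(timeline.cursor); // 42
```

Neither component references the other directly, which is what keeps each one independently reusable while still synchronizing out of the box.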

     

Conclusions

The core technical principle of composable UX lies in composability: the ability to design UI components that seamlessly integrate and operate across different contexts. A WYSIWYG component-based platform amplifies this principle by transforming screen creation into an agile, visual, modular, and business-oriented process. It empowers non-technical teams to design sophisticated interfaces, reduces reliance on development resources, standardizes functionality through robust reusable components, and encapsulates business logic within maintainable, modular units.

The platform’s automatic component interactions — combined with flexible global and per-screen configurations — produce an exceptionally powerful system in which screens are not only built rapidly, but also behave intelligently and consistently from the start. This architecture delivers speed, consistency, customization, reusability, and scalability, reinforcing the platform’s ability to adapt to evolving business and technological needs.

Together, composable UX/UI and no-code technologies bridge the gap between business strategy and technical execution, ensuring that digital experiences can evolve in direct alignment with organizational goals. In essence, composable and no-code UX/UI approaches accelerate digital transformation and serve as foundational enablers of the composable business model.

     

On Air 2025: The Future of Media Talent – It’s Here!

Carrie Wootten

I am acutely aware as I write this piece that I will never be able to thank everyone enough, or indeed include all their names in this article, as we had just under 1000 people involved. But please know as you read this that I have never underestimated your contribution to, or impact on, this project.

Thank you. You made On Air 2025 happen.

On Air 2025 seemed to fly by in a matter of seconds, even though we had been preparing for the event for six months. Having now fully recovered from the crazy few days, I am incredibly proud of what the team and the international student network produced. What began as a small idea became something truly extraordinary: a 24-hour live global broadcast, created by over five hundred students from seventeen universities across six continents. It still feels surreal that we pulled it off – although I never doubted the extreme talent and capabilities of the industry professionals and students I had the immense pleasure to work with.

This project was always about building an eco-system to give students real-world experience while giving industry access and direct contact to the next wave of talent. On Air 2025 was designed to showcase the next generation of creative and technical talent while proving what’s possible when education, innovation and collaboration meet. What became evident over the months of preparation is that although this was the ambition, this wasn’t a training exercise; as Stephen Stewart reminded me, we were creating a fully-fledged TV channel, with editorial, scheduling, compliance, technical and operational issues to address.

And of course, issues did come up.

Niki Whittle was the glue that held us all together from the start – her deep understanding of technology, production and logistics was mission critical for this event. The brilliant Sarah Chase, Laurissa Yeung Shea and Paul Walsh were continuously adapting and changing the schedule as more universities and locations came on board. I think we started with around 10, and each time a new one was confirmed, I promised it would be the last – but of course it wasn’t! And when, on 14th October, just 36 hours before we went live, Sydney let us know that they couldn’t broadcast live, they, along with the amazing Levira playout team (Martii Kinkar, Sven Rekkaro, Victoria Butt), had to adapt again, at speed. There were of course technical issues too on the test day – the brilliant technical team of Russell Trafford-Jones, Tim Guilder, Simon Blunt, Kendrick Foo, John Biltcliffe and Scott Kerr were working flat out to test with every location, starting at 9am with Brisbane right through to Washington at 9pm. Whether it was frame rates, ports being opened, lip-sync or latency – it was all going on! And this was before the broadcast had even started.

In addition to this, we were also using innovative tech that is at the forefront of production. From Vizrt’s Flowics for our graphics, to Clear-Com’s Gen-IC, which provided our global intercom systems so we could communicate from playout to every international location – it was all vital. This was never truer than when implementing TAMS on AWS too, using Techex tx darwin, where all the live studio feeds were ingested into an AWS S3-based TAMS store as one continuous, time-addressable source. Students working remotely were able to access the store through the Drastic plug-in, clipping highlights and social packages directly from the live timeline. Having innovative technology was a key element that I wanted to embed in the project wherever we could – providing students the access and opportunity to understand how the industry is evolving is critical to their integration into the sector once they graduate.

On this note, the Ravensbourne students were outstanding; they had a triple threat to manage. They were managing the 24-hour production, fulfilling all gallery and studio roles, as well as supporting playout. In addition, they had their own hour of content to produce and deliver, as well as an Alumni event to film and stream. A couple of them also grasped the opportunity to create the presenters’ script – and with the many changes right up to the last minute, they were ultimate professionals. It is probably a huge understatement to say that they had little time to prepare! And some of them had only been involved in live broadcast a few times up until that point. But their dedication, talent and ability to adapt to continuously moving goalposts was phenomenal. A huge thank you to Tim Verrinder, Bill Hobbins, their brilliant team, alongside Howard Austin and his colleagues who all made it run so smoothly.

Perhaps one of the areas that I didn’t fully appreciate until the night was the breadth of cultures we were showcasing on the channel. Moving as the sun rose across the globe literally gave us a moment in time to see the creativity and the stories of the people within every country. This also included the small segments with Rise Academy and CNN, where we were able to bring in primary and secondary school children, giving them access and exposure to live television too – an element I am proud we managed to develop in the schedule.

Following the sun gave us a visual guide to the world, all seen through young people’s eyes. It was powerful, impactful and joyous to see. The project started out being about global connections, but what has emerged in addition is connection through storytelling – which, of course, is always at the heart of our industry and why we do what we do.

A critical team I should also mention is our amazing presenters – they were the ones driving the energy of the channel, as well as representing its values and what On Air was trying to achieve. They were flawless at it. Perhaps one memory that will stay with me forever is when we needed to fill some time quickly and Suzie Cox started a morning stretch class with Mya – showing off her Vogue Gladiator moves – priceless! And then when we signed off the broadcast at 1am on 17th October, Mya and Urban brought it together beautifully, highlighting what the channel had accomplished and the memories from across the 24 hours.

The final few moments of the broadcast I will treasure forever – Charlotte Layton running around with the post-production students to get a recap to air, Sarah getting the streamers ready and everyone realising that we’d done it.

The power of collaboration, a shared goal and community.

Oddly I never really felt tired during the broadcast – I think it must have been the adrenaline (or maybe those jet-lag feelings from NAB helped?!). It definitely took me a good few days to be able to sleep again. Perhaps it’s because I can’t stop thinking about next time?

There are so many ways we can expand and grow On Air – the debrief has happened and plans are already underway for On Air 2026. If you would like to be involved, please let me know.

This is building the future generation of talent – something that you don’t want to miss out on!

On Air: bring on the talent!

Stuart Ray, IABM

If you are the sort of person who thinks to yourself: “I know what I should do! I should set up a 24-hour live global broadcast featuring contributions from University TV and Film courses from around the world, get loads of industry people and companies to volunteer their time and run it on YouTube” you are probably dangerously unhinged. Or you are Carrie Wootten.

‘On Air’ went live on YouTube on Thursday 16 October 2025. It only lasted 24 hours, but it was one of the most extraordinary days in TV history.

The project brought together 17 universities and schools from six continents – each submitting up to an hour’s worth of content to be seamlessly woven together in a 24-hour broadcast managed out of Ravensbourne University in London.

The brainchild of Carrie Wootten, CEO and Founder of the Global Media and Entertainment Talent Manifesto, the concept was designed to showcase the incredibly talented young people representing the next generation of the Media Technology industry.

Starting at 1.00am UK time with a game show contributed by Griffith University in Brisbane, and culminating in the early hours of the following morning with California State University, the project was supported by a huge range of individuals and media companies who donated time, expertise and equipment.

Wootten explained the reasoning behind the concept: “We know that this industry is a global industry and if you’re a student we want to give you the best opportunity to build the peer-to-peer network you will be working with and collaborating with as you enter the sector. To have that opportunity at an early stage in your career seems like a really powerful thing to enable and empower students to have. We also know that industry wants to spot the best and up-and-coming talent – we’ve got skill shortages in particular areas – so industry can say ‘oh X in Mumbai you are incredible’ or ‘Y in Brisbane we saw your fantastic work and we would like to talk to you’ … then actually that becomes super powerful. What we’re trying to do is to create this global ecosystem that integrates fully students, tutors and industry so that we work as one and those gaps don’t exist anymore.”

Niki Whittle, Principal Product Manager with VizRT, was one of those who donated their time and worked as Delivery Lead. “My focus was on removing blockers, keeping everything aligned, and making sure all teams stayed in sync so we could successfully get to air. I saw this as a great opportunity to help students connect with the industry in a more meaningful way. I talk to a lot of graduates who are still struggling to find their footing, and there’s a real risk that they’ll move into other industries. If we can help them build connections earlier, not just with fellow students around the world but also with industry professionals, I hope it can ease that uncertainty and help them feel more confident in their career paths. It was also such an ambitious project, and I wanted to be part of that challenge!”

With 913 students, university staff and volunteers around the world contributing to the Production, the end credits lasted over four and a half minutes. Many IABM member companies donated time, equipment and staff to the project.

Another who volunteered their time in support was Scott Kerr, Lead Solution Architect for Sky. Scott was one of the Tech Leads on the project, though he “Did a little bit of everything where I could lend a hand – everything from having input on the design, setting up the control rooms and even preparing the graphics for playout and the end credits.”

Russell Trafford-Jones, Industry Engagement Manager with Techex, was another tech lead on the project. Techex supplied essential equipment, as Russell explains: “tx edge, a software gateway from Techex, played a critical role in monitoring all incoming feeds and switching them into the playout system via tx darwin. tx darwin is Techex’s modular platform for live video manipulation, which played two essential roles ahead of playout: rebuilding incoming transport streams to correct timing errors ahead of decoding, and format conversion. This allowed playout to run in our house format of 1080p50 and to deal with a whole range of encoders from a variety of universities.

“After playout, tx darwin wrote to a Time Addressable Media Store (TAMS) which played a pivotal role in allowing universities around the world to instantly access footage akin to a classic ‘growing file’ post workflow.”

Levira Media Services also provided physical and human resources to the project. Stephen Stewart, COO, explains, “Levira Media Services was at the sharp end, ensuring that all the multiple live feeds from across the globe (as well as the live studio inputs from Ravensbourne’s studios and event space) were delivered as a robust playout stream.  This involved taking the signal hand-offs from AWS and Techex; working with the schedulers and studio crews; and reacting to live compliance instructions in the case of any live non-compliant content appearing from the universities.”

The project’s central hub was based at Ravensbourne University in London. Their students not only produced the whole 24-hour broadcast, they also contributed an hour of content themselves. Levira set up a playout control room and as Stewart reflected: “It was Levira Media Services’ first major broadcast project in the UK – starting with a 24-hour live broadcast with 20 outside sources from around the globe with multiple timezones, controlled via a university corporate and Wi-Fi network – and it certainly proved that the Levira platform and team is capable of pretty much anything.”

For all those involved the project proved an unforgettable experience. Third year Ravensbourne Digital Television Production student Suzie Morrow, who was a Script Supervisor and Co-Producer said, “This was a wonderful experience, and I feel as though I have gained a great deal! The On Air experience was a whirlwind journey, very fun but also very stressful at times. It was such an important test of my classmates and my skills, and it has taught us so much.”

Niki Whittle reflected, “I’ve learned so much myself, especially about the cloud platforms we used, and got to work with an incredible group of highly skilled professionals. We faced plenty of challenges along the way: cloud production realities versus the dream, logistical hurdles, last-minute university changes. But seeing it all finally come together, and seeing how proud the students were after delivering it, was truly rewarding. We had 24 hours of students and professionals around the world really championing each other, collaborating, and bringing that innovative spirit to life, and that’s exactly the kind of industry I want to see!”

Scott Kerr: “It was a fantastic experience – and certainly a big challenge with it happening over 24 hours. I think I counted that I was up for a full 43 hours in the end, which is something I will not be trying again in a hurry! The general feeling at Ravensbourne on the day was fantastic and seeing how students across the globe were working together, congratulating each other was a joy to see.”

“The event was such a positive experience and reinforced for me what we all know – that everyone is always learning,” said Russell Trafford-Jones. “The non-student volunteers were all pushing themselves out of their day-to-day comfort zone, either reprising roles they had earlier in their career or trying something new. And, of course, even with the decades (centuries!?) of combined experience on the wider team, dealing swiftly with all the problems that twenty four hours of live TV throws at you is always fun, never flawless and leaves you eager to do it all over again.”

For Stephen Stewart, “The thing that struck me most was the degree of respect shown by and between the students, as well as the industry professionals. In the whole 24 hours, I didn’t hear a single raised voice; a single cross word; or a single moan or groan. If these students are the future of our industry – I think it’s in safe hands.”

And what of the person whose original idea started the whole thing off? Wootten has already confirmed that ‘On Air’ will return in 2026 (you can read her own reflections opposite) and IABM will once again be offering our support. And we’d love to see our members getting even more involved next year.

The production leveraged existing broadcast facilities at various universities to simplify operations as much as possible. The SRT output was routed via AWS MediaConnect to the On-Air AWS environment. Techex tx edge was used to receive the signals and route them into playout. If frame rate conversion was required, the signal was passed through tx darwin, which utilised InSync’s FrameFormer to convert to the 1080p50 house format. Playout was provided by Levira using the BCNEXXT VIPE playout system which loaded assets after compliance viewing in Tanooki. For onward distribution to YouTube, tx darwin was again used, simultaneously writing the output to a dedicated TAMS store set up specifically for the event to facilitate segment creation and highlight generation. Everything was overseen by TAG VS’s monitoring and multiviewer software platform. A temporary control room including equipment from VizRT was built in a teaching room at Ravensbourne to host playout and MCR operations.

OOONA EDU: (Em)Powering the Next Generation of Localization Talent

Andrew Garb

 

 

Andrew Garb is Global Account Manager at OOONA, where he leads engagement for several of the company’s offerings: EDU, the Testing and Training Platforms, AVTpro, OnStage, as well as the InSync podcast. Drawing on his background in management and business development, Andrew is highly adept at bringing web-based software solutions to market – internally and externally. At OOONA, he also oversees the integration of the company’s internal project management software, ensuring operational alignment across teams. Previously the COO of Bgate Software Solutions, his role involved developing and implementing bespoke solutions for business clients. Today he applies the same strategic mindset to cultivate partnerships and drive growth and innovation across the media localization ecosystem.

The media localization industry has been built on talent. From the early subtitlers working with VHS tapes and clunky software to skilled dubbing adapters and actors bringing authenticity and cultural nuance to global stories, people have always been at the heart of the craft.

But the landscape has never stood still. The once-fragmented ecosystem of local studios consolidated into multinationals with centralized workflows. Then cloud technology enabled remote collaboration and global workforces. Today’s media localization pipelines involve advanced tools, metadata management, accessibility requirements, day-and-date releases, and consistency across episodes, seasons and languages.

It is the same creative talent that makes all this possible, though the environment they work in is completely different. Online tools, automated quality checks, language technologies, real-time collaboration, multiple file formats and strict compliance guidelines have become standard parts of the media localization workflow.

Talent needs to continually adapt to new systems, tools and expectations, often taking on newly defined roles. The modern media localization expert sits in a dynamic environment where technology meets creativity. Success today requires flexibility, technical fluency, and a curiosity and openness to new practices. In short: continuous upskilling.

The rise of online courses

Upskilling comes in many forms, but shorter online courses are probably the hallmark of our times. The sharp rise in the popularity of online courses across all possible subjects stands to reason, given the many advantages they offer.

  • Ease and flexibility: Online courses make training more accessible, without disrupting work or life commitments.
  • Bite-sized learning: Shorter modules fit today’s fast-paced culture, making the learning process more manageable.
  • Self-paced study: Learners can move at their own rhythm, spending more time where needed, benefiting from utmost flexibility even in the busiest of lifestyles.

OOONA EDU: Designed for the media localization workforce

To support the evolution in the media localization industry and the need for a steady supply of talent, OOONA, a leading platform of cloud-based localization tools, developed OOONA EDU, an educational platform designed by media industry experts for industry professionals.

OOONA EDU courses are aligned with real-world workflows and integrated into the same user interface used by large and small media localization vendors, broadcasters and freelancers worldwide. Graduates gain not only theoretical knowledge but also hands-on practical experience that equips them with the confidence to go after new media localization roles or expand existing ones.

OOONA EDU empowers talent at every level:

  • Students entering the industry who need to learn industry workflows, lingo and expectations, and stand out in a crowded marketplace, certified for their job readiness.
  • Translators transitioning into audiovisual localization, who want to expand their services and build new revenue streams.
  • Experienced audiovisual professionals who want to diversify their skills and undertake more types of tasks within their vertical.
  • Content creators who want to localize their own videos into other languages and make them accessible to all audiences.

Courses that reflect the industry’s needs

OOONA EDU offers a growing portfolio of courses across key media localization sectors.

All about subtitling

  • Fundamental rules and best practices for subtitle timing, reading speed, layout and segmentation
  • Translating constrained text across languages and working within templated subtitle workflows
  • Understanding subtitle file formats, QC processes, and version control
  • Subtitling technical videos
  • Specialized mini courses on how to improve grammar for subtitling, or how to subtitle with sensitivity

Media accessibility

  • Making content accessible for the d/Deaf and hard-of-hearing
  • Crafting clear, descriptive scripts for blind and low-vision viewers
  • Exploring creative audio description in broadcast and museums
  • Mini courses and downloadable resources on subtitling sounds and punctuation for subtitling

Industry and career development

  • Free courses on OOONA’s state-of-the-art localization management and production toolkit
  • Content security and cyber security training
  • Guidance on working for language service providers and building a successful career
  • Introductory training for emerging roles such as AI speech editors

From foundational skills to advanced professional tasks, the platform supports lifelong learning and development, and directly connects learning to employable skill sets.

OOONA Training Platform: Custom onboarding and training

In addition to OOONA EDU, the OOONA Training Platform provides companies with a bespoke solution for onboarding and training new talent efficiently. Built on Moodle and fully integrated with OOONA’s suite of state-of-the-art tools, the platform allows companies to:

  • Create their own custom courses tailored to their workflows and internal processes.
  • Train teams efficiently, ensuring quality and adherence to company standards.
  • Onboard new hires quickly, reducing the learning curve in complex subtitling and accessibility tasks.
  • Track progress and performance, giving managers insights into team readiness and areas for improvement.
  • Enable continuous learning for staff, with on-demand access to training materials.

Talent will always be the heart of media localization

Technology can scale workflows and speed up delivery, but it cannot replace cultural understanding, creative nuance, or imagination and passion for storytelling. The future of creative sectors such as media localization depends on people – professionals who adapt and innovate, who confidently take up new tools and challenges, and remain competitive.

Platforms like OOONA EDU and the OOONA Training Platform provide the education and hands-on practice professionals need to thrive in the media localization market. By investing in training and upskilling, the industry can ensure quality localization, quick onboarding, and a workforce prepared for the challenges of today and tomorrow.

The world is watching more stories than ever before. We at OOONA are proud to help make sure they are understood, accessible and truly global thanks to the talented people who bring them to life in every language.

 

 

 

Red Bee Media – Sustainability in the Media Broadcast Industry: Shaping a World Where Future Generations Can Thrive


Gabija Jonsson, Head of Communication, Red Bee Media

For years, climate change, pollution, over-consumption, and dwindling natural resources have dominated our daily conversations. Driven by passionate individuals and non-governmental organizations, world leaders are now rolling out bold strategies and implementing new laws, transforming how we consume and holding major corporations accountable. The media broadcast industry plays a pivotal role in this story. While it keeps the world informed and entertained, it also leaves a significant carbon footprint, from the technology powering our screens and the creation of content to the energy-hungry smart devices in our households. Today’s rising media broadcast professionals are drawn to employers who showcase clear sustainability objectives. Making environmental values part of a company’s DNA is not just a badge of honor, but also a way to attract rising talent.

The Environmental Impact of Media Broadcast Operations

The broadcast and media sector has long been very energy-intensive, relying on extensive data centers, large transmission networks, and frequent equipment updates. These activities produce significant carbon emissions and electronic waste. Recognizing this impact, industry leaders are now adopting innovative approaches to reduce their environmental footprint.

This brings us to a question: what is our industry doing to protect our beautiful but fragile planet?

Energy Efficiency

As technology in broadcast equipment rapidly evolves, the constant upgrading of everything from cameras to switchers fuels a mounting wave of electronic waste. In response, broadcasters are embracing energy-saving solutions, from LED lighting and virtual studios to low-power servers. Some media companies are even powering their facilities with renewable energy like solar and wind, taking bold steps to break free from fossil fuel dependence. For example, Netflix has aimed to achieve net-zero greenhouse gas emissions by 2025, integrating renewable energy sources and adopting rigorous carbon offsetting strategies. Microsoft has set an even more ambitious target, to become carbon negative by 2030, backed by a $1 billion climate innovation fund.

Reduced Travel and Virtual Production

When I started my corporate career in 2014, the stories of senior professionals flying to another continent for a one-day meeting already seemed like old-time tales. After COVID, the industry has transformed completely, with the realization that virtual work and online team meetings, gathering colleagues from different parts of the world, are just as effective as face-to-face interactions. Technologies like high-speed internet and cloud computing facilitate collaboration without the environmental costs of physical presence. We must admit, the joy of waking up at 4 AM, squeezing all essentials into a tiny carry-on bag and rushing to the airport fades away after the first years of corporate travelling. The adoption of virtual sets, remote editing, and virtual meetings has significantly reduced travel-related emissions. Companies have started implementing various internal practices, like showcasing the carbon emissions of each flight and suggesting more sustainable route options.

Environmentally Friendly Data Centers and Cloud Solutions

Shifting to cloud-based workflows means fewer on-site data centers guzzling energy. Many cloud providers now champion sustainability, tapping into renewable energy and fine-tuning data storage to shrink their environmental footprint. Yet, as we embrace cloud solutions and high-resolution streaming, the energy needed to power and cool these sprawling data centers keeps climbing. There is still much room for improvement. The 2024 CrowdStrike IT outage is a great reminder of how dependent we have become on this infrastructure: nearly 8.5 million systems crashed and could not be restarted. The disruption rippled far beyond broadcasting, halting operations in hotels, airlines, airports, gas stations, retail stores, and even core institutions like banks, governmental offices and hospitals. The global financial toll soared to an estimated US$10 billion. It is also important to note that end-user devices like Smart TVs and mobile phones account for the lion’s share of streaming’s energy use, sometimes as much as 80 per cent. While media vendors cannot control these devices directly, every upstream improvement, such as using more efficient codecs or smarter adaptive streaming, lightens the load on devices and helps shrink the world’s overall energy use.

AI: An Environmental Savior Or A Power-Hungry Tool?

The hottest topic in the industry is, without a doubt, AI. While AI offers various solutions for environmental management, such as optimizing energy grids and improving resource efficiency, the rapid development and deployment of powerful generative AI models come with environmental risks, including increased electricity demand, water consumption, and the extraction of critical minerals needed for AI data center infrastructure. Training generative AI models with billions of parameters, such as OpenAI’s GPT-4, can demand a staggering amount of electricity, which leads to increased carbon dioxide emissions and pressure on the electric grid. According to the Powering Intelligence: Analyzing Artificial Intelligence and Data Center Energy Consumption report, answering a query with AI requires about ten times the electricity of a traditional search: roughly 0.3 watt-hours for a Google search versus 2.9 watt-hours for a ChatGPT query. Another report states that ChatGPT consumes 1.059 billion kilowatt-hours annually just to answer questions – electricity estimated to cost almost $400,000 a day, or $139.7 million a year.
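The figures cited above are internally consistent, as a quick back-of-envelope check shows. Note that the implied electricity price is derived from the article’s own numbers, not an independent figure:

```python
# Back-of-envelope check of the energy figures cited above. The per-query
# and annual numbers come from the cited reports; the electricity price is
# inferred from them, not an official figure.

google_wh = 0.3          # Wh per traditional Google search (cited)
chatgpt_wh = 2.9         # Wh per ChatGPT query (cited)
annual_kwh = 1.059e9     # kWh/year for answering queries (cited)
annual_cost = 139.7e6    # USD/year (cited)

ratio = chatgpt_wh / google_wh            # ~9.7x, i.e. "about ten times"
implied_price = annual_cost / annual_kwh  # ~$0.13 per kWh
daily_cost = annual_cost / 365            # ~$383k/day, "almost $400,000"

print(f"ChatGPT/Google energy ratio: {ratio:.1f}x")
print(f"Implied electricity price: ${implied_price:.3f}/kWh")
print(f"Daily cost: ${daily_cost:,.0f}")
```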

The rise of sustainability-themed content

There has been a huge increase in documentaries, short films, TV shows, and educational programs that promote environmental awareness and enjoy huge audience success. Recent titles like My Octopus Teacher, Don’t Look Up, Cowspiracy: The Sustainability Secret, and Tidying Up with Marie Kondo educate audiences about climate change, wildlife, the meat industry, sustainable lifestyles, over-consumption, conservation, and sustainability practices. Many of these shows and documentaries also receive government grants and funds for production. And while location shoots require extensive travel, expensive accommodations, large power generators, and set materials that contribute to environmental waste, we have also seen films like Flow, produced using only free animation software, receive the industry’s highest awards, such as the Oscars.

Conclusion

To sum up, media companies are embracing sustainable procurement, choosing eco-friendly equipment, recyclable materials, and responsible waste disposal. Sustainability has moved from the sidelines to center stage in the media broadcast industry, now seen as essential both to everyday operations and to future innovation. By embracing energy-saving technologies, fostering virtual teamwork, and championing environmental awareness, broadcasters can make a real difference for the planet. For technology vendors, sustainability is more than a box to tick; it is a path to resilience and a powerful edge in a competitive market. Most importantly, we can all adopt small, environmentally conscious habits in our daily private and professional lives.

Limecraft – Smooth Operations – Environmental Sustainability thrives on Operational Excellence


Maarten Verwaest, Limecraft

Across Europe, the conversation about environmental sustainability in the media industry is no longer superficial. It is grounded in measurable data, and in an urgent need to address the footprint of the content we create. BAFTA Albert reports (https://wearealbert.org/wp-content/uploads/2025/11/ACCELERATE-2025-BAFTA-albert-report.pdf) that one hour of UK-produced television generates an average of 16.6 tCO₂e, with scripted drama rising to 48.7 tCO₂e per hour—the highest of all genres. These numbers serve as a reliable benchmark for understanding the wider European landscape.

According to the European Audiovisual Observatory, around 1,800 European producers deliver 16,000 hours of scripted fiction each year. Applying the drama benchmark of 48.7 tCO₂e per hour, Europe’s scripted sector alone is responsible for approximately 800,000 tonnes of CO₂e annually. Fiction, although a relatively small part of the total volume, carries a disproportionate share of the environmental load.

To understand the full picture, we need to look beyond scripted content. Europe produces an estimated 200,000 hours of television across all genres each year. When we apply the multi-genre average of 16.6 tCO₂e per hour, the combined footprint for European TV production is over 3 million tonnes of CO₂e per year, with scripted accounting for roughly a quarter of the total impact.
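These totals are simple multiplications of the cited benchmarks, and can be reproduced in a few lines as a sanity check:

```python
# Reproducing the footprint arithmetic from the figures cited above
# (BAFTA albert benchmarks, European Audiovisual Observatory volumes).

drama_tco2e_per_hour = 48.7   # scripted drama benchmark, tCO2e/hour
avg_tco2e_per_hour = 16.6     # multi-genre average, tCO2e/hour
scripted_hours = 16_000       # European scripted fiction, hours/year
total_hours = 200_000         # all-genre European TV, hours/year

scripted_total = scripted_hours * drama_tco2e_per_hour  # ~779,000 t, rounded to ~800,000 in the text
overall_total = total_hours * avg_tco2e_per_hour        # ~3.32 million t
scripted_share = scripted_total / overall_total

print(f"Scripted fiction: {scripted_total:,.0f} tCO2e/year")
print(f"All genres:       {overall_total:,.0f} tCO2e/year")
print(f"Scripted share:   {scripted_share:.0%}")
```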

These figures highlight two important realities. First, production itself remains the dominant source of emissions, led by travel, location work, and energy-intensive studio operations. Second, while post-production contributes a smaller share (15–30%, depending on the genre), the Limecraft sustainability analysis shows that unnecessary file movement, repeated transcoding, and fragmented toolchains add a significant amount of avoidable energy use.

If the industry wants to reduce its environmental footprint at scale, the opportunity is clear: simplify the supply chain and eliminate needless digital inefficiencies, minimize file transfers and use cold storage as much as possible, and use AI responsibly. Even modest operational improvements become significant when multiplied by nearly two hundred thousand hours of content each year.

Building a More Sustainable and Reliable Production Workflow

To reduce the industry’s environmental footprint at scale, producers need to be mindful about each step that can simplify operations and eliminate waste. Much of the inefficiency during production and post-production stems from unnecessary complexity: too many tools, too many copies, and too many file movements. By adopting a cleaner and more unified approach to workflow design, producers can cut emissions while improving reliability, turnaround times, and operational stability. The following best practices provide a starting point.

Simplify the Supply Chain and Eliminate Digital Inefficiencies

Many productions rightfully adopt the “3-2-1” strategy—three copies, two technologies, one offsite. While the principle is sound, the way it is implemented often leads to sprawling workflows with overlapping tools and duplicate processes. This becomes especially problematic when multiple companies collaborate on the same production.

Instead of shipping high-resolution media between parties, producers should use a proxy-based workflow as the default. As shown across several Limecraft case studies, proxies maintain creative visibility without the hassle and the cost incurred by moving hundreds of terabytes across facilities or the cloud. A simplified, shared supply chain cuts file transfers and turnaround time, reduces human error, and makes collaboration far more efficient.
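The bandwidth argument for proxies is easy to quantify. The sketch below uses assumed bitrates and volumes, not figures from the case studies, purely to show the order of magnitude:

```python
# Illustrative only: the bitrates and hours below are assumptions, not
# Limecraft figures. Compares the data moved when exchanging camera
# originals versus lightweight edit proxies.

hours_of_rushes = 400    # assumed total rushes for a drama production
camera_mbps = 400        # assumed high-resolution acquisition bitrate
proxy_mbps = 8           # assumed H.264 proxy bitrate

def terabytes(hours: float, mbps: float) -> float:
    """Data volume in TB for a given duration and bitrate."""
    bits = hours * 3600 * mbps * 1e6
    return bits / 8 / 1e12

full_res_tb = terabytes(hours_of_rushes, camera_mbps)  # ~72 TB
proxy_tb = terabytes(hours_of_rushes, proxy_mbps)      # ~1.4 TB

print(f"Full-resolution transfer: {full_res_tb:.1f} TB")
print(f"Proxy transfer:           {proxy_tb:.2f} TB")
print(f"Reduction factor:         {full_res_tb / proxy_tb:.0f}x")
```

At these assumed bitrates the reduction tracks the bitrate ratio (50x), which is why proxy exchange removes most of the transfer energy without touching the creative process.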

Minimize Copies, Transcodes, and File Transfers

Every extra tool in the tech stack introduces another conversion, another export, and another transfer. Internal review, external review, editing, subtitling, and delivery are often handled by separate systems even though they rely on the same underlying media. An integrated collaboration platform removes these unnecessary layers.

When review, logging, transcription, subtitling, and post-production all happen in a single environment, the result is fewer ancillary copies and a considerable improvement of the security perimeter. Reducing these hops not only saves energy—it avoids waiting time and improves overall reliability.

Use Cold Storage as a Primary Tier

Rather than keeping high-resolution files on SSDs or spinning disks—or transferring entire repositories into cloud storage—post-production supervisors should consider cold storage as the primary tier.

Cold storage offers the lowest environmental impact, it is nearly impossible to hack, and it is surprisingly effective when exposed through a proxy-based workflow. Producers can access and work with material as easily as with online storage, while keeping the high-resolution originals offline until absolutely needed. Case studies like Onze Natuur (https://www.limecraft.com/cases/hotel-hungaria-onze-natuur/) show that this approach cuts cost, reduces risk, and avoids the energy waste of keeping large volumes of media permanently “warm.”
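The cost side of the cold-storage argument can be sketched the same way. The per-gigabyte prices below are assumptions roughly in line with published cloud list prices, and retrieval or restore fees are deliberately left out; check your provider’s current rates before drawing conclusions:

```python
# Indicative cost comparison of keeping a media library "warm" versus in a
# deep-archive tier. Prices are assumptions in the ballpark of public cloud
# list prices; retrieval fees and minimum-retention charges are ignored.

library_tb = 500          # assumed size of the high-resolution library
months = 24               # assumed retention period for the production

hot_gb_month = 0.023      # standard object storage, USD per GB-month
cold_gb_month = 0.00099   # deep-archive tier, USD per GB-month

def storage_cost(tb: float, price_per_gb_month: float, months: int) -> float:
    """Total storage cost in USD over the retention period."""
    return tb * 1000 * price_per_gb_month * months

hot_cost = storage_cost(library_tb, hot_gb_month, months)    # ~$276,000
cold_cost = storage_cost(library_tb, cold_gb_month, months)  # ~$11,900

print(f"Hot storage:  ${hot_cost:,.0f}")
print(f"Cold storage: ${cold_cost:,.0f}")
```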

Use AI Responsibly

AI now consumes a meaningful and increasing share of the operational budget, yet many productions use it for tasks that could be solved more efficiently by using existing upstream metadata.

Instead of applying heavy AI models to “interpret” images, producers should leverage planning documents, production reports, script notes, and casting data already generated during filming. This improves accuracy, reduces processing load, structurally avoids privacy breaches, and minimizes emissions associated with compute-intensive AI pipelines.

Summary

By designing a tech stack that is as simple as possible – but not one bit simpler – producers gain both sustainability and reliability. A reduced number of tools means fewer failure points. A proxy-based workflow minimizes storage and transfer needs. Using existing metadata instead of AI improves accuracy and increases the chances of long-term reuse. Small changes compound across the nearly 200,000 hours of television produced each year in Europe—and they represent one of the fastest, most achievable paths toward a more sustainable media industry.

 

Grass Valley – Elastic Compute for a Sustainable Media Industry


Ronny Van Geel, Director of Product Marketing, Grass Valley

 The media industry has a paradox at its core. It’s an industry built on light, color and imagination, yet behind the scenes, it’s powered by one of the heaviest infrastructures in technology. Every second of live production consumes compute cycles, cooling, transport and power. The creative output is ephemeral; the energy cost is constant.

That contradiction has become impossible to ignore. As audiences demand richer experiences and round-the-clock content, the carbon footprint of production quietly grows. The challenge is no longer only how to make great content, but how to make it responsibly, without losing the immediacy, emotion and precision that define live storytelling.

When Elasticity Meets Efficiency

True sustainability doesn’t come from doing less; it comes from doing smarter. The biggest environmental gains aren’t found in recycling or offsetting; they’re found in optimization. This is where elastic compute changes the equation.

Instead of maintaining racks of always-on hardware, elastic compute lets production teams activate only what they need, when they need it. The moment a function isn’t required, it powers down. When it’s time to scale, capacity expands instantly. Energy use becomes proportional to creativity, not to idle time.

Grass Valley’s AMPP OS was designed precisely for this: to virtualize production functions so they can be dynamically orchestrated across available compute resources. The result is efficiency by design. An architecture where flexibility and sustainability are inseparable.

For decades, the industry accepted a silent inefficiency. Hardware was dimensioned for the “worst day”, the biggest show, the busiest feed, the peak load. Those systems then ran all year, consuming power even when pushing black video through the chain. That model once made sense; reliability required redundancy.
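The scale of that “worst day” inefficiency is easy to illustrate with a toy calculation. Every number below is an assumption chosen for illustration, not a Grass Valley figure:

```python
# Toy comparison (all numbers assumed): a peak-dimensioned rack running
# 24/7 versus elastic compute activated only for actual productions.

rack_kw = 12.0               # assumed draw of an always-on production rack
hours_per_year = 8760
always_on_kwh = rack_kw * hours_per_year          # ~105,000 kWh/year

productions_per_year = 250   # assumed number of shows
hours_per_production = 6     # assumed on-air + rehearsal time per show
elastic_kwh = rack_kw * productions_per_year * hours_per_production

print(f"Always-on: {always_on_kwh:,.0f} kWh/year")
print(f"Elastic:   {elastic_kwh:,.0f} kWh/year")
print(f"Saving:    {1 - elastic_kwh / always_on_kwh:.0%}")
```

Under these assumptions the elastic model uses less than a fifth of the energy, because capacity sized for the busiest show no longer burns power pushing black video the rest of the year.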

But the economics of overcapacity and the ethics of energy waste no longer align. The media industry, which shapes public perception on global issues, can’t afford to lag behind on the one topic that defines our shared future: sustainability.

A Smarter Architecture for Creative Freedom

Elastic compute doesn’t just reduce waste; it amplifies possibility. When production tools become virtual, every node of compute can take any form. A production switcher today, a multiviewer tomorrow, a replay system next week. This shape-shifting flexibility allows creative teams to experiment, to reconfigure workflows on the fly and to scale for special events without overbuilding.

And the transition doesn’t require abandoning what already works. AMPP OS applications integrate seamlessly into existing Grass Valley infrastructures, extending their life and reducing the need for new hardware investment. Organizations can evolve step by step, guided by operational logic rather than by capital cycles.

Of course, not every team has in-house cloud or DevOps expertise. That’s why Grass Valley introduced GV Hosted. It’s a media-grade compute environment operated by Grass Valley engineers and powered by AWS. It removes the technical burden of infrastructure management, allowing production teams to adopt elastic compute immediately.

The idea is simple: you control your productions; we manage the environment beneath them. It’s a bridge into software-defined production that combines reliability, security and performance with a dramatically smaller environmental footprint.

From Efficiency to Integrity

The sustainability conversation in media often drifts toward image: badges, pledges and metrics. But real change happens in architecture. Elastic compute doesn’t just tick the ESG box; it redefines the physics of production itself. By linking energy use directly to creative activity, it aligns ecological responsibility with operational sense.

That alignment is the real breakthrough: a system where doing the right thing is also the efficient thing.

The creative spark will always consume energy. But how we manage that energy is a choice. With AMPP OS and the hybrid pathways that include GV Hosted, Grass Valley offers a way to keep the lights of storytelling bright while dimming the waste behind them.

Sustainability is not a constraint on creativity; it’s its next frontier. The lighter our infrastructure becomes, the freer our stories can travel.

Tiledmedia – A Look towards Sports Streaming in 2030


 

Our industry is in choppy waters, as we have seen job cuts in streaming and the wider media world over the last couple of years. While streaming is increasingly picked over linear broadcast, the pressure on sports streaming services mounts as they wrestle with consumer churn, large-scale piracy, and streaming ad revenue not fully replacing linear broadcast ad revenue. Add an unstable geopolitical situation and fears of a recession to this mix, and it is safe to say that streaming services worldwide face a precarious situation. This article aims to identify key factors and trends, and to explain some of the dilemmas that sports platforms both large and small are dealing with. Special attention will be given to the role of AI within streaming services and the push for more personalization on the consumer side.

Picking the Right Lane

Sports remain appealing for streaming services in terms of engaging a core base of fans, which means leagues still expect broadcasters and streamers to pay more for content rights every cycle. A noteworthy point is that when sports games are available on both broadcast and streaming, broadcast audiences still outnumber streaming viewers; it will be interesting to see how this ratio develops. Only a few companies can afford the top-tier sports franchises like the NFL, Premier League and NBA. These premium formats are well-established brands, and both traditional broadcasters and tech giants are vying to obtain these rights. Still, many other sports franchises, with smaller audiences, less global reach and fewer channels for monetization, need to make a different calculation.

Some sports rights holders go direct-to-consumer (D2C); examples include Ligue 1, F1, and the ATP. The reasons for going D2C vary: giving leagues more control, increasing fan engagement, understanding consumer behavior through data ownership, and cutting out (the cost of) the middleman. D2C works differently from third-party broadcasting and streaming arrangements because it doesn’t adhere to the fixed four- to five-year rights negotiation cycle. In addition, the cost of setting up a D2C channel has come down significantly, mainly due to a more modular arrangement of the components that form a live streaming system.

Other (e-)sports leagues and competitions make use of the vast reach of non-traditional (social) media giants like YouTube and Twitch, as well as Douyin (TikTok) and Kuaishou in China. These platforms have carried major events streamed for free, such as the Football World Cup in Brazil and China, the Bundesliga on YouTube in the UK, and the E-Sports World Cup on Twitch.

Tough Times Force Tough Choices

We see very intense competition, with tight margins, expensive content rights and demanding consumers. Platforms basically have three dials they can turn: revenue (total & ARPU), operational costs (TCO), and content rights. Now more than ever, streaming platforms are trying to do more with less. Reducing operational cost has been a major issue that platforms are trying to manage through lay-offs, automation and the application of AI throughout the end-to-end chain.

Regarding AI, perhaps its most used application by video streaming platforms so far has been for content recommendation, which can be considered low-hanging fruit. We also see AI being applied to track down pirated live streams and flag them so that they are taken down quicker.

Turning to ads, ad production costs and inflation have outpaced ad revenue (CPM) growth for almost a decade now. It is no wonder that streaming services have started using AI for ad insertion; it helps with Server-Side Ad Insertion (SSAI) and, increasingly, Server-Guided Ad Insertion (SGAI). However, even with the best AI tooling to optimize ad content for specific audiences, inserting effective, seamless and unobtrusive ads also depends on other components in the chain, like the client-side video player. In our view, the ad insertion method that will dominate sports streaming in the future is server-guided overlay ads, which can optimally and dynamically factor in viewer preferences, content type, real-time events, portals to marketplaces and more.

Simultaneously, sports streaming platforms aim to increase personalization, interactivity and storytelling for their viewers. Features like multiview, shoppable content, and versioning are essential to realizing this goal. These innovations can help platforms reduce churn and increase stickiness. The key here is to build a lasting relationship with viewers and cater to modern viewing behaviors. From our perspective as a software vendor here at Tiledmedia, we aim to accommodate this trend by offering a Video Player built from scratch. We think this enables unbeatable playback QoE, unmatched Multiview and unique overlay ads; key factors for sports streaming services building a platform that can weather the storm in the industry.

Scrubbing Forward in Time

With all this in mind, where does that leave the industry five years from now? AI is obviously the most eye-catching development, but it’s important to note that streaming services explicitly see AI as a tool, not a goal.

For a look into the future, we can see what Netflix – a relative newcomer to the sports streaming vertical – is saying: the focus now is on applying AI to previsualization, shot planning, VFX, and post-production. Netflix has also announced innovations like gaming on TV, interactive merchandise and experiences, and physical events.

Gaming platforms like Roblox are expanding in the other direction, working with sports leagues to monetize and enrich their virtual world with franchises that people want to follow like NFL and FIFA. And we also see examples of virtual music concerts with massive audiences that are streamed on Fortnite, PUBG and other platforms. One can safely assume that sports leagues are also looking at these gaming platforms to target younger audiences through live streams of sports games.

The sports streaming industry is trying to merge digital content with real-life experiences. The hurdles for consumers to access a marketplace for sports related shopping during live sports streams are increasingly fading away, and this trend will only continue.

As sports streaming is becoming the go-to channel for consuming sports content, this decade will be defined by a rocky road towards a new equilibrium. We at Tiledmedia are eager to help the industry move forward, supporting platforms to provide a robust, personalized, and intuitive streaming experience that can drive monetization in this new era. Sports fans are still as passionate about their teams as ever, all we need to do is adapt to their evolving behavior.

 

Mobii – Democratizing Premium Content Personalization: How Synthetic Intelligence Unlocks D2C Revenue Without Production Cost Barriers


Greg Schultz, CEO, Mobii Systems

 

The D2C Paradox

Sports organizations entering the direct-to-consumer streaming market face a fundamental economic paradox. Audiences accustomed to Netflix-level personalization expect tailored viewing experiences – following favorite players, accessing alternative camera angles, choosing commentary styles, and controlling their content journey. Yet these same organizations must deliver these experiences using broadcast-era production models where creating multiple personalized streams requires proportionally scaling production resources, crews, and infrastructure.

The mathematics are brutal: if producing one premium broadcast feed requires a certain investment in equipment, personnel, and expertise, creating ten personalized streams traditionally demands ten times those resources. For most sports properties, this economic reality makes personalization a luxury reserved for the largest organizations with the deepest pockets. The democratization of premium content personalization – making it accessible regardless of organizational size or existing infrastructure – represents one of the broadcast industry’s most pressing challenges.

The revenue opportunity being left on the table is substantial. Passionate fan segments willing to pay premium subscription fees for specialized content remain underserved. Sponsor activation opportunities through targeted streams go unrealized. The competitive differentiation that true personalization offers in an increasingly crowded D2C market stays out of reach for all but the most well-resourced organizations.

 

The Traditional Cost Barrier

To understand why personalization has remained economically prohibitive, consider the traditional broadcast production model. Creating a single high-quality sports feed requires camera operators, video switchers, graphics operators, replay coordinators, audio engineers, and directors making split-second decisions throughout an event. Each personalized stream variation – whether following a specific player, offering tactical analysis, or providing alternative commentary – historically required duplicating significant portions of this human infrastructure.

This linear cost scaling creates an insurmountable barrier for most organizations. While the flagship broadcast feed justifies the investment, each additional personalized stream struggles to generate sufficient incremental revenue to cover its proportional production costs. The result? Sports properties default to the one-size-fits-all broadcast paradigm, leaving niche audiences underserved and missing opportunities to convert casual viewers into passionate, paying subscribers through targeted experiences.

The democratization challenge extends beyond pure economics. Organizations lacking traditional broadcast infrastructure – emerging leagues, smaller federations, regional sports properties – find themselves doubly disadvantaged. They lack both the resources to create personalized content and the existing production frameworks to build upon. Yet these organizations often have highly engaged niche audiences who would enthusiastically embrace personalized viewing options if economically feasible to deliver.

 

The Dynamic Cloud Mixer: Democratization Through Intelligent Automation

The breakthrough enabling personalized content democratization is Mobii’s Dynamic Cloud Mixer (DCM), powered by Synthetic Intelligence – a production automation approach that occupies the strategic middle ground between manual operation and unpredictable generative AI. Unlike generative AI, which creates new content with potential variations in quality and accuracy, DCM’s Synthetic Intelligence synthesizes existing inputs (video, audio, data) based on predetermined rules enhanced by real-time data analysis.

This distinction proves crucial for broadcast applications where consistency and reliability are non-negotiable. DCM makes production decisions with the predictability required for live sports while leveraging data to identify and emphasize the most compelling aspects of content across multiple simultaneous outputs. The economic transformation is profound: creating multiple personalized streams from the same source content without proportional increases in production costs.

The technical foundation of DCM starts with frame-accurate synchronization of diverse inputs – multiple camera feeds, various audio sources (ambient sound, commentary, team communications), and real-time data streams from sports APIs. This synchronized media ecosystem enables data-driven production decisions. When a specific player makes a significant play, DCM understands which cameras captured it, which audio sources provide relevant context, and which graphics should be displayed – all without human intervention.

The platform’s Synthetic Intelligence layer analyzes synchronized data to drive automated content curation decisions. This layer applies predefined rules enhanced by data patterns to make production choices that would traditionally require human operators. From automated camera switching based on event location and player involvement to dynamic graphic insertion triggered by specific data conditions, DCM handles the technical execution while maintaining broadcast quality standards.
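To make the contrast with generative AI concrete, here is a minimal, hypothetical sketch of a rule-driven production decision of the kind described above. The event types, camera names, and rule format are invented for this illustration and do not reflect DCM’s actual interfaces; the point is that the same synchronized inputs always yield the same deterministic output:

```python
# Hypothetical sketch of a rules-based production decision, in the spirit
# of the deterministic "synthesize, don't generate" approach described in
# the text. Nothing here reflects Mobii's actual API or rule format.

from dataclasses import dataclass

@dataclass
class Event:
    """A timestamped item from a synchronized sports data feed."""
    kind: str      # e.g. "tee_shot", "putt", "interview"
    player: str
    hole: int

# Predetermined rules: event kind -> preferred camera position.
CAMERA_RULES = {
    "tee_shot": "tee_cam",
    "putt": "green_cam",
    "interview": "roving_cam",
}

def pick_camera(event: Event, followed_player: str) -> str:
    """Deterministically choose a camera for one personalized stream."""
    if event.player != followed_player:
        return "wide_cam"                  # stay wide for other players
    base = CAMERA_RULES.get(event.kind, "wide_cam")
    return f"hole{event.hole}_{base}"

print(pick_camera(Event("putt", "Smith", 7), "Smith"))  # hole7_green_cam
print(pick_camera(Event("putt", "Jones", 7), "Smith"))  # wide_cam
```

Because every decision is a lookup against predefined rules over synchronized data, the output is repeatable and auditable, which is exactly the consistency property that live broadcast requires.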

Deployed as a cloud-agnostic solution, DCM democratizes access by working within any major cloud environment (AWS, Azure, Google Cloud) or on-premises infrastructure based on customer preference. Rather than requiring massive upfront capital investment in proprietary systems, organizations can leverage existing cloud relationships and scale personalization efforts incrementally. The technology complements rather than replaces existing workflows, expanding possibilities without eliminating traditional broadcast roles.

Real-World Economic Impact: LIV Golf

LIV Golf’s implementation of Mobii’s Dynamic Cloud Mixer demonstrates the economic transformation that intelligent automation enables. The challenge was straightforward: deliver unprecedented viewing flexibility without the traditional production costs associated with multiple broadcast streams while meeting growing viewer expectations for personalized experiences.

The DCM implementation ingests and synchronizes up to 36 direct camera feeds in the cloud, achieving frame-accurate synchronization with real-time golf data and audio. The platform’s Synthetic Intelligence layer then autonomously creates 18 Group Streams (enabling fans to follow specific players within each group) and 13 Team Streams (allowing fans to watch all players on a team in a multiview) – a total of 31 personalized streams plus the main world feed, all generated from the same source content.

DCM’s automation handles every aspect of production: switching between cameras as players move between holes, integrating real-time graphics for each personalized stream, making split-second production decisions based on live data, and delivering consistent broadcast-quality output without manual intervention. Each stream maintains the production values viewers expect from premium sports content while emphasizing different aspects of the event based on the intended audience.

The economic implications extend far beyond production cost savings. LIV Golf can now reach fan segments that would otherwise be underserved – passionate supporters of specific teams or players who previously had to accept the main broadcast’s editorial decisions about coverage allocation. The ability to offer genuine viewing choice creates tangible subscription value, differentiating LIV Golf’s D2C offering in a competitive streaming market. Perhaps most importantly, the infrastructure enables rapid experimentation with new content formats and personalization approaches without disrupting the core broadcast or requiring significant additional investment.

This implementation demonstrates how rights holders can rethink traditional broadcast models. Rather than personalized streams replacing or competing with the main broadcast, they enhance the overall fan experience and create new engagement opportunities. The future of sports broadcasting isn’t about creating one perfect feed – it’s about building an ecosystem of content that meets diverse viewer preferences while maintaining production quality and operational efficiency.

Industry-Wide Democratization

While golf provides a compelling case study, the democratization implications of DCM extend across the sports broadcasting landscape. Warner Bros. Discovery’s implementation of the Dynamic Cloud Mixer for NASCAR in-car driver feeds on the Max platform demonstrates the technology’s versatility. Processing up to 40 individual driver camera sources with synchronized telemetry data and automated audio ducking, DCM creates individual driver feeds and multiview composites that would be economically impossible using traditional production approaches.

The platform’s ability to ingest diverse inputs, synchronize them with frame-accurate precision, and generate multiple personalized outputs through Synthetic Intelligence creates opportunities for organizations of all sizes. Smaller leagues and federations can now access production capabilities previously reserved for major sports properties. Emerging sports can offer viewing experiences that compete with established properties. Regional sports networks can provide personalization that creates genuine value for local fan bases. The common thread? Making cutting-edge capabilities accessible regardless of organizational size or existing infrastructure investment.

This democratization also enables new business models. Organizations can experiment with tiered subscription levels based on personalization access, create sponsor activation opportunities through targeted streams, and develop content strategies that serve previously ignored audience segments. The economic risk of innovation drops dramatically when new content formats don’t require proportional production cost increases. DCM’s cloud-agnostic deployment means organizations can start with core event coverage and incrementally expand personalization options based on audience engagement data.

Beyond core video processing, DCM extends through a comprehensive Media Services pipeline that offers sub-second global distribution in industry-standard formats (DASH and HLS in CMAF), frame-accurate synchronization between different streams, and real-time interactive experiences. This end-to-end approach ensures that the personalized content DCM creates reaches audiences with the quality and latency standards that premium sports content demands.

The Path Forward

As 2026 approaches, the sports broadcasting industry stands at an inflection point. The question has shifted from “can we personalize content?” to “how do we monetize personalization effectively?” Early adopters of the Dynamic Cloud Mixer gain significant competitive advantages in the D2C marketplace. Organizations that democratize access to premium content capabilities position themselves to capture audience segments and revenue opportunities that competitors leave untapped.

The convergence between traditional sports broadcasting and streaming platform capabilities continues accelerating. Audiences expect the choice, control, and personalization they receive from entertainment streaming services. DCM provides the economic framework that makes meeting these expectations financially sustainable rather than prohibitively expensive. The platform’s Synthetic Intelligence delivers the consistency required for broadcast while the automation delivers the scalability required for personalization.

The democratization of innovation itself may prove the most significant impact. When cutting-edge production capabilities become accessible to organizations regardless of their traditional broadcast infrastructure or budget constraints, the entire industry benefits. Competition drives creativity, niche content finds its audience, and viewers gain the personalized experiences they increasingly demand. The Dynamic Cloud Mixer transforms personalization from a resource question into a strategy question – not “can we afford to personalize?” but rather “how do we best serve our audience segments?”

For sports organizations evaluating their D2C strategies, the economics of personalization have fundamentally changed. The production cost barriers that once made multi-stream personalization a luxury reserved for the wealthiest properties have fallen. The democratization of premium content personalization is no longer a future aspiration – it’s an available reality transforming how sports content reaches passionate audiences worldwide. The Dynamic Cloud Mixer, powered by Synthetic Intelligence, provides the foundation for this transformation.

Yospace – Dynamic Ad Insertion: Turning CTV Scale Into Sustainable Revenue

Paul Davies, Head of Marketing, Yospace

Connected TV (CTV) is reshaping broadcasting, blending the polish of traditional television with the personalization of digital media. Viewers now expect high-quality experiences with added relevance, functionality, and choice, while advertisers seek the reach and trust associated with television, coupled with digital advertising benefits such as targeting and accurate measurement.

Dynamic ad insertion (DAI) sits at the heart of this transformation. By allowing broadcasters to replace static ad breaks with individually targeted spots delivered in real time, DAI makes advertising more relevant and campaigns more valuable. Done well, it creates a virtuous circle: viewers encounter ads that feel meaningful rather than disruptive, and broadcasters open up new inventory and revenue opportunities. But if the technology is poorly executed, the result can be jarring playback, missed impressions, and damaged brand trust.

Keeping Viewers Engaged

The first principle of DAI is that the audience should not notice it. Seamless transitions, stable audio levels and consistent quality across devices are essential to preserving a premium viewing experience. Even the slightest buffering or visual “join” can encourage viewers to switch off or migrate to ad-free alternatives.

Relevance is equally important. Advertisers now expect one-to-one targeting and frequency management to avoid overexposure. Sophisticated ad platforms can deliver this by combining server-side ad insertion (SSAI) or server-guided ad insertion (SGAI) with real-time feedback loops that cap exposure and refresh creative dynamically, all without interrupting the stream.
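Frequency management of this kind is, at its simplest, a sliding-window counter per viewer and creative. The following is a minimal sketch of that idea, not Yospace's implementation; the class and parameter names are hypothetical.

```python
import time
from collections import defaultdict, deque

class FrequencyCap:
    """Allow at most `cap` impressions per (viewer, creative)
    within a sliding window of `window_s` seconds."""
    def __init__(self, cap=3, window_s=3600):
        self.cap = cap
        self.window_s = window_s
        self.seen = defaultdict(deque)  # (viewer, creative) -> timestamps

    def allow(self, viewer, creative, now=None):
        now = time.time() if now is None else now
        q = self.seen[(viewer, creative)]
        # Drop impressions that have aged out of the window.
        while q and now - q[0] > self.window_s:
            q.popleft()
        if len(q) < self.cap:
            q.append(now)
            return True
        return False
```

A production ad decisioning system would persist this state across servers and combine it with targeting and pacing rules, but the cap check itself stays this simple.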

Reaching Every Screen

Maximizing monetization means serving targeted ads wherever the audience chooses to watch. That is easier said than done. The CTV landscape encompasses smart TVs, set-top boxes, mobile apps, and legacy devices that remain surprisingly prevalent in many households (in today’s hi-tech world, a smart TV that is under five years old can count as a legacy device). To capture every impression, ad systems must support multiple streaming protocols and remain compatible with both the newest 4K sets and older connected televisions with limited processing power.

The challenge grows when content is distributed via third-party apps or syndication partners, where broadcasters have less control over the playback environment. Emerging standards such as Common Media Client Data (CMCDv2) promise to help close these gaps by enabling consistent reporting across a wide variety of players.
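For context, CMCD v1 (CTA-5004) defines a registry of keys such as `sid` (session ID), `br` (encoded bitrate in kbps), and `bl` (buffer length in ms), which a player can attach to each media request, commonly as a reserved `CMCD` query parameter; CMCDv2 extends this reporting model. The sketch below shows the v1 query-parameter serialization (sorted keys, quoted strings, boolean `true` sent as a bare key, then percent-encoded); consult the specification for the full key registry and the header-based modes.

```python
from urllib.parse import quote

def cmcd_query(keys):
    """Serialize CMCD keys into the reserved `CMCD` query parameter:
    comma-separated key=value pairs, string values double-quoted,
    boolean true sent as the bare key, then percent-encoded."""
    parts = []
    for k, v in sorted(keys.items()):
        if isinstance(v, bool):
            if v:
                parts.append(k)
        elif isinstance(v, str):
            parts.append(f'{k}="{v}"')
        else:
            parts.append(f"{k}={v}")
    return "CMCD=" + quote(",".join(parts))
```

Because the payload travels with every segment request, server-side components can aggregate it into the kind of consistent, player-independent reporting the article describes.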

Creating New Inventory

Advanced DAI is not only about replacing traditional ad breaks; it can also unlock entirely new revenue streams. Techniques such as SGAI enable innovative formats, including squeezebacks, where the content is scaled down on screen to make room for an ad. Meanwhile, “contingency pods” open up more inventory during live events.

These capabilities are particularly valuable for sports and entertainment, where unpredictable moments, such as a last-minute goal, can generate sudden audience spikes and command premium pricing. Temporary event channels and replay modes, once too complex or expensive to monetize, can now generate incremental impressions with far less operational overhead.

Scaling For Major Live Events

Live broadcasting remains one of the most lucrative arenas for ad-funded streaming, but it also places the greatest strain on infrastructure. Audience surges during high-profile matches or breaking news can overwhelm ad servers if systems are not engineered for scale. Intelligent prefetching, where ad calls are paced to anticipate peak demand, is vital to maintain fill rates and protect revenue when viewer numbers suddenly climb.
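One simple way to pace ad calls, sketched below under assumed names, is to jitter each session's ad-decision request uniformly across a lead window before the break rather than firing every request at the moment the break starts. Real prefetching systems are far more sophisticated, but the smoothing effect is the same.

```python
import random

def prefetch_schedule(n_sessions, lead_time_s):
    """Spread ad-decision calls for n_sessions uniformly across the
    lead_time_s seconds before a break, returning the offsets (in
    seconds) at which each call should fire, sorted in firing order."""
    return sorted(random.uniform(0, lead_time_s) for _ in range(n_sessions))
```

With a 30-second lead window, a spike of one million concurrent viewers becomes roughly 33,000 ad calls per second instead of one million at once, which is the difference between a survivable load and an overwhelmed ad server.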

Measurement As a Commercial Imperative

Advertisers used to the accountability of digital platforms will not commit significant budgets to CTV without transparent, trusted metrics. Accurate reporting of impressions, viewability, and completion rates is therefore non-negotiable. In addition, the most effective implementations provide real-time dashboards fed by log-level data, enabling broadcasters to detect errors, optimize campaigns mid-flight, and demonstrate clear return on investment.

Industry standards are beginning to harmonize measurement practices, but broadcasters cannot afford to wait. Platforms that combine server-side delivery with lightweight client-side data capture are already providing the precision advertisers demand while maintaining the broadcast-quality experience audiences expect.

Strategic Capability

Dynamic ad insertion is no longer a peripheral feature; it is a strategic capability that must be embedded at the core of streaming operations and adtech infrastructure. Broadcasters who master it will be well-positioned to compete with global digital giants by offering the reach of television, the accountability of online advertising, and the flexibility to adapt as viewing habits continue to evolve.

Investment in robust technology, workflow integration, and emerging standards may be demanding, but the rewards are clear. With the right approach, every stream, every screen, and every unexpected audience spike can become an opportunity to grow revenue, without compromising the viewer experience.

 

TVU Networks – Cloud and the New Economics of Live Media

Rafael Castillo, VP/GM EMEA and Latin America, TVU Networks

Cloud production is transforming media economics — broadening who can create, compete, and profit from content. In a fragmented, platform-centric market, success depends less on infrastructure ownership and more on the ability to move fast, reach audiences everywhere, and monetize every moment.

Democratization: Who Gets to Create Content Is Changing

For the first time in media history, high-quality live production has become financially accessible. Productions that once required fleets of satellite trucks, extensive links, large crews, and big budgets can now be delivered through cloud-based, virtualized workflows, aggregated connectivity, and remote operations. The result is economic: the lower the cost to produce, the higher the volume of content that becomes commercially viable.

  • More creators entering media
  • More content to monetize
  • More audience segments served

Cloud unlocks direct-to-consumer business models, enabling rights-holders and brands to launch their own streaming channels and keep revenue relationships in-house. Democratization doesn’t lower quality: it broadens opportunity.

Convergence: Who Competes for Audiences is Changing

Today’s competition for attention is no longer limited to broadcasters. It’s broadcasters versus streamers, rights-holders, brands, and anyone with a live audience. Distribution has shifted from traditional channels to platforms and communities, and revenue has shifted accordingly. Streamers have become some of the most watched live personalities worldwide. TVU supports leading US creators like IShowSpeed and Kai Cenat, whose productions now rival live television in both scale and technical complexity.

A powerful signal of the shift:
IShowSpeed’s 35-day, 24/7 cross-country livestream transformed a tour bus into a mobile broadcast studio. Live switching, synchronized multi-camera workflows, and cloud collaboration continued seamlessly — from stadiums to streets to highways. This demonstrates how far creator-driven production has advanced: combining professional reliability with interactive viewer experiences. This is no longer just a broadcast industry. It’s a live audience industry — and the playing field is wide open.

Enterprise AV — Where Growth and New Revenue are Emerging

MediaTech is rapidly expanding into parallel sectors where live content enables mission-critical communication and monetizable engagement:

  • Corporate town halls and investor communications
  • Live education and training experiences
  • Medical demonstrations and telehealth
  • Hospitality and retail brand activation
  • Smart city and emergency communications

Many of these organizations have live audiences without a broadcast unit. Cloud solutions allow them to adopt professional production standards, serving employees, customers, stakeholders, and citizens with real-time, high-value content.

This has opened a significant new revenue channel for production companies and system integrators who now serve both media and enterprise markets.

 

Monetization: Where Value is Captured is Changing

Audiences no longer gather in one place — and revenue no longer comes from one stream.

Advertising, subscriptions, sponsorships, rights licensing, and data monetization now coexist across dozens of distribution channels.

For content owners, this means a single live event can generate value:

  • on linear broadcast
  • on direct-to-consumer streaming platforms
  • in highlight-driven social engagement
  • in FAST channels with dynamic ad insertion
  • through long-tail replay, clip licensing, and archive monetization

Cloud-native workflows shorten the distance between creation and consumption, enabling rights-holders to publish to multiple outlets simultaneously and optimize yield per platform.

AI further enhances this value chain by identifying key moments inside the content — helping media organizations extract more revenue from every second they produce:

  • faster highlight turnaround
  • new sponsorship inventory
  • metadata-driven rights packaging

Monetization has become a real-time discipline — measured in seconds, not weeks.
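The "faster highlight turnaround" idea above can be sketched as a small post-processing step: given the timestamps of AI-detected key moments, pad each into a clip window and merge overlapping windows into publishable highlight ranges. The function and padding values below are hypothetical, chosen only to illustrate the shape of the workflow.

```python
def clips_from_events(events, pre_s=5.0, post_s=8.0):
    """Turn timestamped key moments (seconds into the program) into
    (in, out) clip ranges, merging ranges that overlap so back-to-back
    moments become a single highlight clip."""
    ranges = sorted((t - pre_s, t + post_s) for t in events)
    merged = []
    for start, end in ranges:
        if merged and start <= merged[-1][1]:
            # Overlaps the previous clip: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```

The point is less the code than the latency: once moment detection is automated, clip ranges like these can be cut and published in seconds, which is what turns highlights into real-time sponsorship inventory.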

Sustainability: How You Produce Determines What You Can Win

Sustainability has shifted from a “best practice” to a mandatory part of procurement. Rights-owners now expect, and increasingly require, green production models. Traditional live workflows have a substantial carbon footprint: vehicle fleets, cabling, generators, and flown crews.

Cloud-powered production reduces that footprint dramatically:

  • Smaller teams on site
  • Fewer vehicles and less power
  • Far more remote control-room execution

The results are measurable and meaningful:

France Télévisions reduced transmission costs by 92% and prevented over 600 tons of CO₂ emissions during the 2024 Olympic Torch Relay by using TVU’s cloud-native remote workflows. Sustainability has now become a key factor that directly affects who wins future contracts.

Leadership: Who Shapes The Future Is Changing

Power used to belong to those who owned the most hardware. Tomorrow’s leaders will be those who:

  • reach audiences directly
  • monetize every moment
  • operate globally with flexible scale
  • build responsibly and sustainably
  • innovate faster than legacy infrastructure can move

This is a more inclusive, profitable, and resilient industry.

Conclusion

Cloud production isn’t just a technical update — it’s a core business shift that:

  • Broadens who can create
  • Changes who competes for attention
  • Speeds up how value is generated
  • Rewards responsible creators
  • Enables new leaders to rise

TVU Networks is proud to support this change — helping every storyteller produce, distribute, and monetize live content efficiently, sustainably, and globally.

The future of media won’t be defined by who controls the most infrastructure — but by who unlocks the most opportunity.