Quickplay – Immersive Entertainment’s Next Frontier: The Convergence of AR, VR, and OTT

Prabhu Mohan, Senior Software Engineer, QA, Quickplay

 

Introduction

The media and entertainment landscape is evolving rapidly, driven by technological innovation and shifting consumer expectations. Among the most transformative developments is the convergence of Augmented Reality (AR), Virtual Reality (VR), and Over-The-Top (OTT) streaming platforms. This fusion is not just enhancing how content is consumed, but it is redefining the economics of the industry by creating new revenue streams and business models while democratizing access to immersive, interactive entertainment experiences.

For content creators, MediaTech providers, and direct-to-consumer platforms, the opportunities are immense. But so are the challenges. Understanding this convergence is key to unlocking future growth and delivering the engaging consumer experiences demanded by today’s digital audiences.

OTT’s Evolution Toward Immersive Experiences

OTT platforms have fundamentally changed media consumption by offering vast libraries of on-demand content accessible anywhere, anytime. Yet, as OTT markets mature and competition intensifies, simply offering more content is no longer enough. Consumers crave richer, more interactive experiences that create deeper emotional engagement. And younger viewers, in particular Gen Z, are accustomed to and prefer immersive experiences. In fact, a JCDecaux report found that 88% of Gen Zers believe immersion is what makes an experience fun.

AR and VR technologies allow viewers to step inside the story rather than passively observe it. Virtual Reality creates fully immersive environments where users feel physically present within content worlds, while Augmented Reality overlays digital elements onto the real world, blending physical and virtual in compelling ways.

This convergence transforms OTT from a traditional “screen and play” model into an experiential platform while enhancing the value proposition for consumers, offering new paths to monetization and delivering the experiences younger audiences demand.

New Revenue Streams Through Immersive Engagement

By integrating AR and VR, OTT providers can introduce multiple revenue-generating innovations:

  • Premium immersive content tiers: Exclusive VR experiences, such as virtual “theater” screenings or 360-degree narratives, can command higher subscription fees or one-time access payments.
  • Social VR environments: Enabling users to watch content together in virtual spaces, interact via customizable avatars, and share real-time reactions fosters community, increasing retention and upsell opportunities.
  • Interactive advertising: Brands can embed interactive product placements or sponsored virtual assets within immersive experiences, blending advertising and content seamlessly.
  • Bundled subscriptions: Combining traditional OTT offerings with immersive content packages appeals to diverse consumer preferences and maximizes ARPU (average revenue per user).

These monetization avenues diversify income beyond standard subscriptions and ad-supported models, helping OTT providers differentiate their offerings and capture evolving consumer interests.

Cross-Industry Convergence Expands Market Potential

The impact of AR/VR and OTT convergence extends far beyond entertainment, creating a fertile ecosystem of parallel markets and economic opportunity:

  • Gaming: Immersive gaming on platforms like Oculus drives substantial revenue through digital purchases, subscriptions, and in-app transactions. In fact, eMarketer predicts that 75% of Gen Z will engage with digital gaming by 2027.
  • Education: VR and AR applications in virtual classrooms and experiential learning increase engagement and unlock new monetization for educational content providers.
  • Virtual collaboration: Businesses are adopting VR-powered meetings and events, merging professional media consumption with immersive technology and expanding B2B OTT use cases.
  • Virtual tourism and experiences: Immersive virtual tours and cultural experiences provide content providers access to the growing “experience economy,” opening new paths for revenue creation.

This multi-sector convergence accelerates MediaTech innovation, pushing OTT providers to adapt and partner across industries for comprehensive consumer experiences.

Case Study: A VR OTT Experience with a Global Franchise

For example, let’s imagine a VR-enhanced OTT experience centered on a blockbuster franchise like Marvel’s Avengers. Offering viewers the chance to immerse themselves in key moments alongside favorite characters transforms the typical viewing session into an interactive adventure. This deepened emotional engagement boosts subscriber retention, justifies premium pricing, and generates buzz, all of which are critical factors in today’s crowded streaming landscape.

Such experiences can serve as marquee offerings in OTT content catalogs, driving subscriber acquisition and differentiating services in a commoditized market.

Wearable Tech and OTT: The Next Consumer Frontier

Beyond content, the convergence also depends on hardware innovation. Smart glasses and other wearable devices promise to bring immersive OTT content to a broader audience with portable, personal “theater” experiences. Demos integrating smart glasses with OTT apps showcase how users can consume high-quality video and AR overlays seamlessly in everyday environments.

 

This hardware-content synergy unlocks new business models involving device manufacturers, content owners, and platform providers, expanding OTT’s reach.

 

Challenges and Strategic Opportunities

Despite its promise, the AR/VR and OTT convergence is not seamless. It faces hurdles such as:

  • Production costs: Creating high-quality immersive content is expensive, requiring new workflows and skilled talent.
  • Hardware accessibility: Consumer adoption depends on affordable, comfortable devices that can deliver seamless experiences without causing fatigue.
  • Interoperability: Standardization across platforms and devices is needed to ensure broad content availability and user convenience.
  • User experience balance: Immersive experiences must be engaging yet intuitive, minimizing friction for casual users.

To overcome these challenges, strategic investments in R&D, collaboration across content, technology, and device ecosystems, and customer-centric design will be essential. Democratizing immersive content experiences is not only a technological task but a business imperative.

Conclusion: Transforming Consumer Experiences and Economics

The convergence of AR, VR, and OTT streaming is redefining how consumers engage with entertainment. By expanding beyond passive viewing to immersive, interactive experiences, this fusion opens new avenues for monetization, business differentiation, and cross-industry partnerships. Media and MediaTech providers who embrace this transformation early and innovate boldly will be best positioned to capture the future’s evolving consumer landscape.

Throughout this decade, the fusion of immersive technologies and OTT platforms will be a key driver of the next phase of entertainment economics, defined by democratized access, new revenue models, and richer, more personalized consumer experiences.

 

Author Bio

Prabhu Mohan is a Senior Software Engineer in Quality Assurance at Quickplay, specializing in immersive media technologies and OTT platforms. With a passion for innovation in entertainment and MediaTech, Prabhu focuses on delivering seamless and engaging consumer experiences through cutting-edge solutions in AR and VR.

Net Insight – Protecting Live Production in the IP Era: Why Media-Specific Security Is Non-Negotiable

Paul Evans, Solution Area Expert, Net Insight

Live media production has always been a high-wire act. With tight timings, unforgiving audiences, and high-value rights on the line, live production is a world where stability is everything and any disruption quickly becomes a headline. Live production today is no longer contained within the walls of a single facility. It spans venues, OB trucks, cloud services, third-party studios, and remote teams. In this environment, the concept of a traditional network perimeter no longer applies. What replaces it must be smarter, more adaptive, and tailored to the specific needs of IP media workflows.

The security challenges of IP

 The transition to IP has unlocked new levels of flexibility and efficiency for media workflows. But as the industry moves away from SDI’s predictable, point-to-point connectivity, it is trading physical isolation for increased complexity. IP media traffic moves fluidly across networks, domains, and geographies — sometimes even via unmanaged public infrastructure.

These open workflows create new points of vulnerability. Live production environments, in particular, operate on extremely tight tolerances. Stream misconfigurations, bitrate mismatches, or minor equipment errors can have major downstream effects — interrupting service, degrading signal quality, or overwhelming critical network links. In an IP-native environment, these risks can emerge not from external cyber threats, but from internal operational complexity. Media organizations are increasingly aware that even unintentional configuration errors, like routing a 4K signal where only HD was expected, or mismanaging jitter and timing parameters, can threaten service continuity. Combined with the growing use of remote contribution and hybrid cloud workflows, the security posture of a media network must now include both protection and prevention.

The Limits of traditional IT security

Today’s live production environment is fluid. Cameras in one country, switching in another. Localization, such as ad insertion, added in the cloud. Streams are handed off between facilities, production partners, and platforms. The paths are virtual, but the risks are very real.

What happens when a misconfigured feed enters the wrong network domain? When a remote signal floods a switch mid-broadcast? These aren’t speculative threats. They’re real-world failures, many of which never make it into public view — but are all too familiar to the engineers who fight to keep broadcasts on air.  When it comes to securing video over IP, generic enterprise firewalls are often the first line of defense. But while these tools are essential for safeguarding IT infrastructure, they fall short in media applications.

Unlike conventional IT traffic, IP media streams require high throughput, ultra-low latency, and deterministic behavior. Firewalls not designed for real-time media introduce bottlenecks and increase latency, all at a premium cost. Worse, they often lack visibility into media-specific parameters such as jitter, video-specific quality metrics, or audio silence, making them ill-equipped to monitor or troubleshoot content delivery issues.
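To make that media-specific visibility concrete, here is a minimal, illustrative Python sketch of the kind of measurement a media-aware monitoring point can make but a generic firewall typically cannot: the RFC 3550 interarrival-jitter estimate derived from RTP timestamps and packet arrival times. The 90 kHz clock and the sample values are assumptions for illustration, not taken from any particular product.

    # Illustrative only: RFC 3550 interarrival jitter estimate for an RTP stream.
    # Assumes a 90 kHz video clock; real tooling would parse RTP headers off the wire.
    RTP_CLOCK_HZ = 90_000

    def update_jitter(jitter, prev_transit, arrival_s, rtp_timestamp):
        """Return (new_jitter, transit) per RFC 3550 Section 6.4.1, in RTP clock units."""
        transit = arrival_s * RTP_CLOCK_HZ - rtp_timestamp
        if prev_transit is None:
            return 0.0, transit
        d = abs(transit - prev_transit)
        return jitter + (d - jitter) / 16.0, transit

    jitter, prev_transit = 0.0, None
    # (arrival time in seconds, RTP timestamp) pairs captured from the stream
    for arrival_s, ts in [(0.000, 0), (0.041, 3600), (0.079, 7200), (0.124, 10800)]:
        jitter, prev_transit = update_jitter(jitter, prev_transit, arrival_s, ts)
    print(f"interarrival jitter is roughly {jitter / RTP_CLOCK_HZ * 1000:.2f} ms")

A monitoring point that tracks this value per flow can flag a degrading contribution link long before viewers see the effect, which is exactly the visibility described above.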

Live production raises the stakes

In live production, the margin for error is razor-thin. Missed frames, dropped feeds, or brief blackouts can result in reputational damage and significant financial loss. With the globalization of media rights, including syndication agreements, international rights holders, and simultaneous multi-platform delivery, the exposure is far-reaching. A single disruption at one point in the chain can ripple across hundreds of downstream feeds.

Compounding the challenge is the need for speed and adaptability. Today’s productions often come together with minimal setup time as production companies look to minimize costs, leveraging a mix of in-house and third-party resources. OB trucks may connect to central production hubs. Remote commentators might join from home studios. Feeds traverse from dedicated fiber networks to unmanaged Internet connections.

In this context, security cannot be a static checklist—it must be embedded into the design of the network itself. Facilities need the ability to onboard new sources dynamically, validate streams in real time, and enforce rules that prevent disruptions before they occur.
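As a hedged illustration of what validating streams in real time can mean in practice, the sketch below checks an announced flow against the profile a receiving domain expects before the flow is admitted. The field names, codec, and limits are hypothetical and are not drawn from any particular product or standard.

    # Hypothetical pre-admission check for an incoming media flow.
    # Field names and limits are illustrative; a real gateway would derive
    # its expected profile from the provisioned service agreement.
    EXPECTED = {"codec": "JPEG-XS", "resolution": (1920, 1080), "max_mbps": 800}

    def admit_flow(announced: dict) -> tuple[bool, str]:
        if announced.get("codec") != EXPECTED["codec"]:
            return False, f"unexpected codec {announced.get('codec')}"
        if tuple(announced.get("resolution", ())) != EXPECTED["resolution"]:
            return False, f"resolution {announced.get('resolution')} not permitted"
        if announced.get("mbps", 0) > EXPECTED["max_mbps"]:
            return False, "bitrate exceeds the capacity provisioned for this link"
        return True, "admitted"

    # A 4K flow offered where only HD was provisioned is rejected before it
    # can overwhelm the downstream link.
    print(admit_flow({"codec": "JPEG-XS", "resolution": (3840, 2160), "mbps": 1400}))

Rejecting the flow at the boundary, rather than discovering the mistake mid-broadcast, is the kind of rule enforcement described above.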

The Role of standards and interoperability

The industry has begun to formalize these practices through standards like SMPTE RP 2129, which defines how media streams should be securely and predictably exchanged between networks. These guidelines provide a blueprint for building robust, interoperable media workflows that don’t sacrifice agility for security.

At the same time, innovation is accelerating. Solutions designed specifically for high-bandwidth media flows demonstrate how off-the-shelf platforms can simplify security while preserving performance. By integrating stream validation, traffic shaping, flow replication, and monitoring into a single device, these tools make it easier for media organizations to deploy protection without compromising quality or scalability.

Security as an enabler

Too often, security is seen as a hurdle to be overcome – a set of constraints that slow down progress. But in IP media, the right security model is an enabler. It allows organizations to experiment without fear and scale without fragility.

As live production grows more complex and interconnected, media companies can no longer rely on generic solutions or best-effort protections. They need purpose-built safeguards that understand the unique demands of media transport and deliver control without compromise. Media companies that lead in the IP era will be those that embed resilience into every layer of their operations, where security isn’t an afterthought, but an architectural pillar of live workflows.

 

IBC 2025 – Shaping the Future

We spoke to Mike Crimp, CEO of IBC, about expectations and themes for this year’s show in Amsterdam, and also how IABM members can maximize their returns from IBC.

Your theme for this year’s show is Shaping the Future – please explain the thinking behind this, and where you see the M&E industry heading over the coming years.

Shaping the Future is all about connection and innovation as our industry navigates a period of accelerated change with a clear sense of direction. Across media and entertainment, there’s a growing acknowledgement that the business models, technologies, and creative processes within the industry are being redefined. The media sector has rapidly expanded, comprising a wider, more diverse ecosystem including digital platforms, content creators, enterprise streamers, and sports organisations. As the scope of media widens, IBC brings all of the creative, technology and business communities together to drive debate, incubate innovation and enable business outcomes.

At the same time, the sector is grappling with critical questions around shifting market realities, transformative technologies like artificial intelligence, and workforce sustainability. The industry isn’t just evolving – it’s actively shaping its future. IBC2025 aims to energise that process by providing a platform for mission-critical insight, shared learning, and sector-wide collaboration.

Please expand on the Shaping the Future theme in terms of the IBC2025 conference schedule – what areas have you picked to concentrate on and why?

The IBC Conference has always aimed to reflect the most significant shifts affecting the media and entertainment industry. This year, under our three core themes of Shifting Business Models, Transformative Tech, and People & Purpose, the agenda addresses pivotal trends shaping the industry moving forward – from AI and human-machine collaboration to immersive experiences and personalised content.

As streaming to large, concurrent audiences becomes more commonplace and expectations around interactivity and quality continue to rise, expert speakers will cover business-critical challenges around technology and advertising models. AI also features heavily across the programme – not just in terms of its creative potential, but how it can drive efficiency, enhance personalisation, and support editorial workflows in real-world environments.

Across the three days, the conference will bring together a broad mix of industry leaders and trailblazers. We’re pleased to welcome speakers from organisations including ITV, YouTube, Sling TV, Warner Bros. Discovery, and NBCUniversal, among others. Topics on the agenda this year include: The business of TV and the search for sustainable growth across new platforms; Live sports and real-time experiences, with an emphasis on production tools and audience engagement; Personalised advertising and the future of commercial models; and Discovery and prominence – how to ensure content is surfaced in a crowded landscape.

The programme will also once again spotlight IBC’s Technical Papers, which continue to celebrate world-renowned, peer-reviewed original research  addressing real-world emerging challenges. From software-defined distribution to new approaches to low-latency streaming, the papers add critical technical insight to the broader discussions taking place.

Although it’s too early to talk about expected visitor numbers, I’ve noticed IBC is back to 14 halls, meaning that it is returning to pre-Covid scale. With so much competition from other shows, why do you think IBC has managed to not just retain, but grow its place in the exhibition calendar?

This year, we’re seeing strong momentum across all exhibitor segments, with nearly 44,000 square metres of space already booked across 14 halls. The exhibitor mix includes major returning players such as AWS, Avid, Adobe, Blackmagic Design, Ross Video, Sony, and Zixi.

The show has always operated as a business, technology and networking event. For many companies, it’s a moment to launch products, reset relationships, and gather market feedback. For others, it’s where strategic discussions happen – from mergers and acquisitions to product strategy or joint R&D. At the same time, our emphasis on networking and connection building means a wide range of attendees – from C-Suite to people starting out in the industry – value IBC as the must-attend event to mix with the people that matter, and find their next breakthrough opportunity. This level of experience is almost impossible to replicate in regional events or digital-only formats.

Crucially, IBC is in constant evolution – we’re always thinking about what’s next for our industry. That’s why we continue to expand our content programme, champion the Accelerator Programme, add new show features to address emerging technologies, and introduce new formats like the AV User Group speed pitch event to support sales across new verticals.

It’s clear that IBC has put a lot of thought into the way the show and visitor flow is structured. Can you dive into the logic behind this for us?

As the event has grown in scale, we’ve made a conscious effort to organise it in a way that helps visitors focus on the areas that matter most to them.  Each hall is organised by specific product sectors. This smart layout makes it easier for attendees to navigate, compare offerings, and quickly find the technologies and solutions they’re looking for.

IBC Future Tech will transform Hall 14, showcasing breakthrough AI technologies, the Accelerator Zone, Talent Programme, IBC Hackfest x Google Cloud, and a range of other features – creating a single location that captures innovation from all angles.

This clustering approach is echoed elsewhere, with dedicated stages and content blocks around core themes – like the Content Everywhere stages, for example. We’ve also ensured that networking spaces are embedded across the halls, allowing for informal conversations and impromptu meetings around the most topical subjects.

The new Future Tech area in Hall 14 looks like a must-visit destination at the show – especially given the overall theme you have chosen this year. Can you give us some hints of what we might be seeing in it?

From AI and immersive experiences to virtual production and sustainable workflows, Future Tech is where you’ll discover what’s next – and connect with the people making it happen. Exhibitors will include large technology vendors such as Google, AWS, Microsoft, and Tata Communications, alongside a host of breakthrough start-ups.

In the Accelerator Zone, visitors will find nine collaborative Proof of Concept projects covering everything from generative AI content frameworks to ultra-low latency streaming and content provenance via C2PA watermarking. These projects are being developed by leading industry names including the BBC, RAI, ITV, Comcast, Globo and Associated Press, with hands-on demonstrations scheduled throughout the show.

Elsewhere in Hall 14, there’s a journey into the past with an installation celebrating a century of television innovation, showcasing iconic hardware and creative milestones the industry has seen – while also providing a glimpse into what the next 100 years might bring. The all new Google AI Penalty Shootout will showcase how AI is being used for real-time decision-making, coaching, and athletic performance analysis. The IBC Hackfest x Google Cloud will bring over 250 digital innovators, tech entrepreneurs, software developers, creatives and engineers to compete in a two-day hackathon to solve real-world M&E challenges using Google Cloud, Gemini AI, and other powerful tech tools.

The IBC Accelerator programme has been a huge success. What has it got for us this year?

The programme continues to demonstrate the value of collaborative innovation and R&D acceleration across the media ecosystem. For 2025, we’re showcasing nine projects that address some of the most pressing challenges in live production, low latency streaming, control room automation and sustainability.

Among them is an AI assistant agent project designed for live newsroom environments – being developed by teams from the BBC, ITN, Google and Cuez. Another project, Private 5G from Land to Sea to Sky, explores mobile production in remote and complex locations, supported by partners such as OBS and the University of Strathclyde. These are not just speculative projects – they are outcome-driven, laser focused initiatives that are set to unlock real-world game changers for media organizations, today and in the long run.

The Accelerator stage in Hall 14 will host presentations and proof of concept demonstrations from each team. It’s a fantastic opportunity for attendees to understand where leading-edge innovation is making a real difference — I’d encourage every visitor to find the time to experience the demonstrations first-hand.

With the IBC Awards deadline approaching, can you give us any insights into what we might be seeing at the ceremony in September?

The IBC Innovation Awards remain a highlight of the show, and this year’s entries reflect the breadth of innovation across media technology and content creation. Five categories are open for 2025: Content Creation, Content Distribution, Content Everywhere, Social Impact, and Environment & Sustainability.

The jury includes representatives from Variety, TVBEurope, Film & TV Video, SVG Europe and others – ensuring wide-ranging perspectives on what real innovation looks like in today’s climate. These awards put a spotlight on the industry pioneers that are reshaping media through technological and social progress. Expect stand-out projects ranging from the biggest live sports events, game-changing audience experiences, to social and environmental breakthroughs. The winners will be announced on Sunday 14 September at the RAI.

How is IBC approaching talent, education and skills development in 2025?

The IBC Talent Programme returns this year with a focus on skills, mentorship, and inclusive recruitment. Scheduled for Friday 12 September, the programme includes a series of sessions developed in partnership with Rise, GalsNGear, SMPTE and Women in Streaming Media. We’re also once again hosting the World Skills Café from the Global Media & Entertainment Talent Manifesto – which will explore sector-wide responses to skills gaps and training challenges.

With production workflows and creative pipelines shifting rapidly, there’s a clear need for the industry to invest in new competencies – whether in data handling, cloud orchestration or ethical AI deployment. The Talent Programme provides a forum for that conversation to happen in a collaborative and forward-looking way.

What advice would you give to IABM members exhibiting at IBC2025 to make the very most of their investment and time in the show?

The most effective exhibitors see IBC not as a standalone event, but as a campaign. That means building awareness ahead of the show through editorial and digital channels, showcasing your new products and trusted expertise, and using the event itself to launch, listen and connect. Whether by speaking on one of the showfloor stages, putting your initiatives forward in the Innovation Awards, or taking part in a range of new features, there are so many opportunities to stand out from your competitors.

We’d also encourage companies to think broadly about who they might meet at IBC. With more enterprise AV buyers attending, and crossover between media and adjacent industries growing, there are opportunities to connect with new markets, partners and customers that might not be immediately obvious. Exhibitors can also get involved in IBC Connect to forge strategic, high-value connections that directly support business growth.

Broadcast-quality technology is increasingly being adopted in the Pro AV sector. How is IBC working to support the emerging Broadcast AV market?

We’ve seen significant growth in enterprise use of video – from internal town halls and product launches to immersive training and experiential marketing. This has created demand for higher-quality, broadcast-standard workflows in the AV space, and we’re seeing increasing crossover in technology choices, procurement teams and service providers.

In response, the AV Speed Pitch event in partnership with the AV User Group will be returning. This brings together around 40 enterprise AV buyers from companies such as Barclays, AstraZeneca, Arup and WPP, offering IBC exhibitors the opportunity to showcase their solutions directly to decision-makers. Alongside that, the show floor will include AV integrators and vendors whose products span both traditional broadcast and modern enterprise use cases.

IABM sees our partnership with IBC as a critical collaboration for our members. How can both IABM itself and all our members around the world work more closely with IBC – and vice versa – to help deliver maximum value for our members at this pivotal time for the whole industry?

Our relationship with IABM and the wider IBC Owner group is fundamental. IBC will continue to champion the work of industry bodies, associations and partner groups to fuel knowledge-sharing, networking and growth opportunities. Our free-to-attend IBC Owner & Partner Programme will deliver a wealth of sessions, covering a range of topics from the latest industry standards and strategies for business transformation, to defining future networks and building new sustainable broadcast infrastructures. I’d encourage all IABM members to maximize the opportunities enabled by IABM at IBC2025, whether through speaking sessions, research presentations or networking discussions. IBC is designed as a collaborative platform to showcase the latest vendor innovations, unlock new leads, and help companies do business. We look forward to catching up with many IABM members in Amsterdam, seeing your latest technology breakthroughs, and hearing your views on the challenges and opportunities shaping the industry’s future.

GB Labs – From Shoot to Seamless Recovery: Rethinking Disaster Recovery and High Availability for Modern Media Workflows

 

Tim Harland, Product Specialist, GB Labs

In the world of media production, “high availability” means different things depending on who you ask. For post teams capturing hundreds of hours of footage at massive shoot ratios, or creative teams under tight delivery deadlines, storage downtime doesn’t just cause frustration—it halts progress.

The need for modern disaster recovery (DR) and high availability solutions has never been more urgent. Yet many teams are still relying on outdated infrastructure—systems built for static backup, not dynamic collaboration.

The Cost of Downtime

In today’s fast-paced media environments, downtime is expensive. When shared storage goes offline, timelines slip, talent waits, and delivery windows are missed. Losing access to project files, raw media, or critical metadata—even temporarily—can result in huge financial losses and damage client trust.

Traditional DR methods like LTO tape or offsite backup may prevent catastrophic data loss, but they rarely offer continuity. They’re built for retrieval, not workflow. And in post-production, that’s no longer enough.

Where Traditional Storage Falls Short

Media workflows are unique—they require high throughput, real-time access, and seamless collaboration across locations. Most legacy systems (SANs or traditional NAS) offer redundancy through hardware-based protections like RAID, mirrored drives, and dual PSUs. But these only guard against internal failures, not total system loss.

When a failure occurs—be it a power outage, hardware issue, or human error—teams scramble to re-route access. Even clustered systems, often sold as high availability solutions, can introduce new risks: complex rebuilds, performance bottlenecks and painful re-optimizations. These aren’t viable in modern media pipelines, especially when deadlines loom.

Rethinking Resilience for Media

Not all cloud storage is created equal. Generic cloud platforms—often built for enterprise IT or document management—may offer scale and redundancy, but they fall short when it comes to high-performance media workflows. Access delays, sync errors, and lack of true collaboration make them a poor fit for teams working with time-sensitive, high-resolution content.

Media-first cloud platforms take a different approach. They’re designed to support editing, collaboration, and asset management directly from the cloud—without needing to duplicate files or re-link projects. However, not all media clouds are created equal either. Many are geared toward small teams, carry hidden costs or struggle with performance under load.

This becomes especially apparent for power users—colorists, finishing artists, and VFX teams—who rely on sustained, high-throughput access to massive files like DPX, EXR, or RAW formats. In these cases, even media-aware clouds can become a bottleneck.

A New Standard: Built-in Resilience Through Hybrid Flexibility

Pure cloud isn’t always practical. File sizes, location-specific bandwidth, or team preferences often demand a hybrid approach. That doesn’t mean giving up the cloud—it means extending it.

Modern hybrid models let you deploy edge devices in remote studios or offices to act as local accelerators. These devices don’t host the data—they simply make it instantly accessible. Files are automatically cached as needed and remote users benefit from fast access without ever touching your core configuration.

More importantly, if a location experiences downtime or loses hardware, another user—or even the same user on another machine—can connect to the same files, in the same place, without interruption.
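As a rough sketch of the behaviour described above, assuming a hypothetical cache directory and a stand-in for the authoritative cloud store (this is not any vendor’s implementation), the idea is that every site resolves the same logical path and fetches content only on first access:

    # Illustrative edge-cache behaviour: every site resolves the same logical
    # path; bytes are pulled from the authoritative store only on a cache miss.
    # fetch_from_cloud() is a stand-in for whatever object-store client is used.
    from pathlib import Path

    CACHE_ROOT = Path("/cache")

    def fetch_from_cloud(logical_path: str) -> bytes:
        raise NotImplementedError("stand-in for the authoritative cloud store")

    def open_media(logical_path: str) -> bytes:
        local = CACHE_ROOT / logical_path.lstrip("/")
        if local.exists():                      # hot: served at local speed
            return local.read_bytes()
        data = fetch_from_cloud(logical_path)   # cold: fetched once, then cached
        local.parent.mkdir(parents=True, exist_ok=True)
        local.write_bytes(data)
        return data

    # Any user, on any machine at any site, asks for the same path:
    # open_media("/projects/ep101/gfx/title_card.exr")

Because the logical path never changes, a user who loses a local device can reconnect from another machine and keep working against the same files, which is the continuity described above.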

A good DR strategy for media should:

 

  • Ensure your files are always available, regardless of hardware or location
  • Maintain consistent file paths and access for users—no matter where they are
  • Eliminate complex restores by keeping your live environment always active
  • Support distributed teams: remote, hybrid, and on-prem
  • Allow instant relocation—teams can move and resume work seamlessly

A New Model for Resilience

Modern media teams don’t just need a Plan B—they need a day-to-day workflow that’s resilient by default.

This means moving beyond static backups and archives toward an always-on, cloud-integrated workspace that supports real-time production. The ideal solution combines a globally accessible media filesystem with high-performance access—whether teams are working remotely, on set, or in centralized facilities.

Editors just want to work, so the system must deliver intelligent, automatic performance enhancements through flexible deployment models. Even bandwidth-constrained environments should support real-time editing of high-resolution formats. If a local device fails or a facility becomes inaccessible, users can continue working from another location or machine—without changing file paths, permissions, or configurations.

Security is integral to this model. Full traceability of user activity and safeguards for compromised users or devices should come built in. Adding new users or securely sharing selective file access with third parties must be straightforward without sacrificing control.

Unlike traditional DR frameworks, recovery isn’t reactive—it’s continuous. Every site, user, and deployment stays connected to the same authoritative data layer, maintaining consistency and operational continuity across the board.

For organizations seeking maximum resilience, this model should also support replication across multiple cloud vendors or geographically distributed instances, delivering true fault tolerance without added complexity.

It’s not just disaster recovery. It’s a smarter, more scalable way to work, and it’s also the foundation that NebulaNAS was built on. NebulaNAS is GB Labs’ cloud storage solution that offers automatic operational continuity for remote teams and workflows.

 

 

Catena – Getting back in Control

Stan Moote, CTO, IABM

How do you build a multi-vendor facility and implement a seamless control system? One capable of spanning local hardware, on-prem, off-prem and multi-cloud systems? This article looks at how IABM’s Control Plane working group has been assisting with the Rapid Industry Solutions (RIS) effort within SMPTE called Catena. The working group has kept a clear focus on avoiding the pitfalls that have derailed several control system standardization efforts over the last couple of decades.

What are those pitfalls?

Standardizing a control system that was specifically designed for an existing product family in the hope that others will “jump on board”

I have seen this approach a few times. Whilst it could work, the proponent is often reluctant or unwilling to modify it to encompass functionality that is identified by the group during standardization. Considerable effort is expended, but the project withers on the vine and the proponent finds something better to do.

Creating a control system standard in a Standards Committee

This approach gets input from a number of stakeholders; however, it is incredibly slow. Some of the prime movers will leave over time. New people will join part-way through and argue that the work already done should be reworked. Being new work that has not been field-tested, it may not gain traction with vendors who have their own systems.

Publishing a Control System Standard with no exposure to potential implementers

With something as complicated as a control system, it is important to encourage independent implementations before the documents are finally published. This allows modifications to be made and interoperability testing to be carried out.

Extending a vendor’s control system in a Standards Committee

The original vendor is often reluctant to expend energy on features that are needed to make the system generic if the vendor does not have need for those features within its product range.

Translators

This approach recognizes that there are many proprietary systems and writes custom protocol translators/converters for all the vendor products in the system. This soon gets out of control – pardon the pun. It is a maintenance headache.
I have seen systems with dozens of translators, mainly because “best-of-breed” products were wanted, each with a dissimilar control protocol/system, making both control and monitoring a nightmare.

Security and Vulnerability

Nobody ever caught a virus over RS-422 or over SDI (often jokingly, yet seriously, referred to as Secure Digital Interface). The point-to-point nature of these interfaces served as an air-gap for content and control flows. Today both travel over IP networks, and the bad actors out there have racked up plenty of victims with “network” as the attack vector. Simply adding in “IT-style” security doesn’t work due to the instantaneous nature of control and monitoring within our industry.

How have those pitfalls been avoided in Catena?

Catena has been developed in SMPTE’s Rapid Industry Solutions group “Open Services Alliance” (RIS-OSA). It is based on vendors’ knowledge of control system requirements and popular protocols, so it is not starting from scratch.

At IBC 2024, IABM put together a “Control Plane Working Group” supported by multiple vendors. At our first online meeting, we decided to build a comparison document to see what overlaps there might be. This was also ideal because, within the RIS-OSA group, we didn’t want to “reinvent the wheel”. Hence we took the appropriate bits from work already in use across several industries – for example, reviewing work from the AES-70 and AMWA NMOS working groups.

“Relationship with IABM was essential in the effort to open the communication to the international Vendor community when it came to the discussion about Catena and Control Protocols in SMPTE RIS OSA. This is an important step to create a high-quality document which feeds into the SMPTE Standards process.”

Thomas Bause Mason, Director of Standards Development, SMPTE.

 

 

Ciro Noronha, CTO of Cobalt Digital and president of the RIST Forum (RIST is developed by an activity group within the VSF – Video Services Forum) took on the task to create a comparison document for the IABM Control Plane group.

The Control Plane group was given access to the draft documents from RIS-OSA, and comments, such as identified feature requirements, were fed back to RIS-OSA before the documents were sent to the SMPTE Standards Group.

The Catena documents are being submitted to SMPTE’s public Committee Draft process. This involves a period of comment resolution after which the documents are publicly released for a period of review, implementation and GitHub comment feedback. The documents are revised in the light of comments received before proceeding through the SMPTE standards balloting process to publication.

Catena directly addresses the security issue – see later in this article.

What’s the current situation with Catena?

Press release from SMPTE shortly after their Standards meeting round:

An Open-Source, Vendor- and Platform-Agnostic Solution, Catena Delivers Single Secure Protocol for Control of Media Devices and Services

At its quarterly SMPTE® Technology Committee meetings, held June 1-3 in Tokyo at Imagica, SMPTE  introduced the initial documents defining the Catena control plane standard. The product of extensive work by SMPTE’s Rapid Industry Solutions Open Services Alliance group (RIS-OSA), the initial Catena documents (known as the ST 2138 suite) were introduced to the SMPTE Standards Community (and its 34CS Technology Committee that focuses on Media Systems Control and Services) to begin the official standardization process. 

“Catena represents one of the most ambitious and essential standardization efforts SMPTE has undertaken in recent years,” said Chris Lennon, Director of Standards Strategy for Ross Video and a SMPTE Fellow. “With media workflows now spanning on-prem, cloud, and hybrid environments, the need for a unified, secure, and vendor-agnostic control plane is more urgent than ever. By introducing the initial Catena documents into the SMPTE Standards Community, we’re inviting the broader industry to help shape a solution that works for everyone, regardless of where their services reside or what platform they use.” 

Hundreds of proprietary protocols are used today to control media devices, creating a control plane challenge across the media industry. In defining and standardizing Catena, SMPTE aims to provide the first and only standardized open-source solution to this challenge. In providing a vendor- and platform-agnostic solution, Catena offers a single secure protocol that is equally suited to controlling very small devices and microservices as it is to controlling the most complex physical devices and services in use by the media industry. 

“One of the fundamental challenges facing our industry is managing devices and services across a fragmented infrastructure, and proprietary control protocols are simply not up to the task,” said Thomas Bause Mason, SMPTE Director of Standards Development. “Catena offers a new model based on open standards, community-driven development, and a pragmatic path to implementation. Designed to address every device, service, and system in any environment, it offers the adaptable, future-proof approach we need.” 

The initial suite of Catena documents introduced this week consists of:  

  • ST 2138-00: Catena Overview 
  • ST 2138-10: Catena Model 
  • ST 2138-11: gRPC Connection Type 
  • ST 2138-12: REST Connection Type 
  • ST 2138-50: Catena Security 

These documents are transitioning into SMPTE’s 34CS TC to begin the official standardization process. SMPTE has also stood up its Catena repository on GitHub, which includes interface files, schema, and other supporting resources. SMPTE’s plan is to advance these initial Catena documents to Public Committee Draft (PCD) status as soon as practical, then pause development to give implementers time to integrate Catena into their products and provide feedback. Following this implementation and review period, the documents will quickly move forward through the final standardization and approval stages. 

“The industry has long needed a common control layer that actually reflects how we operate today — across clouds, platforms, and vendors,” said Stan Moote, CTO of IABM. “Catena offers a standards-based path forward that brings the transparency and scalability needed for smart, efficient resource management in distributed environments. It’s encouraging to see this kind of progress being made openly, with broad collaboration through SMPTE and engagement from IABM’s Control Plane Working Group, bringing the supplier community into the process.” 

A note about Catena security from John Naylor, Ross Video.

I hope one day to claim: “nobody ever caught a virus over Catena”. Time will tell, but you can help – Catena is an Open Source project which means that it can be inspected by anyone and security holes can be spotted and fixed using the SMPTE GitHub repo.

Focusing on control in broadcast: when we migrated what had worked over RS-422 to IP, we did so with little thought to information security. That was excusable in the 90s and 00s. It clearly no longer is, which is why Catena includes a robust, standards-based approach to security.

Catena security (ST 2138 part 50) requires:

  • Transport Layer Security for data in motion – which guarantees authenticity, integrity, and confidentiality.
  • Fine-grained access control using the IETF’s OAuth2 suite of standards – in practical terms this means that you can let your IT admin upgrade devices, but not adjust creative parameters such as EQ settings. And by corollary, your TD can punch the show but not upgrade the switcher. If that’s what you want, of course – a simple scope-check sketch follows this list.
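As a loose illustration of that role separation (the scope names and device operations below are hypothetical and are not taken from the ST 2138 documents), an authorization check driven by the scopes carried in an OAuth2 access token might look like this:

    # Hypothetical scope check: the scopes granted to an access token decide
    # which operations a caller may perform; all names are illustrative only.
    REQUIRED_SCOPE = {
        "upgrade_firmware": "device:admin",
        "set_eq_parameter": "audio:operate",
        "cut_to_program":   "video:operate",
    }

    def authorize(operation: str, token_scopes: set[str]) -> bool:
        needed = REQUIRED_SCOPE.get(operation)
        return needed is not None and needed in token_scopes

    it_admin = {"device:admin"}
    td = {"video:operate", "audio:operate"}

    assert authorize("upgrade_firmware", it_admin)       # IT can upgrade devices...
    assert not authorize("set_eq_parameter", it_admin)   # ...but not touch the EQ
    assert authorize("cut_to_program", td)                # the TD can punch the show
    assert not authorize("upgrade_firmware", td)          # but not upgrade the switcher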

But we don’t lose sight of the user’s mission-critical aim of getting (and keeping) their content on-air, which is why some of the Zero Trust principles (such as checking every single request against an authorization server) are relaxed to promote better determinism, thereby preserving the operator’s artistic intent.

Once again, I hope one day to claim: “nobody ever caught a virus over Catena”. Time will tell.

Cheqroom – From Chaos to Clarity: How Asset Management Powers MediaTech Resilience

Empower Your Teams by Making Equipment Reservations and Tracking Easy

Asset management often flies under the radar in media organizations, yet its impact is profound. Mishandled or poorly managed equipment can bring operations to a halt, leading to substantial financial losses. Effective equipment management is essential, as organizations need accurate, accessible records of their assets’ location, status, maintenance needs, and utilization rates to prevent compliance fines and lost equipment.

This drove us to create our 2025 State of Enterprise Asset Management Report, investigating how organizations manage their physical assets and evaluate their current processes. Our research revealed actionable insights into gear management challenges, showing that while many organizations grapple with inefficiencies, disconnected systems, and outdated processes, there’s a strong appetite for transformation.

The Hidden Link Between Asset Management and Operational Resilience

Physical assets, from AV equipment to studio gear, are essential for production. Our survey found that 90% of professionals struggle with equipment management. Common pain points include broken or missing equipment, double bookings, and time-consuming manual tasks, all stemming from poor or neglected equipment management. These issues cascade into missed deadlines, reduced team satisfaction, and diminished bottom-line results.

Responses from media professionals also illuminate overlooked struggles, such as reliance on spreadsheets or insufficient tracking systems that lead to equipment downtime and project delays. Equipment downtime isn’t just a technical glitch; it’s a preventable problem that wastes hours of your team’s time and risks damaging client trust.

Key takeaway? Equipment management isn’t just a line item. It’s central to delivering quality output, protecting operations, and enabling your teams to succeed.

Key Challenges in Media Technology Asset Management

From our research, several challenges commonly plague organizations when managing assets:

  • Loss and Damage: 64% of teams frequently encounter broken or missing items right when they are needed.
  • Complex Booking Flows: Reservations and checkouts are needlessly time-consuming and conflict-prone.
  • Underutilized Tools: Many organizations rely on outdated spreadsheets, which fail to scale with rapidly growing needs.
  • Limited Visibility: Gaps in tracking lead to inefficiencies, miscommunication, and missed opportunities.

This status quo fosters a culture of “good enough.” But amidst industry pressures and tightening budgets, leaders must ask themselves whether their asset management approach is actually working for them or whether it is just a short-term band-aid solution. If it’s the latter, they are potentially losing thousands every year.

Why Integrations Matter More Than Capabilities

Many of our survey respondents identified integration as their primary concern when selecting asset management tools, surpassing even functionality. Why? Asset management in modern media organizations cannot operate as a silo.

When an Enterprise Asset Management (EAM) platform seamlessly integrates with other essential platforms, it enhances the efficiency of all your tools and processes, making teams more productive and equipment last longer.

  • No more isolated data and project-planning silos: Cross-platform communication ensures everyone—from finance to operations—sees the same data, enabling effective collaboration.
  • Automated workflows: Simplifying administrative overhead gives teams back valuable time.
  • Real-time tracking: Knowing where your gear is ensures leaders can act proactively to maintain project timelines and avoid surprises.

The bottom line? Integration transforms physical equipment from being “invisible infrastructure” into a competitive advantage. No more lost or underutilized items with no accountability trail.

4 Steps to Resilient Asset Management in 2025

As media technology advances, asset management priorities will evolve. To stay ahead, organizations must commit to the following:

  1. Rethink Processes

Audit current workflows and identify bottlenecks. Are you wasting time searching for equipment or processing check-ins manually? Automating audit and check-in tasks can recover hundreds of hours per year.

  2. Invest in Real-Time Data

Adopt asset-tracking systems that offer live updates on location, usage, and availability. This visibility reduces downtime and improves decision-making.

  3. Focus on Training

Technology is only as effective as its users. Provide onboarding and continuous support to ensure adoption of tools and processes.

  4. Adopt Preventative Strategies

Minimize downtime by scheduling maintenance before it’s critical. Use automated tools to manage warranties, service needs, and lifecycle planning.

The most resilient organizations of the future will view asset management not just as a means of maintaining operations but as a strategic tool for scaling their business. Success tomorrow depends on implementing robust systems and strategies today.

Cheqroom Overview

Cheqroom is a leading asset management platform that helps organizations share and track mission-critical equipment across teams and locations. The platform prevents equipment loss, increases productivity, maximizes asset value, and enables confident growth. From media and entertainment companies to universities and Fortune 100 firms, Cheqroom serves thousands of organizations managing $5 billion in assets worldwide.

Want to learn more? Visit cheqroom.com to download the full State of Enterprise Asset Management Report to see how top MediaTech teams are turning asset resilience into a competitive edge.

Read the full report here: https://www.cheqroom.com/tools-ebooks/state-of-enterprise-asset-management-report-2025/

Calrec – Standing on the Shoulders of Giants

Sid Stanley, Managing Director, Calrec

Covid taught the broadcast industry many things, and the requirement to quickly adapt to new ways of working is perhaps its lasting legacy. As broadcasters scrambled to get content on air, it was a stark reminder that companies need to ensure their business, supply chains and technologies are agile and flexible enough to be able to withstand different kinds of widescale disruption.

Since then, companies across the industry have come to value the importance of collaboration as a key business driver. As Henry Ford famously said, “Coming together is a beginning, staying together is progress, and working together is success”. He should know: he forever revolutionized the transport industry, and his legacy of affordability and reliability still stands strong.

Technology shifts in the media industry might not generate the same headlines as motor cars, but arguably impact more people’s lives, and the speed of change is continuing to accelerate. The move to IP has enabled broadcasters to work remotely, delivering immersive and personalized audio, whilst leaning into the cloud for additional resources.

Collaboration builds resilience, and the expansion of production ecosystems over the last five years has helped broadcasters strengthen their infrastructures. As the adoption of sustainable, cost-effective, and flexible remote and distributed production workflows continues to develop, the ability to control any system from anywhere is creating more agile ways to work. Providing access to more audio cores, more faders, more surfaces, and more control from any location gives users ultimate flexibility in delivering content to air.

These distributed workflows are designed to meet the needs of the production rather than the other way around. In a world where consoles no longer operate in isolation, networks necessarily become more complex, with remote and distributed workflows locating IP processing cores on site, at the edge, and on prem. In this new flexible broadcast environment, being agile makes all the difference.

Cost-efficient Resources in the Cloud
Another valuable tool in the remote distributed production toolbox is the ability to process and mix audio in a cloud environment. More recently, the growing acceptance of resources in the cloud has also encouraged broadcasters to spin up cost-efficient cloud processing for one-off productions, with no extra CapEx investment in additional hardware. Cost-efficient resources in a cloud-native environment like Calrec’s ImPulseV deliver even more flexibility, especially for ad hoc single productions that need more processing power. It also creates an environment that meets broadcasters’ precise needs.

This has led to an increase in more ambitious large-scale orchestration systems and virtualizing of productions, and distributed DSP that enables large mixers with thousands of channels of audio to be replaced by lots of DSP in lots of different places. This gives more control to broadcasters by allowing a single surface to control multiple DSP engines located anywhere, providing versatility and virtualization. It is also driving down cost – the traditional model of buying enough processing for your biggest event of the year is redundant when a virtualized DSP engine can deliver the audio quality and feature set in a cloud-native environment.

The big advantage remote and distributed production delivers is the flexibility to locate both equipment and personnel resources where they can be used most efficiently. This could be as simple as allowing an audio operator to mix from home or as complex as a distributed facilities hub where studio-based facilities are used to produce multiple events from various locations. Both save significant time and travel expense, and bring further benefits when equipment resources can be pulled from multiple locations to meet the needs of the production. Utilizing remote technologies can create endless combinations of control and processing components that can adapt to any size of production, matching the resources required to the creative needs of the specific content.

More Choice Means More Control
The expansion of these production ecosystems is where the real value is for broadcasters. Calrec’s True Control 2.0 exploits this trend, giving the ability to control any Calrec system from anywhere in the world. It provides access to more cores and more surfaces, wherever they are. In fact, each controller console can access up to five other consoles simultaneously to give broadcasters much greater levels of remote control without the limitations of mirroring or parallel controlling. In addition to ImPulseV, True Control 2.0 works with Argo M, Argo Q, Argo S and Type R, allowing any of these products to remotely control any other True Control 2.0 enabled product.

It enables broadcasters to embrace distributed production workflows to create endless combinations of control and processing components, making the best use of existing production equipment to meet the production requirements of every live event.

Calrec’s unique relationship with its customers and its adaptability working with other manufacturers in an IP environment has enabled it to help broadcasters define and embrace creative new ways of working. Knowing everything will work seamlessly together develops stronger working relationships with trusted technology partners. And because it’s those vendors who are in the best position to adapt, improve and develop their specialist technologies, it ultimately delivers more choice and more control over how we work and the agility to adapt how we control everything.

As Henry Ford discovered, it always was and always will be the job of a technology manufacturer to anticipate change, and manufacturers like Calrec don’t design equipment for the present; they design for the future.

 

 

Appear – Firewalling in the Age of IP: Rethinking Security for Live Media Workflows

Ian Wagdin, Vice President of Technology & Innovation at Appear

As live production workflows shift towards IP and cloud-based models, the security considerations facing broadcasters and media companies are also evolving. Where operations were once confined to private, closed networks, today’s environments often depend on public infrastructure and remote collaboration. This move brings clear advantages in terms of flexibility and scalability – but also introduces new risks.

Firewalling has long played a role in securing digital workflows, but traditional approaches were not designed with media transport in mind. With live content increasingly delivered over IP, and often over the public internet, firewalling strategies must be scrutinised to ensure they can support the performance and security needs of modern media delivery.

More Connectivity, More Exposure

In the past, production environments operated within a contained network perimeter. Contribution feeds, playout chains, and editing systems were rarely exposed to external networks. That has changed. From cloud playout to remote contribution, content now moves across networks that are not always under direct control.

A single security incident can lead to serious repercussions – ranging from unauthorised access to content, to disrupted services and lasting reputational harm. Standard enterprise firewalls, typically built for broader IT functions like VPN access, general network defence, and web filtering, are often not equipped to handle the specific performance and reliability requirements of high-bitrate, low-latency media workflows.

Firewalling for Real-Time Media

Generic firewalls are typically built for common enterprise traffic such as web browsing, email, and file transfers; they’re not optimised for real-time video or audio streams. Media applications operate under different conditions, with specialised requirements that such firewalls often struggle to meet, including high data rates, stringent latency targets, and the need for seamless redundancy.

To be effective in live media environments, firewalling needs to account for both the data and control planes. The data plane is responsible for transporting media streams and is highly sensitive to issues such as latency, jitter, and packet loss – any of which can impact the quality and reliability of live content. The control plane, on the other hand, manages session initiation, signalling, and device access. It plays a crucial role in orchestrating media workflows and is often a target for attacks aimed at disrupting services or compromising system credentials. Both layers require tailored security measures to ensure seamless and secure operation.

A firewalling solution for live media must do more than just block traffic. It needs to maintain throughput, allow for redundancy, and work in conjunction with other tools like NAT, VLANs, and traffic optimisation.

Implementation Challenges

Although best practices exist, uptake across the industry has been uneven. Concerns around added latency, configuration complexity, or disruption to workflows have led some organisations to delay implementation.

But the shift to public internet and cloud-based distribution makes this approach increasingly critical. Major events, including sports, entertainment and news, are routinely streamed over networks that are vulnerable to external threats. Without purpose-built firewalling, media organisations are exposed to cyber threats that could disrupt their content delivery.

We also have to think about how we manage the traffic itself, as broadcast workflows often rely on multicast. To configure a firewall for unicast and multicast traffic, you need to create rules that allow or block traffic based on its source, destination, and protocol. For multicast, you’ll need to ensure multicast (and the IGMP group-membership traffic it relies on) is enabled and allow only the specific multicast group addresses you use. Unicast traffic can be configured with standard firewall rules based on IP addresses and ports.
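As a rough illustration – a minimal sketch rather than a recommendation for any particular product, with the interface name, multicast range, source address and port range all hypothetical – the following Python snippet builds example iptables rules that admit IGMP membership traffic, the approved multicast group range and a known unicast contribution source on the media ports, and drop everything else arriving on the media interface:

    # Illustrative sketch only: builds example iptables rules for a media-facing
    # firewall. Interface, addresses and ports are hypothetical, not a deployment.

    MEDIA_IF = "eth1"                  # hypothetical media-facing interface
    MULTICAST_GROUP = "239.10.0.0/16"  # hypothetical multicast group range
    UNICAST_SOURCE = "203.0.113.10"    # hypothetical contribution encoder
    MEDIA_PORTS = "5000:5100"          # hypothetical UDP port range for media

    rules = [
        # Control plane: allow IGMP so receivers can join multicast groups.
        f"iptables -A INPUT -i {MEDIA_IF} -p igmp -j ACCEPT",
        # Multicast data plane: only the approved group range on the media ports.
        f"iptables -A INPUT -i {MEDIA_IF} -p udp -d {MULTICAST_GROUP} --dport {MEDIA_PORTS} -j ACCEPT",
        # Unicast data plane: only known sources on the media ports.
        f"iptables -A INPUT -i {MEDIA_IF} -p udp -s {UNICAST_SOURCE} --dport {MEDIA_PORTS} -j ACCEPT",
        # Default stance: drop anything else arriving on the media interface.
        f"iptables -A INPUT -i {MEDIA_IF} -j DROP",
    ]

    for rule in rules:
        print(rule)  # review the rules (or feed them to a config tool) rather than apply blindly

Whatever enforcement point is actually used, the principle is the same: media rules are scoped to known groups, sources and ports rather than relying on a general-purpose default policy.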

Managing firewall rules across multiple sites or partners adds another layer of complexity. A centralised management interface can help simplify policy deployment and ensure consistency across distributed infrastructure.

Designing for Media-Specific Needs

In response, more media-aware security tools are becoming available, built to support the performance expectations of broadcast and live-streaming environments. These solutions incorporate advanced firewalling techniques, including optimised traffic management, so that large-scale IP video streams can be secured without adding latency or creating performance bottlenecks.

Additional features such as de-jitter buffers, forward error correction (FEC), and bitrate policing help stabilise streams and manage network behaviour. ST 2022-7 support allows for seamless failover, while conformance to standards like SMPTE RP 2129 further strengthens security by allowing only authenticated and authorized traffic to pass through designated perimeters.
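To give a feel for one of those mechanisms, bitrate policing is commonly implemented as a token bucket. The sketch below is purely illustrative – the rate, burst size and packet size are hypothetical, and real policers run in the network element’s data plane rather than in Python:

    import time

    class TokenBucketPolicer:
        """Minimal token-bucket policer: packets conform while tokens remain."""

        def __init__(self, rate_bps: float, burst_bytes: float):
            self.rate = rate_bps / 8.0   # refill rate in bytes per second
            self.burst = burst_bytes     # maximum bucket depth in bytes
            self.tokens = burst_bytes    # start with a full bucket
            self.last = time.monotonic()

        def allow(self, packet_bytes: int) -> bool:
            now = time.monotonic()
            # Refill tokens for the elapsed time, capped at the burst size.
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_bytes <= self.tokens:
                self.tokens -= packet_bytes
                return True              # conforming packet: forward it
            return False                 # non-conforming packet: drop or mark it

    # Example: police a contribution feed to roughly 50 Mbit/s with a 1 MB burst.
    policer = TokenBucketPolicer(rate_bps=50_000_000, burst_bytes=1_000_000)
    print(policer.allow(1316))  # 1316 bytes = a typical 7-packet MPEG-TS/UDP payload

De-jitter buffering and FEC sit alongside this: they smooth packet arrival times and recover lost packets rather than constraining rate.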

These tools do not eliminate risk, but they provide a foundation for more resilient media operations – especially when integrated into broader infrastructure planning from the outset.

Security as Part of Infrastructure Design

As workflows become increasingly decentralised, security must be considered at the architectural level. It’s no longer sufficient to treat firewalling as an isolated task managed by the IT team – it needs to be part of how media systems are designed and maintained. The next generation of firewalling for live media must deliver strong security while also maintaining smooth, uninterrupted performance for high-bitrate, real-time workflows.

A firewalling strategy that’s fit for purpose can help ensure operational continuity and reduce the likelihood of service disruption. In a sector where timing, reliability, and trust are essential, getting security right is no longer optional.

At Appear, all our hardware platforms are designed from the ground up to support current security practices. The X10 and X20 both support 10G bi-directional IP interfaces that provide firewall-grade IP security at every connection node, able to monitor and regenerate traffic as required.

Bridging the AI skills gap: essential knowledge for media professionals

Damon Neale, Metadat-AI

The media landscape is in the midst of a seismic shift. Artificial Intelligence (AI) is the game-changer that’s reshaping how media is produced, distributed, and consumed. Whether you’re in advertising, journalism, content creation, or any corner of the media world, there’s no escaping it: AI is here, and it’s not just for the tech gurus. The most challenging aspect is that once again, even our most seasoned media professionals may need to play catch-up on another emerging technology that is becoming pervasive in everything we do.

Why AI matters in media

First things first, let’s clear the air – AI isn’t about replacing human creativity or turning our jobs into a sci-fi dystopia. It’s about enhancing what we do best. Imagine automating the repetitive, time-consuming tasks (hello, keyword research and data analysis) so you can focus on the big picture – creating killer content that resonates.

AI is making waves across the board. It’s powering personalized content recommendations on platforms like Netflix, optimizing ad targeting, generating data-driven insights for newsrooms, and even helping with mundane tasks like hunting through hundreds of clips or transcribing interviews. Recent job market data show that demand for AI-savvy professionals is skyrocketing. It’s clear: those who understand AI and its applications are not just valuable; they’re essential.

The skills gap – and why it’s growing

So, what’s the deal with this AI skills gap? In simple terms, the media industry is evolving faster than the workforce can keep up. Traditional media education hasn’t caught up with the rapid advancements in AI technology – and, to be fair, neither has IT education this time around, because the pace of change is faster than ever. Most professionals have a solid grasp of the basics – editing software, social media strategies, perhaps a dabble in analytics. But when it comes to AI, many find themselves in uncharted waters.

And it’s not just about knowing how to use AI tools – it’s about understanding them. The difference between someone who can operate an AI tool and someone who can use it effectively and strategically is the difference between staying afloat and leading the pack. AI literacy means knowing when to use AI, which tools to pick, and how to interpret the data they spit out. It’s about making informed decisions that enhance creativity and efficiency, not just following what the algorithm suggests.

Essential AI Knowledge for media pros

Let’s break down what you actually need to know. No, you don’t have to become a full-fledged data scientist, but a little knowledge goes a long way.

  • Understanding AI basics:

Get a solid grasp of the essentials – what AI truly is, the mechanisms behind machine learning, and why data serves as the lifeblood of AI systems. It’s not about becoming a data scientist, but rather about understanding the core concepts that drive AI technologies so you can understand what it is good at (and not so good at) and where best to apply it. Knowing these basics will make you more comfortable using AI-powered tools and equip you to have meaningful conversations about data and technology in your day-to-day work.

  • AI in content creation:

AI is revolutionizing content creation, making it more efficient and opening up more creative choices. Familiarize yourself with tools that can automate content generation, optimize SEO, and tailor audience engagement strategies. For instance, AI can help in curating personalized content, suggesting the best keywords, or even predicting what your audience will respond to next. Knowing how to leverage these tools can give your content a competitive edge, allowing you to focus on creativity, strategy and the best content while AI handles the grunt work.

  • Data interpretation:

AI churns out vast amounts of data, but the real value lies in how you interpret it. Learning how to get AI to summarize data and then how to read and adapt AI-driven analytics can transform how you approach content strategy, campaign adjustments, and success measurement. Instead of getting lost in numbers, focus on extracting actionable insights that align with your goals. This skill will empower you to make data-driven decisions that resonate with your audience and drive tangible results.

  • Ethics and AI:

With the increasing influence of AI in media, ethical considerations are more crucial than ever. AI systems can unintentionally reinforce biases if not carefully monitored, especially when it comes to training data. Legally, data sets might include unlicensed media or culturally sensitive content that could pose business risks. Media professionals must be vigilant about these risks, particularly in areas like content moderation, audience segmentation, and news reporting. Understanding the ethical implications ensures that your use of AI not only enhances efficiency but also upholds fairness and integrity and doesn’t land you in a legal mess down the line.

  • Staying updated:

AI technology is constantly evolving, and what’s advanced today might be outdated tomorrow. To stay one step ahead of the game, make it a habit to regularly update your knowledge and skills. Whether it’s attending workshops, following industry leaders, or experimenting with new tools, staying informed should be as routine as catching up on the latest headlines. This ongoing learning process will keep you agile and ready to adapt to the continually changing influence of AI in media.

Bridging the gap: it’s easier than you think

Now, here’s the good news. Bridging the AI skills gap doesn’t mean going back to school for a degree in computer science. There are resources designed specifically for media professionals who want to level up their AI game without feeling like they’ve signed up for a full-time course.

Learning can be made more approachable and accessible through AI-powered tools, hands-on workshops, and online courses designed specifically for those in the media. This is why Metadat-AI and IABM teamed up to produce an AI course that can easily be completed on your lunch breaks within a week. Whether with us or not, the key is to start somewhere – even small steps playing with new tools can make a big difference. And as you get more comfortable with AI, you’ll find that it’s not just a tool, but a partner that can raise your work to new heights.

Embrace the future

AI is not a passing trend – it’s the future of media. The sooner you grasp it, the more prepared you’ll be for the challenges and opportunities ahead. Bridging the AI skills gap is about staying relevant, competitive, and progressive. So, whether you’re just dipping your toes into the AI pool or ready to dive in, now’s the time to start building those essential skills.

Remember, the goal isn’t to become an AI expert overnight, but to equip yourself with enough knowledge to use AI effectively and confidently. After all, the best media professionals are the ones who know how to blend the art of storytelling with the power of technology. And with AI in your toolkit, there’s no limit to what you can create.

 

Young Person of the Year – bring on the talent

Every year at the IABM Annual Awards, IABM recognizes the Andrew Jones Young Person of the Year with a special award. The competition for the 2024 award was particularly intense. In the end, the award went to Ciaran Ennis, Associate Engineer at Techex. However, any of the shortlisted candidates would have been worthy winners in their own right.

We spoke to the shortlisted candidates to find out what brought them into the MediaTech industry, what they enjoy about it and how they see their futures unfolding. They come from a wide variety of backgrounds, and personal drive is a common factor for all of them – as is the support and mentorship they have received. What follows is an inspiring read in its own right, and it also provides an excellent insight into how to foster the new talent our industry needs to continue to thrive.

The award is named for former IABM Head of Training, the late Andrew Jones, who dedicated his career to attracting and fostering young people into our great industry. Andrew’s motto was ‘Never stop learning’. Reading this article would definitely make him smile!

The shortlisted young people interviewed are:

  • Alex Locking – Senior Graphics Operator, MOOV

  • Ben Killackey – Technical Supervisor, Sky

  • Ciaran Ennis – Associate Engineer, Techex

  • Rares Paunescu – Broadcast Engineer Technology, BFBS

  • Sophie Humphrey – Associate Technical Supervisor, Sky

What is your educational background?

 Alex Locking:

I went to a comprehensive secondary school in Maidstone, Kent, where I stayed for 6th form to study my A-levels. I then studied Film, Radio and Television Studies at Canterbury Christ Church University, where I achieved a First Class BA Honours.

Ben Killackey:

Throughout secondary school I was really interested in all things computer science; however, I wasn't sure if it was something I wanted to do as a career. Alongside this I'd always loved watching big sporting and entertainment events on TV and taken an interest in trying to understand how everything worked – but it never clicked that this could even be a job.

Ciaran Ennis:

I completed my undergraduate studies at the University of Surrey, where I enrolled in a course initially called Film and Video Production Technology (BSc Hons). By my final year, the program had evolved and was renamed Film Production and Broadcast Engineering (BEng) to better reflect its robust mix of theory and hands-on learning. The curriculum was incredible, with a strong focus on industry-standard technologies and concepts.

The course offered access to state-of-the-art facilities like a new TV studio and sound stage, alongside instruction from an award-winning faculty with deep expertise in film, TV, and audio production. Being IET-accredited, it was an ideal blend of engineering principles, artistic storytelling, and practical production skills tailored to the creative industries.

Interestingly, during my time at Surrey, I discovered a passion for the technical aspects of filmmaking—something I hadn’t anticipated when I first joined. Initially, my love was solely for film creation, but as I delved into the engineering side, I realized that’s where my true enthusiasm lay.

Rares Paunescu:

I completed my Computer Science (Hons) degree at Coventry University and have now finished my Master’s degree with Merit in Digital Media at London Metropolitan University. I had wanted to do a Master’s ever since I finished my Bachelor’s in Computer Science, but Telos Alliance came along with an internship at Broadcast Bionics’ office. I wanted a Master’s that was technology and media related – I already had a strong foundation in Computer Science from my Bachelor’s, and I wanted something that combined my two passions, technology and media, so Digital Media was the best fit. The challenging part was that during my second semester I was studying full-time and also working full-time at BFBS (British Forces Broadcasting Service), where I started in March 2024, but I still finished my Master’s with a Merit classification, even though juggling full-time study and full-time work was not easy.

Sophie Humphrey:

I did a BTEC in performing arts at college. After a break, I went back into higher education, completing a degree in Television Production at Solent University.

How did you get into the industry? - did you always have an eye on a career in media technology?

Alex Locking:

I always knew from my teens that I wanted to work in media and production, so I chose Media Studies for both GCSE and A-Level, which confirmed my desire to enter this industry. I chose my specific degree because I wanted a 'hands-on' approach to learning about equipment and the live production environment, rather than focusing on theory-based writing.

I got into the industry through my university; we had a module in which alumni would come and give a presentation about their careers and what they had done since leaving uni. One alumnus works for Moov as the scheduler, and after hearing the type of work Moov are involved in I was extremely keen to get involved however possible. So, I contacted my now colleague and we arranged a Zoom call, where he offered me some part-time work at BT Sport at weekends. Studying the final year of my degree coupled with working both days every weekend at BT Sport was difficult, and I had very little time outside this, but it was worth it: as soon as I finished my final year I was offered a full-time role at Moov, where I still am today, nearly three years later, as a senior member of the team.

Ben Killackey:

Whilst looking at post-GCSE options, my parents happened to come across a course at a local college studying Creative Media Production & Technology, and that's when everything clicked; I realised it was a potential career path I could take. From there, I went on to study Television Production at Solent University, where I became heavily involved in student TV through the university's student-run production company, Sonar Events. It actually took a first-year trip to my current employer, Sky, to open my eyes to media technology. I'd only ever known the more general TV roles such as directing and camera operating; the opportunity to tour such a facility and meet people who work in TV but are also technical was a real eye-opener.

Ciaran Ennis:

From a young age, I’ve been passionate about both media and computer science. In school, I was always drawn to creative projects and problem-solving through technology. When it came time to choose a university course, I realized I didn’t have to pick just one path—I could combine my two passions. That’s how I discovered the Film and Video Production Technology course. It felt like the perfect fit, and from there, everything just fell into place. The rest is history!

Rares Paunescu:

At the age of 9, I was “infected” with a very beautiful passion called “RADIO” by my father. At that age I started to develop my new love, trying to learn the basic principles of a broadcast radio station. When I was 10 years old I was playing with a broadcast console and radio automation software. That passion kept growing until, 14 years ago, I begged my dad to buy an FM licence so I could have my own radio station, called Dream FM. From that point, I started to learn how to do things professionally on a radio station. I love everything about radio, from the music, playlists, jingles, sweepers, radio imaging and radio production, right up to the transmitters, antennas, dipoles...etc.

This radio station is like my own child, I must do everything for it. The cool thing is that I really enjoy doing all the work, meaning: playlist, radio production, traffic, website, iOS App, IT issues, and of course also Broadcast Engineering issues.

Sophie Humphrey:

I didn’t always have my eye on a career in the media technology industry. I originally wanted a career in the acting world, and did a term at the University of Chichester to study it. My eye had subconsciously shifted a while before I left, as I kept looking at TV apprenticeships at the BBC and Channel 4, while at the same time getting audience tickets for a couple of TV show recordings, which sparked my interest even more by showing me how things ran behind the scenes. After dropping out I applied for all the TV schemes, including MAMA Youth, but didn’t have much luck because of the pandemic. After working in retail and hospitality, I found a course at Solent University in Television Production. Whilst at Solent I got involved in many jobs within and outside university, including networking events, which all cumulatively led me into the media tech industry.

What do you enjoy the most about your job?

Alex Locking:

I enjoy pretty much every aspect of my job, but in particular the team I have around me is brilliant; when I was new there would always be someone to help or provide support. Everyone here is very easy to work with and get along with. The other part I love about my role is the events I get to work on and the locations I get to visit. In two and a half years I have worked on Wimbledon twice, the Olympics, many Champions League football games including two finals (in Istanbul and at Wembley), and multiple huge box-office boxing fights in both the UK and Riyadh.

Ben Killackey:

I love being in an environment where I can always keep learning and staying up to date with the latest technologies and developments - all whilst working on some big sporting events!

Ciaran Ennis:

One of the things I love most about working at Techex is the sheer variety of responsibilities I get to take on. Every day is different, whether I’m working on baseband video adaptation, deploying cloud infrastructure, diving into networking and IP video, or exploring compression technology. My role spans a wide array of cutting-edge products and technologies, which keeps the work exciting and intellectually stimulating.

Over time, I’ve built a strong foundation in these areas, including expertise in DevOps practices using RESTful APIs, Ansible, and Python, as well as container-based workflows like Docker and Kubernetes. I’ve also had the opportunity to work on OTT and next-generation contribution workflows, European and North American IPTV projects, and SDN and orchestration project delivery. It’s incredibly rewarding to see how these projects come together to drive innovation in the industry.

What makes it even better is the collaborative environment at Techex. Despite being a relatively small company, we’re making a big impact, and that’s a testament to the hard work and dedication of the team. I’m especially grateful for the support I’ve received from Ben Somerville—he’s been instrumental in my growth. From a timid graduate to a confident and capable engineer, his mentorship has played a huge role in shaping my career.

Rares Paunescu:

The variety of technical challenges – and, of course, something always happens when I least expect it. Every day is a school day! That's what I LOVE the most about my job!

Sophie Humphrey:

Coming from a more production background, live broadcast technology was the thing that eventually ended up getting my brain ticking over. I love learning about how things work and being able to fault find issues. No two shifts are the same, there is always more to learn.

What advice would you give to other young people about getting into and working in media technology?

Alex Locking:

There are a few pieces of advice I would give to young people wanting to work in this industry. Firstly, focus on learning a skill, or multiple skills, that will put you in contention for roles. For example, during lockdown I spent hours working on editing and motion graphics in the Adobe suite. This allowed me to put together a portfolio of work without having to work on professional-scale productions. In this industry a showreel is far more important than a CV, as it shows both your skills and your personal touch. Secondly, use any resources available to you to make contacts in the industry; it’s quite a small world, so people will remember names. Finally, whatever skills or experience you do have, big yourself up! Use LinkedIn, and make a showreel or portfolio to show off your skills, no matter what they are or how few they may be. Everybody has to enter the industry with no experience, so use what you do have to make people notice you.

Ben Killackey:

Push yourself out of your comfort zone and say yes to every opportunity! I always found I learnt the most, and everything felt the most rewarding, when I pushed myself to say yes to an opportunity just outside my comfort zone – and don't be afraid to ask questions! University helped guide me towards opportunities, but it's all about pushing yourself to say yes.

Ciaran Ennis:

My biggest advice would be to stay curious and never stop learning. Media technology is a fast-paced industry that constantly evolves, so having a genuine passion for learning about new tools, systems, and approaches is essential. Don’t be afraid to dive into areas that seem complex or unfamiliar—those are often the most rewarding to master.

Networking is another piece of advice I’d emphasize. Engage with industry professionals, attend events, and join online communities. The media tech world is smaller than you might think, and building strong connections can open unexpected doors. Something I wish I had done more of at University.

Lastly, find mentors or role models who inspire you and can help guide your growth. I’ve been fortunate to have incredible support in my career, and that’s made a huge difference. And remember, everyone starts somewhere—confidence comes with time, practice, and the willingness to tackle challenges head-on.

Rares Paunescu:

To be serious, to work hard, and to stop playing around. 🙂

Sophie Humphrey:

Do it, it’s a brilliant industry and people are willing to help you if you show interest and drive to do well. I would say find networking events, I have gained a lot from SMPTE and Rise events. Shadow as many people in different roles as possible to see what you like and don’t like, what you can see yourself doing in a future role. The more people you get to know the more you find out about individual industry jobs that you probably haven’t heard of. Take every opportunity, be outgoing - message people for advice or an insight into their job, 9/10 times they will more than likely reply and want to help you.

Where do you see yourself in 10 years’ time?

Alex Locking:

In 10 years’ time I sincerely hope I am still in the industry, maybe even doing something that has nothing to do with sports broadcast or graphics. I love the pace, intensity and creativity of this industry – entertaining viewers and captivating audiences by delivering moments in the best way possible.

Ben Killackey:

Ooh tough question! So long as I'm still learning, developing and having fun whilst doing so I'll be happy!

Ciaran Ennis:

In 10 years, I hope to be in a managerial role. I’ve discovered a real passion for working with people, and I’d love the opportunity to guide and support others in their careers, much like the incredible mentorship I’ve received. Helping others grow, giving them the same opportunities I was fortunate to have, and fostering a collaborative and innovative environment is something I’d find deeply rewarding.

Of course, I also want to continue expanding my technical expertise and staying at the forefront of media technology. But ultimately, combining leadership with technical innovation is where I see my future—and where I believe I can make the biggest impact.

Rares Paunescu:

In 10 years’ time I see myself growing with the company, contributing to its success, and helping to grow the broadcast industry.

Sophie Humphrey:

That is a massive question but generally I hope to build on my broadcast technology knowledge and career, maybe work on some cool big events like the Olympics or some mainstream music festivals.