Veset – The appeal of niche and nostalgia

Lelde Ardava, COO, Veset

In an increasingly crowded broadcast landscape, OTT services and broadcasters face the unending challenge of capturing and maintaining viewer attention. Strategies to grow viewership and engagement include expanding into new geographical regions, launching on new platforms, and investing in big-budget original content, though the latter isn’t always an option. For many broadcasters, another strategy that has emerged as effective at drawing audiences and improving engagement is the delivery of content that caters to niche audiences and fandoms.

Power of niche offerings

Of course, delivering niche content that appeals to a very specific audience or fanbase is far from a new phenomenon. Niche channels have in fact been around for decades – take MTV or the History Channel for example. These networks recognized that launching niche offerings to appeal to passionate audiences not only helped to enhance viewer loyalty and enthusiasm but also created an opportunity to monetize back catalogue content. This strategy allowed networks to keep audiences entertained while maximizing the value of existing content libraries.

These days, networks, broadcasters and video providers stand to gain the same benefits from launching niche channels and services as those early networks did. Recent years have seen a whole host of niche channels and streaming services launch, such as Funimation, a subscription VOD service for anime shows and movies, and Mórbido TV, which caters to a growing audience of horror fans across Latin America.

Niche content is a great route to engage audiences because viewers are already passionate about the subject. This makes them more likely to engage with the content, so they watch for longer, keep coming back, and even interact with discussions about the content on social media. This type of interaction helps to promote the channel and create a buzz without the high costs associated with traditional marketing strategies. Another perk of niche content is that it can also attract advertisers looking for precise audience targeting, which in turn helps to improve ad revenue.

While not new, niche services remain an attractive strategy for broadcasters and video providers because they provide a way of leveraging existing content to create a new monetization stream, increasing profitability without major investment. This is particularly appealing given the current economic climate that is causing churn rates to be high and squeezing profit margins. The wide array of niche streaming services and channels in operation demonstrates the viability of this approach.

Nostalgia: a powerful emotional hook

Just as niche content channels and services allow broadcasters and OTT services to engage dedicated and passionate viewers, the appeal of nostalgic content is another powerful strategy that can keep viewers engaged and connected. Classic TV shows and movie franchises are seeing a resurgence as audiences seek out content that reminds them of their past, bringing with it a sense of comfort and familiarity. Another appeal of old content for many viewers is that it brings a more laid-back kind of viewing. Viewers can engage with older content easily while doing something else at the same time. New content on the other hand demands the viewer’s full attention, a higher level of commitment if you like, which the viewer will not always be in the mood for.

Several factors contribute to the ongoing success of nostalgia-based content. Viewers who grew up watching and enjoying certain shows or movies have an emotional connection to them, which makes them more likely to revisit those titles again and again. Those viewers are also likely to introduce old favorites to younger generations.

The appeal of nostalgic content is so great that broadcasters are not only reshowing much-loved classics but also investing in licensing legacy content to attract both new and returning viewers. A number of broadcasters have also invested in remaking old classics such as Hawaii Five-O, Danger Mouse and Dragnet to appeal to modern audiences in the hope of building a loyal fanbase.

Role of FAST channels

The FAST (Free Ad-Supported Streaming TV) business model is a perfect partner for both niche and nostalgic content, and so it’s not surprising that there’s been a huge increase in FAST channels in recent years. These curated linear format channels are often focused on niche and/or nostalgic programming and may offer episodes from a single show, or from a range of shows all within the same genre or field of interest. These channels are free to watch so are highly appealing for those viewers who enjoy particular shows, genres or interests, or who are passionate about a specific subject, but who may not want to pay for a subscription.

By leveraging SaaS playout platforms to launch FAST channels quickly and at minimal cost, broadcasters can repurpose existing content libraries to generate ad revenue. Broadcasters can create themed programming channels, such as an action movie channel, a dedicated classic sitcom stream, or a channel showing a long-running, much-loved series. One broadcaster that has leveraged this model with great success is ITV, which has launched many FAST channels based around single shows such as Hell’s Kitchen US and Love Island. This strategy not only appeals to loyal fanbases but also maximizes content monetization potential in a cost-effective way. Additionally, FAST channels can be used to promote a broadcaster’s other channels and services, as well as to drive viewers to take up premium offerings.

Reshaping engagement strategies

As competition for viewers intensifies, niche and nostalgia-driven content both offer viable pathways for capturing and maintaining viewer attention, in a cost-effective way. By catering to specific passions and emotional connections, broadcasters and video service providers can build stronger viewer loyalty, differentiate themselves in the crowded market, and create lasting engagement strategies. Whether through hyper-targeted content for dedicated fanbases or tapping into the emotional pull of the past, this approach is reshaping engagement strategies as broadcasters continue to adapt and evolve in line with market conditions and viewer preferences.

 

Tuxera – The evolution of creative infrastructure

Why storage technology matters more than ever

Ned Pyle, Enterprise Storage Technical Officer, Tuxera

The media and entertainment industry is experiencing unprecedented growth in data demands. With an estimated 250-300% increase in media data over the last five years, and increasing adoption of 4K workflows, traditional approaches to storage and file sharing are struggling to keep pace with modern creative workflows. From post-production houses exchanging terabytes of data with global partners, to broadcasters managing time-critical content for live events, the ability to move and access massive files swiftly and securely has become critical to business success.

As someone who has spent years developing network file sharing protocols, I’ve witnessed firsthand how technical infrastructure can either empower or hinder creative teams. The reality is stark: every minute spent waiting for files to transfer or load is post-production paused and money lost.

The infrastructure challenge

Today’s media workflows demand more from their infrastructure than ever before. Virtual production requires real-time rendering at high resolution and immediate access to multiple versions of scene changes and assets. Post-production teams need to transfer large files between workstations for editing, color correction, and VFX work. Broadcasting operations require rapid ingest and distribution of content for live events. The need to transfer, centralize, and share these collections requires a fabric with high bandwidth, low latency, and considerable throughput.

Real-time rendering necessitates the lowest possible latency and rock-solid reliability to ensure frame rates are maintained. Traditional TCP (Transmission Control Protocol) network solutions simply cannot stand up – they’re too slow, too processor-heavy, too congested. The biggest problem with open-source SMB (Server Message Block) products on Linux is they have not evolved or kept up. These huge datasets require a modern solution that can compress data, offload processing, and fully utilize the biggest networks, all while operating on clients running Windows, Mac, or Linux.

SMB has been the primary remote file protocol of macOS for twelve years, part of the core Linux kernel for more than twenty years, and present in Windows since the 1990s. The protocol has evolved from humble workgroup beginnings to SMB 3, a powerful data fabric designed to take advantage of the latest networking and storage innovations, with the latest security and scalability options. When fully and properly implemented, it can run equally well on huge clusters or tiny containers.

Next-generation networking

Time is money. The speed of sharing large datasets is key to productivity so that the valuable time of experts is spent working, not waiting. When infrastructure bottlenecks force teams to wait for files to transfer or load, it doesn’t just waste time – it interrupts creative flow and impacts the entire production pipeline. A modern data fabric using SMB will provide objective, tangible productivity benefits and even improve staff morale – how frustrated are you when you’re just sitting there while your computer appears to do nothing?

SMB operating over RDMA (Remote Direct Memory Access) brings hundreds of gigabits per second throughput at sub-millisecond latency to each node, ensuring uncompromised performance and consistency for production teams. When using SMB Direct – or even when still on traditional TCP and Ethernet – SMB multichannel ensures your networks get the full throughput utilization they support. You paid for the network; SMB 3 makes sure you fully utilize its potential. When working with the highest fidelity raw media formats, SMB compression ensures that you can maximize that bandwidth for all to share.

For high-fidelity raw media formats like DPX sequences, OpenEXR files, ProRes masters, and Digital Cinema Packages (DCPs), SMB compression can dramatically reduce transfer times. A 100GB uncompressed video file might shrink to half its size on the wire, freeing up bandwidth for already-compressed formats like H.264 or HEVC that don’t benefit from further compression. This optimization happens in flight, even improving the transfer of container formats like MXF.
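As a rough illustration of why on-the-wire compression matters, here is a back-of-envelope sketch; the 2:1 ratio and 10 Gbps link are illustrative assumptions, not measured SMB figures:

```python
def transfer_seconds(size_gb: float, link_gbps: float, ratio: float = 1.0) -> float:
    """Estimate wire-transfer time for a file, given an assumed compression ratio.

    size_gb:    file size in gigabytes (decimal GB)
    link_gbps:  usable link bandwidth in gigabits per second
    ratio:      2.0 means the payload shrinks to half its size on the wire
    """
    return (size_gb * 8 / ratio) / link_gbps

# 100 GB uncompressed master over a 10 Gbps link:
print(transfer_seconds(100, 10))        # → 80.0 seconds, no compression
print(transfer_seconds(100, 10, 2.0))   # → 40.0 seconds at an assumed 2:1 ratio
```

The same arithmetic also shows why already-compressed H.264 or HEVC files (ratio close to 1.0) gain little from this feature.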

The closed-loop nature of RDMA networking adds an extra layer of security through its inherent air-gapping, protecting valuable content while maintaining peak performance. This enhanced security is particularly crucial in today’s distributed production environments. When I owned the protocol at Microsoft, one of our key focuses was protecting data both during transfer and at rest, while providing robust management tools to handle the increasing complexity of large-scale deployments. RDMA’s architecture helps address these concerns by creating isolated, high-performance data paths that are inherently more secure than traditional networking approaches.

Looking to the future

I believe that RDMA, containers, security, and compression are the big four technologies shaping our industry’s future. RDMA will become the standard for media and entertainment productions, and traditional networks will become the exception.

Containers and microservices will be common, with containers providing storage and remote file sharing inline, all as part of managed fleets spun up quickly and efficiently for production, then discarded when no longer needed.

The days of lax security and trusting air-gapped networks will diminish, requiring more use of encryption, modern authentication, and other protective measures. The sheer size of datasets and raw production media formats will bring compression back to the forefront, both on the wire and on the storage pools, until the next generations of hardware rise to meet them.

Planning for tomorrow

What should media companies consider when planning their infrastructure? Listen to what your creative teams are saying about application performance. They probably don’t understand the storage or network, but they know their tools and where they’re slow, and that tells you when the infrastructure is the bottleneck. Plan for a system with the biggest capacity to store and move data you can forecast, then increase that even more, because it probably isn’t enough, especially when you are successful!

The future of content creation depends not just on creative tools, but on the infrastructure that enables them. As global collaboration becomes the norm and file sizes continue to grow, the ability to move and access data efficiently will become even more crucial. By understanding and investing in the right technologies today, media companies can ensure their technical foundation supports rather than constrains their creative ambitions.

TotalMedia – Empowering intelligent video processing solutions for over 30 years, revolutionizing video broadcasting and OTT services through innovative technology licensing

Founded in 1994 and headquartered in Silicon Valley, California, TotalMedia has over three decades of expertise in image and video core technologies. Leveraging a Software-Defined Architecture (SDAA), TotalMedia provides a flexible and scalable foundation for seamless integration with existing broadcast and OTT infrastructures while adapting to future technological advancements.

With over a decade of investment in artificial intelligence, AI serves as the cornerstone of TotalMedia’s UHD encoder, generative content and quality enhancement features. AI-powered algorithms enhance video resolution, smoothness, and color accuracy, restore aging content, and generate advanced visual effects such as super slow motion and afterimages. TotalMedia empowers industries including broadcasting, cable networks, telecom operators, OTT platforms, content providers, surveillance, and emerging sectors like in-vehicle entertainment systems.

Industry impact and reach:

  • Daily Broadcasting: Powering over 20,000 multi-screen channels daily.
  • Reach: Serving over 400 million households and mobile devices globally.
  • UHD Live Events: Successfully delivered over 400 Ultra High Definition live events.

Comprehensive product suite

Live Encoder

A real-time transcoding and streaming solution supporting multiple platforms, video codecs, and output formats. Ideal for live events, news broadcasts, and real-time content delivery with high quality and low latency. The AI-powered encoder maintains original quality while reducing bandwidth costs.

VOD Encoder

Tailored for media industry professionals, the VOD Encoder excels in Ultra HD video content production and conversion. In the 5G era, where high-resolution video is increasingly prevalent, this solution ensures that content creators can efficiently prepare and deliver stunning visuals to their audiences.

AI Enhancement SDK

TotalMedia leverages artificial intelligence to enhance video compression efficiency and visual quality significantly. The AI Enhancement SDK offers a comprehensive suite of advanced AI-powered features, including:

  • AI-Powered Encoding: Minimizing bandwidth consumption is crucial for OTT services. Content Adaptive Encoding (CAE) and Perceptual Visual Compression (PVC) enhance low-bandwidth streaming, ensuring smooth, uninterrupted playback on any device or location. This approach maximizes cost efficiency while maintaining high-quality viewing experiences.
  • Super Resolution: AI-powered upscaling enhances lower-resolution video to HD or 4K, delivering superior clarity and detail for a sharper viewing experience.
  • Motion-Compensated Frame Smoothing: For frame interpolation and NTSC/PAL cross conversion, motion-compensated frame rate conversion enhances video fluidity by reducing judder and delivering smoother motion.
  • Color Enhancing: Optimizing color vibrancy and accuracy, resulting in richer and more lifelike visuals.
  • SDR to HDR Up-Conversion: Transforming Standard Dynamic Range (SDR) content into High Dynamic Range (HDR), expanding the range of colors and contrast for a more immersive viewing experience.
  • Old Video Restoration: Revitalizing aging video content by removing random scratches, noise, artifacts, and other imperfections, breathing new life into valuable archives.

AI sports solutions: enhancing the fan experience

TotalMedia provides specific solutions tailored for the sports broadcasting and OTT market:

  • Instant Replay: Instant replay lets viewers quickly rewatch key moments in live broadcasts, especially in sports and entertainment. It captures short segments of the live feed for instant playback, often with slow-motion for better detail. This feature enhances the viewing experience by ensuring important moments aren’t missed.
  • Super Slow Motion: Creating exceptionally smooth and detailed slow-motion replays, either live or from recorded files, using advanced AI algorithms.
  • Afterimage Creation: An AI-powered feature that generates dynamic motion trails, emphasizing movement and impact in fast-paced scenes such as sports, action sequences, and dynamic visual effects. This enhances viewer engagement by adding a cinematic, immersive effect to video content.
  • Ball Trajectory Highlight: AI-powered tracking of ball movement with visual overlays and data insights, enhancing analysis and viewer engagement. Supports football, tennis, badminton, and table tennis.
  • Football Highlights: Football highlights showcase the best moments of a match, including goals, shots, corner kicks, penalty kicks, direct free kicks, kickoffs, and red and yellow cards. They provide a quick recap for fans, capturing the excitement in a short, engaging format.
  • Live 16:9 to 9:16 Streaming: Automatically transforms standard widescreen (16:9) live streams into mobile-friendly vertical (9:16) format with advanced face and body detection and tracking, delivering an optimized live viewing experience for mobile users.
  • Auxiliary Live View: In badminton and tennis, micro cameras on the net tape capture close-up shots of key moments like net approaches and smashes. However, despite high-definition recording, these cameras often suffer from quality and color loss. AI-powered live video enhancement restores professional-grade quality with advanced color correction, detail enhancement, and frame interpolation. This ensures sharp, real-time visuals, allowing fans to experience every critical moment with exceptional clarity.
  • Personalized Content Delivery: Deliver tailored content—highlights, behind-the-scenes footage, real-time stats, and data visualizations—based on viewer preferences and history. This enhances engagement and satisfaction, creating a more immersive experience.

Shaping the future of video consumption

TotalMedia’s commitment to innovation, AI-driven video enhancement, and strategic technology licensing cements its position as a key player in the video broadcasting and OTT industries. By equipping media professionals with cutting-edge tools and scalable solutions, TotalMedia is redefining the video experience for audiences worldwide. To learn more, visit www.totalmedia.ai or contact us at info@totalmedia.ai.

 

Telos Alliance – Next Generation Audio: The future of broadcast audio is here, today

Costa Nikols, Executive-Team Strategy Advisor, Media & Entertainment, Telos Alliance

Audiences today expect higher quality media, whether content is being viewed on a television, mobile device, or streaming platform. Listeners now anticipate consistent, high-quality audio that is clear, personally engaging, and available in their preferred language. Advances in media delivery have ensured that audio experiences are more immersive and enjoyable, meeting the expectations of modern viewers. With immersive technologies like Dolby Atmos and MPEG-H, viewers can expect more dynamic home entertainment. Content creators and platforms should deliver audio that meets these expectations, including consistent loudness, clear dialogue, and support for personalized standards. Enhanced audio quality boosts engagement, making content more enjoyable and effective. For broadcasters and streamers, this translates to loyal viewers, valuable audiences for advertisers, and sustainable subscription revenues.

Step forward: Next-Generation Audio (NGA)

Audio quality in live broadcasts and streaming has remained the same for over a decade. Next-Generation Audio (NGA) technology offers the potential to enrich this aspect by providing more sophisticated audio experiences at home. NGA introduces advanced features such as enhanced dialogue, immersive audio, and personalization for greater preference and accessibility. It allows the audience more control, enabling options such as switching commentators, changing language tracks, or fine-tuning audio, thereby delivering a personalized and immersive listening experience.

Supporting NGA’s capabilities are technologies like object-based audio and new industry standards for metadata, such as Serial ADM (S-ADM), which allow for more diversity than traditional channel-based audio. The transition of broadcast infrastructure from SDI to SMPTE ST 2110 IP ultimately enables technologies like NGA to offer flexible, personalized audio experiences. This approach streamlines production in multi-platform, multi-format environments by delivering audio, video, and metadata separately, allowing broadcasters and content distributors to offer more audio profiles beyond traditional channel-based requirements. As customization and user-centric features become more integral, there is rapidly growing interest among innovative sports broadcasters and producers in adopting NGA methodologies to enhance audience immersion and personalization.

Sports leaders are securing early wins with NGA

Several pioneers in the sports broadcast domain have already experimented with new NGA experiences across major live events, including highly watched occasions like the Paris summer games. Last year, France TV, alongside several other European broadcasters, introduced S-ADM in major sports broadcasts, captivating audiences with exclusive, tailored experiences only NGA can offer. At Paris 2024, France TV showcased Dolby Atmos and the latest SMPTE 2110-41 standards, utilizing advanced audio processing tools to enhance accessibility and deliver more personalized features such as multilingual audio and audio descriptions. This implementation of NGA represents an initial exploration into immersive audio technologies, setting the stage for broader application. Collaborations with various sports broadcasting customers are ongoing to trial and deploy new NGA-ready audio workflows that enhance fan experiences. The sports broadcast community prioritizes fan engagement and superior immersive quality, promising continued interest in NGA for future sports events.

It’s time for better than good enough

Adhering to outdated, one-size-fits-all audio workflows is no longer feasible. In a landscape where broadcasters must produce more content across more platforms with fewer resources, the challenges of audio management are expanding exponentially. Just as video workflows have evolved, audio must follow suit. With increasing downstream formatting requirements and global distribution endpoints, producers need more intelligent, automation-rich mixing and processing solutions to achieve the required scale and efficiency.

While audio experiences, whether through broadcast or streaming, have not always kept pace with audience demands for quality, cutting-edge audio standards and modern workflows present opportunities to accelerate innovation in audio. NGA establishes a new benchmark that will redefine audience experiences. From immersive audio to advanced customization, the potential for innovation is vast. With a growing appetite for superior sound and a market ripe for change, the stage is set for an audio revolution that transforms how content engages, excites, and inspires audiences.

Quickplay – Democratizing OTT performance: unlocking concurrency for all

Ashwani Kumar, Associate Director, Software Engineering, and Rajan Chinnadurai, Principal Software Engineer, Quickplay

In the rapidly evolving world of Over-The-Top (OTT) streaming, access to cutting-edge technology should not be limited to industry giants. Democratizing TV technology means empowering every content provider, regardless of size, with the tools and knowledge to deliver exceptional user experiences. A crucial aspect of this democratization is understanding and managing user concurrency – the number of viewers accessing your platform simultaneously. This article breaks down how to derive this vital metric, making advanced performance analysis accessible to all.

Why concurrency matters for everyone

For any OTT platform, from a niche community channel to a global streaming service, understanding user concurrency is fundamental. It allows for accurate performance testing, ensuring your platform can handle peak viewership without buffering or crashes. This knowledge empowers you to optimize your infrastructure and provide a seamless experience, a key differentiator in today’s competitive landscape.

Accessible methodologies for concurrency derivation

While sophisticated Quality of Service (QoS) tools and active session tracking offer the most precise data, they can be costly. However, there are accessible methods for deriving meaningful concurrency estimates, even without extensive resources.

Heartbeat calls: a simple starting point

Heartbeat calls, periodic signals sent by clients to servers, can provide a basic estimate of active users. If you know the frequency of these calls, you can approximate the number of concurrent viewers. For instance, if heartbeats occur every 30 seconds, each call roughly represents an active user during that period. This method is straightforward and requires minimal technical overhead.
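As a minimal sketch of this approach; the 30-second heartbeat interval and five-minute window below are illustrative assumptions, not fixed values:

```python
from datetime import datetime, timedelta

HEARTBEAT_INTERVAL = 30   # seconds between client heartbeats (assumed)

def estimate_concurrency(heartbeat_times, window_start, window_seconds=300):
    """Approximate average concurrent viewers in a window from heartbeat timestamps.

    Each active client sends one heartbeat every HEARTBEAT_INTERVAL seconds,
    so heartbeats seen in the window / expected beats per client ≈ concurrency.
    """
    window_end = window_start + timedelta(seconds=window_seconds)
    in_window = [t for t in heartbeat_times if window_start <= t < window_end]
    return len(in_window) / (window_seconds / HEARTBEAT_INTERVAL)

# Three simulated clients, each heartbeating every 30 s across a 5-minute window:
start = datetime(2024, 1, 1)
beats = [start + timedelta(seconds=30 * i) for _ in range(3) for i in range(10)]
print(estimate_concurrency(beats, start))  # → 3.0
```

In practice the heartbeat log would come from your server access logs; the estimate degrades gracefully if clients join or leave mid-window.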

EMA Analysis: smoothing trends with playback requests

Playback requests, indicating content initiation, offer a more granular view of user behavior. Exponential Moving Average (EMA) analysis smooths these requests, revealing trends and peaks in concurrency. EMA prioritizes recent data, making it adaptable to fluctuating user engagement. This method, while requiring some data analysis, is accessible to those with basic spreadsheet skills.
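A minimal EMA sketch over per-minute playback request counts might look like this; the smoothing factor and sample data are illustrative:

```python
def ema(values, alpha=0.3):
    """Exponential moving average: weights recent samples more heavily."""
    smoothed, prev = [], values[0]
    for v in values:
        prev = alpha * v + (1 - alpha) * prev
        smoothed.append(prev)
    return smoothed

# Playback requests counted per minute; the EMA damps the spike at minutes 5-6
requests_per_min = [120, 130, 125, 128, 400, 390, 140, 135]
print([round(x) for x in ema(requests_per_min)])
```

A higher alpha reacts faster to genuine surges (a match kickoff), while a lower alpha filters out momentary noise; the same arithmetic is trivial to reproduce in a spreadsheet.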

Democratizing the process: tools and techniques for everyone

To further democratize concurrency analysis, it’s essential to utilize accessible tools and techniques. Open-source monitoring tools like Prometheus and Grafana provide real-time insights without hefty licensing fees. These tools, often used in conjunction, allow for the creation of dashboards that visualize user concurrency data, making it easier to identify trends and respond to performance fluctuations.

Load testing frameworks like Apache JMeter and Locust allow even smaller teams to simulate high levels of user concurrency, ensuring their platforms can handle peak loads. Integrating these tools into CI/CD pipelines allows for continuous testing and validation, ensuring consistent performance.
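As a toy illustration of the idea, rather than a real JMeter or Locust scenario, the sketch below spawns simulated playback sessions concurrently and reports the peak observed; the session stub and timings are invented for the example:

```python
import asyncio
import random

async def run_load(concurrent_users: int) -> int:
    """Spawn N simulated playback sessions at once and report the peak number
    active simultaneously, a toy stand-in for a real load-test scenario."""
    active = peak = 0

    async def playback_session():
        nonlocal active, peak
        active += 1                                       # session opens
        peak = max(peak, active)
        await asyncio.sleep(random.uniform(0.01, 0.05))   # pretend streaming time
        active -= 1                                       # session closes

    await asyncio.gather(*(playback_session() for _ in range(concurrent_users)))
    return peak

print("peak simultaneous sessions:", asyncio.run(run_load(200)))
```

A real test would replace the sleep with requests against your playback endpoint; frameworks like Locust add ramp-up schedules, distributed workers, and reporting on top of this same pattern.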

Addressing limitations and ensuring accessibility

It’s crucial to acknowledge the limitations of request-based analysis methods. They primarily capture users actively watching content, neglecting those browsing or searching. Additionally, they may not be suitable for platforms with minimal user activity or stringent privacy requirements.

However, by understanding these limitations and utilizing accessible tools and techniques, every content provider can gain valuable insights into their platform’s performance. The goal is to democratize access to these technologies, empowering everyone to deliver a high-quality streaming experience.

Conclusion: empowering the future of OTT

Democratizing TV technology means giving everyone the tools and knowledge to compete in the digital age. By making concurrency analysis accessible, we empower content providers to optimize their platforms, ensuring a seamless user experience for all. This is not just about technology; it’s about leveling the playing field and fostering innovation in the ever-evolving world of OTT.

 

Projective – Wrangling the wild west of post-production

Derek Barrilleaux, CEO, Projective

For decades, post-production has been a crucial and dynamic component of the media supply chain. From editing and sound design to color grading and visual effects, post-production ensures that the final product aligns with creative visions and meets industry standards, enhancing the overall viewer experience. Yet for many professionals in the industry, it can feel like the “Wild West.” This phrase frequently comes up in conversations with technology buyers, at trade shows, and during countless discussions about workflow challenges. It’s a fitting analogy—a lawless, chaotic environment where workflows are anything but streamlined, resources are scattered, and collaboration suffers.

But it doesn’t have to be this way. Post-production, while complex, can transition from disorder to structure with the adoption of thoughtful creative project frameworks.

Why post-production feels chaotic

When the discussion on post-production workflows veers towards the “Wild West” notion, it typically stems from the fragmented and unpredictable workflows that plague post environments. Editors and creators often operate under tight deadlines and high expectations, but the tools they rely on can exacerbate the challenges. Leading software solutions like Adobe Premiere Pro and Avid Media Composer allow creators the flexibility to work with media stored across various locations—local drives, cloud servers, and external hard drives. While this flexibility is empowering, especially for individual users, it often becomes a source of chaos on collaborative projects.

Think about it—how often do post-production professionals face these issues?

  • “Why is this file offline?”
  • “Wasn’t that footage saved in the cloud?”
  • “Who moved this asset, and why can’t I find it?”

These are daily struggles in post-production, often amplified by a lack of standardization and frameworks. Media can be misplaced, naming conventions are inconsistent, and critical assets may get lost in the shuffle. Add to this the pressures of faster turnaround times, tighter budgets, and an increasing demand for remote work, and the result is a perfect storm of inefficiency.

The cost of a fragmented workflow

The consequences of disorder in post-production workflows aren’t trivial. Disorganization leads to wasted hours tracking down files or redoing work that could have been avoided. Collaboration becomes strained, with team members frustrated by unclear processes and missing assets. Deadlines are missed, creativity is stifled, and morale within the team takes a hit.

The bottom line? Chaos in the post-production workflow is costly—not only in terms of time and resources but also in the emotional toll it takes on teams striving to deliver their best work.

Introducing creative project frameworks

If the “Wild West” is the problem, a structured collaboration framework is the solution. At its core, a creative project framework provides a roadmap for standardizing and streamlining post-production workflows. These frameworks are not meant to stifle creativity; rather, they create the order and predictability needed to help creative teams focus on what they do best—producing exceptional content.

A robust creative project framework rests on three essential pillars of organization and efficiency. Together, these pillars provide the foundation to streamline every stage of the post-production lifecycle—from media ingest to archiving.

1. Standardized processes and project setup

One of the first steps to taming post-production chaos is standardizing how projects are set up and managed. This process includes defining naming conventions, file storage locations, and folder structures. By establishing clear and consistent guidelines, teams can quickly locate needed assets, reducing time spent searching for files.

Automated project setup tools can further enhance this process. They provide templates for common project types, ensuring that every new project begins with a solid foundation. This minimizes room for error and ensures consistency from project to project.

2. Centralized media management

Centralizing media in a controlled environment eliminates much of the guesswork associated with locating files. When assets are stored in one central repository—whether it’s an on-premises NAS or a cloud-based solution—team members can access them easily and confidently.

This centralization reduces the risk of duplicate files, saves space, and ensures that everyone is working with the most up-to-date versions of assets. It’s a fundamental step toward improving efficiency across the board.

3. Enhanced collaboration tools

Collaboration tools are the backbone of modern workflows, enabling seamless communication and media sharing across teams, regardless of where they’re based. Utilizing solutions that integrate with industry-standard tools, like Adobe Premiere Pro or Avid Media Composer, allows teams to collaborate within familiar interfaces.

For example, systems that support shared workspaces and real-time updates keep everyone on the same page. Teams no longer have to rely on emails and Slack messages to coordinate changes manually. Automated notifications and version control ensure that no one is left out of the loop.

The benefits of structure

When a creative project framework is effectively implemented, the results are game-changing for post-production teams. They experience fewer workflow bottlenecks, allowing them to focus their energy on creative work rather than tedious management tasks. Deadlines are met with less stress, budgets are respected, and overall productivity increases.

More importantly, aligning teams under a structured creative project framework fosters an environment of collaboration and unity. Clear processes reduce friction, encouraging open communication and cooperative problem-solving.

Post-production transformation

The post-production industry continues to evolve rapidly, driven by advancements in technology and shifting market demands. These changes bring both challenges and opportunities for media professionals. Now more than ever, implementing robust collaboration frameworks is critical for staying competitive.

Frameworks not only mitigate the issues tied to the “Wild West” mentality but also future-proof workflows, accommodating emerging trends such as remote work and faster content delivery cycles.

At Projective, we believe that fostering structure and collaboration lays the foundation for creativity to flourish. By taming the chaos, we empower teams to produce their best work, exceed client expectations, and stay ahead in this fast-paced industry.

Are you ready to bring structure and creativity together in harmony? Integrating collaboration frameworks into your post-production workflow could be your first step toward achieving that goal.

Pixel Power – Automation and Content – how can they coexist?


Stuart Russell, Head of Marketing, Pixel Power

One of my favorite things to say about our media production world is that I’ve never worked in an industry that is so obsessed with the next generation of technology but is so slow to actually adopt it. Perhaps this conservative tendency is a side-effect of working with content that we see as culturally significant or valuable, or maybe it’s just the recurring 3am nightmare of the screen going black during primetime. But when industry commentators keep lauding the next new shiny thing as a game-changer, customers watch on with their hands on their hips waiting for one of their number to go first, make all the mistakes, and iron out all of the wrinkles.

I don’t say this as a criticism, and technology has clearly made a huge difference to how video content is produced and distributed over the last 50 years, but cooler and more pragmatic heads know that things never move quite as quickly as everyone says they will.

Let’s be honest – we’ve been talking about IP for over a decade, and I still remember an IABM annual conference in 2015 when the majority of hands in the room were raised in agreement to the question ‘will the majority of workflows be IP by 2020?’. Here we are in 2025 and SDI still lives on. IP is certainly more prevalent but not ubiquitous, and we’ve seen a number of hot topics and technology platforms float past us and disappear down the river. 3D-TV anyone?

I’m being deliberately provocative, of course, and we have undeniably seen the development and introduction of some really interesting tools that have helped us boost creativity, improve efficiency, reduce the need for dull and repetitive tasks and improve our ability to share and collaborate remotely.

The Covid pandemic may be painful to think back on, but it is hard to deny the mark it has left on our industry when it comes to working practices and its role as a catalyst in accelerating new product development. Those two years were probably the equivalent of five regular years in our industry, as vendors and customers alike sought to develop new tools and methods that would help us respond to forced remote working patterns.

When I consider the last decade, I think that this issue of remote collaboration, sharing and production is one of the more significant themes. Cloud storage costs may remain stubbornly high – one of the obvious barriers to the adoption of full cloud-based production – and the environmental footprint of working in the cloud cannot be ignored – but the ability to move content around remote teams for comment, editing and approvals has been an important shift.

Pre-pandemic, we saw the introduction of innovations such as newsroom automation platforms and robotic studio camera systems. While these solutions were shrewdly marketed by vendors as tools to improve quality, consistency and accuracy – reducing human error and providing more predictable results – the side-effect was almost certainly some redeployment of human resource.

All of this brings us to AI – the latest hot potato to be juggled. Having previously worked for a company that is very active in the AI space, I can see how AI solutions can be used to automate some of the less glamorous heavy lifting work that needs to be done. As the amount of content being produced every year continues to grow, it makes sense to deploy tools that can help producers and broadcasters stay on top of the production and delivery landscape. As examples, we’ve seen the development of software platforms that can transcribe speech to text and then create broadcast quality subtitles, and clever solutions that can ingest a sporting event and create custom length highlight reels. While solutions like these are not perfect – I often describe AI as a toddler that should never be left unattended for any length of time – they are improving with every iteration.

Automating these repetitive tasks can help drive efficiency savings for broadcasters, and while ‘efficiency’ isn’t a particularly exciting theme, it’s only going to become more important as the amount of content keeps growing. With many online platforms now managing a mix of both short-form and long-form content, bringing order to complexity is the name of the game and AI can clearly assist here.

When it comes to the creative side of the business, the idealist in me wants to say that the future looks bright and relatively AI-free. While we’ve seen the emergence of some tools like Midjourney for image generation and Adobe’s latest version of Photoshop which can help generate small additional segments of video, I think we’re a long way off a full movie or TV show being AI-generated. I’m also not sure that there is (currently) much audience appetite for this. After all, AI is just the statistical analysis of historical data, and that doesn’t always lend itself to creativity. In this sense, AI may be smoldering in the corner of the room, but we’ve yet to see any flames. I’d like to believe that the content explosion coupled with greater efficiencies in production and delivery techniques will lead to an uplift in creative work, but of course time will tell.

Automation (in some form or another) has been a fact of industry life forever, and it has changed the composition of workforces and the way organizations are structured. While many have railed against the perceived deskilling of the industry, I would suggest that the skills base has simply shifted. I hate the idea of media production ever becoming a fringe offshoot of the IT world, but we do have to admit that the way we move content around, share it and manage it has irreversibly moved into this IT/networking domain. While wholesale adoption of new technology platforms rarely happens quickly, there is definitely a need for agility within our customer base – some traditional roles will continue to diminish in importance (and number) but new roles will emerge and will require a different set of skills.

 

Pebble – 25 years of playout, then and now


Peter Mayhead, CEO, Pebble

25 years ago, many were simply relieved that their businesses had “survived” the millennium bug. Audiences were glued to George Clooney in ER, and Breathe by Faith Hill was the Billboard bestselling single of the year. TV Technology magazine published an article headed What’s Digital Television?

2000 was also the year that three colleagues set up a new business, Pebble. They had worked for Louth, the then market leader in playout automation (acquired by Harris, now Imagine), and they wanted to bring their extensive experience to bear on a new generation of systems.

It is worth remembering that there were technical challenges back then that have largely gone away. The biggest, of course, was that a lot of playout was still from tape, in large robotic libraries from Sony and Panasonic, Ampex and Odetics. The automation system had to look ahead to ensure the right tapes were loaded into VTRs, then cue them in advance for pre-roll.

Tektronix introduced the Profile video server in 1996, but it took a long while for large-scale disk storage to become practical and affordable.

The other big issue was that each device talked to the automation over individual serial links. Engineers of the time will recall soldering the fiddly D-connectors. System architects will recall the necessity for dedicated real time machine control hardware. PCs at the time – Pentium processor, memory measured in megabytes, integral disk maybe 20 gigabytes – certainly did not have the horsepower to synchronize multiple devices.

Pebble’s first products were designed to be ready for the future, a view which chimed well with the industry. Sales success quickly followed, with major broadcasters like ZDF in Germany.

This was a time for big investments in broadcasting. The first HD channel in Europe launched in 2004, and every broadcaster recognized that this was a transition they had to make.

Digital television created the capacity for many more channels, and everyone was keen to exploit the new revenue opportunities. Traditional broadcasters were forced to shift from offering one or two channels, perhaps with the odd regional opt-out, to becoming multi-channel delivery machines.

This, empowered by the rapidly growing capabilities of playout automation systems, created a completely new business model: the dedicated playout center. Probably started by Red Bee Media and rapidly followed by others like Ascent Media and Encompass, the playout center brought economies of scale by handling all the delivery requirements for multiple clients. Today, virtually every broadcaster across Europe outsources playout.

These new businesses valued agility, because it meant that they could quickly add outputs, and broadcasters in turn recognized that they could be much more flexible in what they offered. This was particularly important as competition grew ever fiercer.

This competition was not just from traditional rivals: in 2005 some former PayPal engineers developed a video sharing website, which they called YouTube. Couple that with Steve Jobs in 2007 waving the first-ever iPhone in front of an adoring audience, and the way in which video is consumed changed forever.

Pebble, meanwhile, was quietly successful. By 2013 revenues were around £6 million, and the founders looked to an exit as the next logical stage. This was when I joined the company.

In 2015 it was acquired by Vislink, at that time a leader in specialist hardware like wireless camera links. Vislink had a corporate vision to move towards software, recognizing that there were long-term concerns around industry-specific hardware.

This was a time of strong growth for Pebble: the business doubled in size from 2015 to 2017, partially on the back of mainstream HD transformation and thoughts of Ultra HD ahead. But it masked serious business issues elsewhere in Vislink, and in 2018 Pebble found itself the only trading entity of a publicly-listed company.

Taking over as CEO, I knew that we had to refocus Pebble’s business to escape from the financial instability that had been forced upon us.

The idea of an integrated playout device based on a workstation PC was well established. It was a key technology in giving large playout businesses the scale and agility they need. But by 2018 there was quite a price war going on in this part of the market, and I knew that price wars are impossible to win.

So we took the decision to focus on tier 1 customers, the major broadcasters and playout centers which had always been our core market. Of course we have an integrated channel product, but we see it as part of a larger architecture.

Then 2020 came along, and covid changed everything. We were forced to work remotely, and we have never gone back. There is no Pebble office or development center: we save the time, stress and carbon footprint of commuting and contribute from wherever we are. And it is a great insight to support our customers who also want remote capabilities for their businesses.

Looking forward, it is hard to see anything but uncertainty for legacy broadcasters in the face of competition from Netflix, Prime and the others. On the other hand, I see governments becoming quite nervous at the thought of losing “the national broadcaster”. If there is a crisis – and the world is in a pretty bad place at the moment – how do you address the nation?

And, though it may seem a strange thing for a company like Pebble to say, technology is no longer the primary issue. Modern computing power can do anything you want it to.

For broadcasters, the one question is how do they make money? How do they remain financially viable? How can they manage their operations and services, to an audience that now expects interactivity, multiple screens and “background television”, in an affordable way?

Broadcasters today, as they always have, believe that they are unique and their challenges are special, despite every single one of them doing pretty much the same thing: delivering a seamless stream of programmes and commercials. The streaming giants, as they move into sports and event programming, are discovering the exponential complexity curve: live television is really hard.

The future for the media industry is still shadowy. But it will continue to be exciting and stimulating, and I strongly believe that Pebble will be around for the next 25 years, at least.


nxtedition – Telling the right story by mastering multiplatform content creation in today’s media landscape


Adam Leah, nxtedition

Great storytelling isn’t just about what we say; it’s about how, where, and when we say it. In an era where content is consumed across a wide array of devices and platforms, delivering the right story to the right person at the right time is more complex than ever, and probably one of our industry’s biggest challenges. Content creators must adapt their storytelling to suit each platform and audience, while keeping the core message consistent.

Looking back, broadcast was relatively straightforward; one story, one way. But now in 2025, the way we create and consume content has changed dramatically. The platforms that once defined our workflows now look quite different. Traditional broadcast and newer digital platforms like TikTok, YouTube, and Instagram all serve distinct audiences, each with their own expectations. How do we ensure that content can adapt seamlessly to each of these? How can creators maintain the heart of a story while ensuring it works across such diverse channels?

This is where modular journalism can play a part. The shift toward breaking stories into structured, reusable components allows content creators to reassemble these pieces for different formats and audiences. A single news package transforms into multiple assets: a short-form video for TikTok, a longer explainer for YouTube, a deeper analysis for the website, and the more traditional broadcast package. The core story remains intact, but its presentation adapts to the platform and the audience it serves.
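The reassembly idea above can be sketched in a few lines. This is a hypothetical illustration, not any newsroom system’s actual data model: the `Component` structure and the per-platform “recipes” are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Component:
    kind: str        # e.g. "headline", "clip", "analysis", "package"
    duration: int    # seconds (0 for text-only components)
    text: str

# Illustrative recipes: which component kinds each platform pulls, in order.
RECIPES = {
    "tiktok":    ["headline", "clip"],
    "youtube":   ["headline", "clip", "analysis"],
    "broadcast": ["package"],
}

def assemble(story: list[Component], platform: str) -> list[Component]:
    """Reassemble one story's components into a platform-specific cut."""
    return [c for kind in RECIPES[platform] for c in story if c.kind == kind]
```

The same pool of components yields a 30-second social cut, a longer explainer, or the traditional package, which is the essence of producing once and publishing everywhere.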

While this shift in how we tell stories is essential, technology alone can’t drive it. The real change happens when technology is paired with creativity and a shift in culture. The newsrooms getting it right today are those that have embraced the idea that technology enables creativity, but it’s the people who ultimately drive the transformation. Real change comes when journalists and creatives rethink workflows, cultivate platform expertise, and collaborate across previously siloed teams. It’s about more than just adopting new tools. It’s about adapting mindsets and approaches.

This is where agile production models come into play. The rigid, sequential production systems that once dominated the industry can’t meet the speed and flexibility modern storytelling demands. To keep up, organizations are adopting workflows that support simultaneous content creation across multiple platforms. These agile setups not only allow for faster iteration but also enable collaboration across teams, regardless of location. Implementing AI within these agile platforms then adds a great deal of operational utility. AI doesn’t replace creatives, but it frees them from routine tasks like transcription, translation, logging and tagging, so they can focus on telling the stories that matter, while AI indexing makes the matching content easy to find.

Yet, with these new possibilities come new challenges. As stories get reshaped for different platforms, the pressure to maintain journalistic integrity increases. Each version of the story, whether it’s a 30-second TikTok clip or a deep-dive YouTube video, needs to adhere to the same standards of accuracy and ethics. This is where creativity plays a vital role. While new formats and tools may be involved, the heart of good journalism remains unchanged. The best content creators will be those who combine platform expertise with strong editorial values, maintaining integrity while maximizing engagement.

Looking ahead, the complete virtualization of production will further transform how we work. With microservice technologies, content creators will be able to work from anywhere, with powerful production tools at their fingertips. But the real game-changer will be how content is delivered. Through object-based broadcasting, content can be personalized to the individual viewer, ensuring they receive exactly the version of the story they need, when they need it. We’re moving beyond the idea of reaching the right audience at the right time to reaching each person with the story most relevant to them.

In an age where content is abundant, but attention is scarce, success will belong to those who can match the right stories with the right audiences. This requires more than just adopting new technologies. It demands rethinking how stories are conceived, created, and distributed. The creative potential of modular journalism, combined with agile production and utility AI, ensures that stories stay relevant, engaging, and impactful, no matter the platform.

Only if we stop using these new tools in the same way we used the old ones will we achieve the full potential of these new technologies, embracing change while staying true to the craft of storytelling.

Net Insight – Creating without limits: smarter workflows for live media


Jonathan Smith, Solution Area Expert, Net Insight

In the world of live content production, creativity has always been king. Whether it’s the emotion of a last-minute goal, the atmosphere of a sold-out concert, or the breaking tension in a live news broadcast, the magic of live lies in its immediacy. But behind every seamless live broadcast is a complex network of workflows, technologies, and teams working together to make it all happen.

Today, those teams are empowered by flexible, scalable infrastructures that allow them to experiment, innovate, and deliver engaging content to audiences worldwide, all while balancing cost-efficiency and quality. This evolution is particularly evident in live content production, where creativity isn’t limited to storytelling alone but is deeply embedded in how content is delivered. From global sporting events to breaking news coverage and immersive virtual concerts, the ways audiences consume live content have diversified, prompting broadcasters to rethink how they approach distribution and workflow management.

The creative power of flexible workflows

Creativity thrives when barriers are removed, and in the world of live content, those barriers often come in the form of rigid, monolithic workflows. Historically, the production and distribution of live events, Tier 1 events in particular, have relied on broadcast-grade hardware with strict, linear processes. While this approach ensured reliability and quality, it often limited the flexibility needed to experiment with new formats or cater to emerging platforms.

Today, the landscape is shifting. Broadcasters are embracing hybrid distribution models that blend the best of both worlds — leveraging the reliability of traditional hardware-based workflows for core live feeds while integrating scalable, cloud-based solutions for supplementary content. This hybrid approach allows media companies to maintain the high-quality standards expected from premium broadcasts while enabling the agility needed to adapt content for social media, regional broadcasts, and niche audiences.

By decoupling certain aspects of live production from the physical infrastructure, broadcasters gain the freedom to experiment without the fear of jeopardizing the integrity of the main event feed. Whether it’s adding real-time highlights for social media, offering alternative camera angles, or creating immersive second-screen experiences, flexible workflows empower creative teams to explore new storytelling avenues.

Efficiency as a creative enabler

While creativity drives engagement, efficiency ensures sustainability. In an era where rights fees for live events continue to soar, and audience attention is fragmented across countless platforms, broadcasters are under immense pressure to maximize return on investment. Efficiency, therefore, isn’t just a cost-saving measure — it’s a critical enabler of creativity. Streamlined workflows reduce the time and resources spent on manual processes, allowing creative teams to focus on content rather than logistics. For example, in live sports broadcasting, efficient routing of multiple feeds, real-time format conversions, and flexible bandwidth management are essential to ensuring that content reaches the right audience in the right format without delays or additional overhead.

With flexible infrastructure, media companies can make smarter decisions about where and how to distribute content. Premium feeds can be delivered through highly reliable, low-latency pathways, while lower-tier or supplementary content can utilize cloud-based solutions that scale based on demand. This targeted approach not only optimizes costs but also allows broadcasters to tailor content experiences to specific audiences, enhancing engagement and monetization potential.
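The tiered-delivery decision described above can be sketched very simply. This is an assumed model for illustration only: the path names, latency figures, and the notion of a numeric feed tier are invented for the example, not drawn from any real product.

```python
# Hypothetical delivery paths, mirroring the premium-vs-supplementary split:
# a managed low-latency network for main feeds, elastic cloud for the rest.
PATHS = {
    "premium":       {"transport": "managed_network", "latency_ms": 50},
    "supplementary": {"transport": "cloud",           "latency_ms": 500},
}

def route(feed: dict) -> dict:
    """Pick a delivery path for a feed based on its (assumed) tier number."""
    tier = "premium" if feed.get("tier") == 1 else "supplementary"
    return {"feed": feed["name"], **PATHS[tier]}
```

Even at this toy scale, the design choice is visible: the routing policy lives in one place, so adding a new audience-specific output is a configuration change rather than a re-engineering exercise.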

Automation’s subtle hand

In discussions around efficiency and scalability, automation inevitably enters the conversation. But while automation is often seen through the lens of replacing human intervention, its real value in live content production lies in how it complements and enhances human creativity.

In today’s live production environments, automation handles the heavy lifting — managing tasks like dynamic feed routing, bandwidth allocation, and even real-time quality monitoring. These behind-the-scenes processes ensure smooth, uninterrupted delivery, freeing up human operators to focus on creative decision-making.

Despite its growing role, automation remains a tool, not a replacement for human insight. In live production, where timing, emotion, and storytelling are everything, the human touch remains irreplaceable. Automation enhances this process by removing technical bottlenecks and providing creative teams with the agility to experiment and iterate in real time.

Embracing creative risk without technical roadblocks

Perhaps one of the most exciting shifts in live content production today is the newfound freedom for broadcasters to take creative risks without worrying about technical limitations. Flexible infrastructures, efficient workflows, and subtle automation create a safety net, allowing teams to push boundaries and explore new formats.

As live content production continues to evolve, the lines between creativity, technology, and audience engagement will blur even further. The next generation of workflows will not only support traditional live broadcasts but will also enable entirely new forms of storytelling. For broadcasters and media companies, the focus will increasingly be on how to harness these technologies to better engage wider audiences and maximize the value of every piece of content. And while the tools and platforms will continue to evolve, the core challenge remains the same: balancing creative freedom with technical reliability.