Telos Alliance – Next Generation Audio and its benefits to viewers

Larry Schindel, Senior Product Manager, Telos Alliance

Next Generation Audio (NGA) brings some long-overdue enhancements to broadcast and streaming audio. Immersive audio, personalization, and dialog enhancement are features that provide the most noticeable and substantive changes from the viewer’s standpoint. Under the hood, object-based audio and new emerging industry standards for metadata in the form of Serial ADM (S-ADM) are examples of the technologies that enable some of these new features for the viewer.

Immersive Audio

Immersive audio – which viewers commonly know as Dolby® Atmos® or MPEG-H 3D Audio – brings a far more cinematic experience to the viewer and can make them feel like they have seats right on the 50-yard line. In contrast to mixes designed for a cinematic setting, immersive audio for broadcast is produced in a much simpler manner: in television, we mix to a specific channel configuration, such as 5.1+4, which adds four overhead speakers to a 5.1 bed.

One of the challenges in producing immersive audio programming is that not all of the content you want to use is in the same format. Using live sports as an example, a highlights clip from the last time the teams played each other, an interstitial element, or a commercial might be in 5.1 or even stereo. Airing this content directly as it was recorded causes sudden jumps in the sound field and is disconcerting for the viewer. Using a tool like the Linear Acoustic® UPMAX ISC to upmix audio to the desired immersive format allows a consistent soundfield regardless of the input source. Upmixing can be used as a quick and easy way to launch a surround or immersive service while the rest of the infrastructure is built up to produce and distribute content in this format natively. It can also be used to creatively enhance a mix; for example, inserting an upmixer on a subgroup of audience mics can make the audience feel larger and less correlated.

Personalized audio

Object-based audio (OBA), where each audio element and sound is its own standalone audio “object,” enables features like personalization and receiver mixing. Today, broadcasters deliver a complete mix containing the music, effects, and dialog. When multiple versions are needed to provide different languages or audio description (AD) services, each version is still delivered as a complete mix. This is not the most efficient use of the limited bandwidth available for transmission to the viewer, and it is the reason such content is usually delivered in stereo or even mono.

OBA and the personalization enabled by NGA really shine here, even compared to the immersive audio experience. By using one of the latest audio emission codecs, like Dolby AC-4 or MPEG-H from Fraunhofer IIS, it is possible to push the final mixing of dialog with the M&E bed into the viewer’s environment, within combinations, constraints, and limits over which the broadcaster has complete control. The M&E bed and the individual dialog elements – such as different languages, team announcers, and descriptive services – are sent individually at a much lower bitrate than would be required to deliver multiple complete mixes.
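To make the bandwidth argument concrete, here is a rough back-of-the-envelope comparison in Python. The bitrates are assumed round numbers chosen purely for illustration, not figures for any particular codec or service.

```python
# Rough, illustrative comparison of delivering three language versions as
# complete mixes versus as one shared M&E bed plus per-language dialog objects.
# Bitrates are assumed round numbers for illustration, not codec specifications.

COMPLETE_51_MIX_KBPS = 384      # assumed bitrate for one complete 5.1 mix
ME_BED_51_KBPS = 320            # assumed bitrate for a shared 5.1 M&E bed
DIALOG_OBJECT_KBPS = 64         # assumed bitrate for one mono dialog object

languages = ["English", "Spanish", "French"]

complete_mixes_total = COMPLETE_51_MIX_KBPS * len(languages)
object_based_total = ME_BED_51_KBPS + DIALOG_OBJECT_KBPS * len(languages)

print(f"Three complete 5.1 mixes: {complete_mixes_total} kbps")
print(f"One 5.1 bed + three dialog objects: {object_based_total} kbps")
```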

This not only allows for greater efficiency for the broadcaster but also ensures that every viewer can enjoy the full immersive or surround experience. Visually impaired consumers benefit in particular, as today, they are denied the immersive experience and are forced to listen in mono.

With the improved efficiency of NGA emission codecs, it is possible to fit immersive and personalized services into the same data footprint needed for 5.1-channel audio today.

Meanwhile, advances in supporting technology, such as Serial ADM (S-ADM), provide a standardized, vendor-agnostic approach to the metadata required to support NGA in the production, distribution, and playout stages of the broadcast chain. S-ADM can identify aspects of the various audio elements used to make up the complete audio program. At its most basic level, S-ADM can signal which channels comprise the bed mix, which channels carry dialog, and the type of dialog (language, AD, team announcer, etc.). Since S-ADM is synchronous with the audio itself, similar to the older proprietary Dolby metadata carried in a SMPTE RP 2020 stream, it can be carried as an audio stream or in a SMPTE ST 2110-41 stream, making it compatible with both baseband and ST 2110-based workflows.
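As a rough mental model of the labeling S-ADM carries, the sketch below uses a plain Python structure. The field names are invented for clarity and are not the actual ADM/S-ADM XML elements defined in ITU-R BS.2076 and BS.2125, which are far richer.

```python
# Illustrative simplification of the kind of labeling S-ADM carries alongside
# the audio. Field names here are invented for clarity; the real S-ADM payload
# is an XML model defined in ITU-R BS.2125 / BS.2076.

program = {
    "programme": "Live Match",
    "elements": [
        {"role": "bed", "channels": ["L", "R", "C", "LFE", "Ls", "Rs"],
         "content": "music_and_effects"},
        {"role": "dialog", "channels": ["mono"], "language": "en",
         "dialog_type": "home_team_announcer"},
        {"role": "dialog", "channels": ["mono"], "language": "es",
         "dialog_type": "commentary"},
        {"role": "dialog", "channels": ["mono"], "language": "en",
         "dialog_type": "audio_description"},
    ],
}

# A downstream encoder or renderer can read these labels to decide which
# elements a viewer may select and how they combine with the bed.
selectable = [e for e in program["elements"] if e["role"] == "dialog"]
print([f'{e["language"]}:{e["dialog_type"]}' for e in selectable])
```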

Telos Alliance is at the forefront of these next-generation audio workflows with products like the Linear Acoustic UPMAX ISC immersive audio upmixer, the Linear Acoustic LA-5291 and LA-5300 broadcast audio processors with integrated Dolby Atmos encoders, the Linear Acoustic AMS MPEG-H Authoring and Monitoring system, and the Jünger Audio™ flexAI platform which supports both MPEG-H and S-ADM workflows.

 

Tedial – Navigating the complexities and unpredictability of media operations

Emilio L. Zapata, founder, Tedial

The landscape of the media industry has undergone a remarkable transformation, fueled by the rapid evolution of software technology and the proliferation of omnichannel streaming platforms. The swift advancements in digital cloud technology are placing considerable strain on applications and solutions vendors within the Media and Entertainment (M&E) market.

There is evident unease in admitting that media operations are inherently intricate and unpredictable; anyone claiming simplicity or ease has likely not experienced the challenges of real-world deployments. This situation is not novel: a similar shift occurred over a decade ago in more established IT markets, such as health, finance, and retail, where vendors justified shortcomings and high budgets by emphasizing the inherent complexities of their supply chains.

To expedite digital transformation in the M&E market, software technology vendors must embrace best practices and tools developed in more mature IT markets. This entails adopting no-code solutions that ensure interoperability, scalability, resilience, and security.

What is No-Code?

 As the M&E market embraces a more expansive array of software, IP, cloud, and cross-platform technologies, the onus falls on technology vendors to furnish capabilities that facilitate the creation of impactful software. This, in turn, propels tangible business outcomes and catalyzes foundational cultural shifts for M&E companies.

No-code represents a revolutionary software development approach that doesn’t demand prior knowledge of traditional programming languages to craft applications. This is facilitated by no-code development platforms, encompassing visual modeling of business logic, allowing developers to seamlessly drag and drop preconfigured functional blocks to construct sophisticated applications. No-code platforms leverage technology to augment individuals’ innovation capabilities, boasting numerous advantages, with three standing out prominently:

  • Shortened Time to Market: The visual drag-and-drop functionality of no-code platforms expedites rapid prototyping, enabling products to reach the market far quicker than traditional programming methods.
  • Enhanced Solutions through Citizen Development: Leveraging the unique insights of business professionals who understand processes and customers intimately enriches the software development process. Collaboration between business and IT teams proves invaluable, harnessing diverse skills and experiences.
  • Reduced Development Costs: Graphical design of applications using preconfigured functional blocks facilitates component reuse, accelerating service design and production.

The critical question arises: how can users and the software team collaborate effectively in the initial stages of an automation project using software-defined workflows? The optimal approach involves starting with the design of simple processes (prototypes), progressively implementing them, and iteratively refining based on feedback. Prototypes serve as a validation mechanism for business ideas, allowing for rapid expression of concepts. This iterative process ensures that the development team quickly grasps needs and translates ideas into the application without months of comprehension time.

Market analysts stress the importance of no-code technology to accelerate digital transformation in the M&E market, compared to the old method of using scripts in the development of solutions.

MovieLabs’ levels of interoperability

 MovieLabs(*) has recently released an insightful paper titled “Interoperability in Media Creation: Enabling Flexibility and Efficiency through Interoperable and Composable Software-Defined Workflows.” This document outlines a set of interoperability principles designed to serve as guidelines for the industry in implementing solutions and benchmarks for measuring progress.

Interoperability, broadly defined as “the ability of a system to work with or use the parts or equipment of another system,” takes on a crucial role in the context of media creation workflows. It refers to the tasks and processes’ capacity to be recomposed, even when integrated into software-defined workflows. The evolution from lower to higher levels of interoperability transforms the creation and recomposition of a Software-Defined Workflow from a process that:

  • requires the writing or rewriting of software, to
  • low-code solutions, such as modifying translation plug-ins or mapping tables using custom scripts, and eventually to
  • no-code solutions, where a simple drag and drop of the desired components builds and deploys a new workflow.

While custom scripts played a pivotal role in automating workflows, providing efficient control over media assets, the challenges of maintaining and updating these scripts due to their complexity and technical expertise requirements were significant.

No-code platforms represent a modern approach to tackling the complexity and unpredictability of media operations. Equipped with pre-built modules and templates, such as packaged business capabilities and smart packs, these platforms can be quickly configured to align with a company’s specific needs, leading to faster deployment times.

The transition from custom scripts to no-code platform solutions reflects the natural evolution of software technology to address the evolving needs of the media industry. Acknowledging that this shift can be both painful and costly for some vendors, the paper establishes that the highest level of interoperability is achieved with no-code solutions. Ideally, software-defined workflows become fully recomposable, allowing for a plug-and-play scenario when replacing one component with another, providing the same workflow function.

Various architectural patterns can be chosen for integrating components into SDWs. Point-to-point integrations, particularly when using component-specific mechanisms or patterns, tend to be less reusable. A common integration platform as a service (iPaaS) offers the promise of a one-time integration, especially for aspects covered by the platform. Communication through a common platform based on a unified data model facilitates easier recomposition than integration relying on custom point-to-point connectors.
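A toy calculation illustrates why the common-platform pattern recomposes more easily: with point-to-point integration the number of custom connectors grows quadratically with the number of components, while a hub based on a unified data model needs only one adapter per component.

```python
# Toy illustration of connector counts only, nothing vendor-specific:
# point-to-point integration versus a common platform / unified data model.

def point_to_point_connectors(n_components: int) -> int:
    # every pair of components needs its own custom connector
    return n_components * (n_components - 1) // 2

def hub_connectors(n_components: int) -> int:
    # each component needs one adapter to the common data model
    return n_components

for n in (5, 10, 20):
    print(f"{n} components: {point_to_point_connectors(n)} point-to-point "
          f"connectors vs {hub_connectors(n)} hub adapters")
```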

We recommend exploring the paper, as its principles align seamlessly with the annual analyses conducted by the prestigious technology consulting firm Gartner on iPaaS integration platforms.

Gartner’s iPaaS criteria

Gartner(**) defines integration platform as a service (iPaaS) as a vendor-managed cloud service that enables end users to implement integrations between a variety of applications, services and data sources. iPaaS emerged as the fastest-growing segment of the integration software technologies market in 2021, as many customers adopted iPaaS as an alternative to traditional integration platform software.

The iPaaS market started as a collection of unique products that were differentiated from their integration platform software counterparts due to ease of use, lower cost of entry and ease of access. Over time, iPaaS has evolved into an operating model that delivers all possible types of integration technologies. Moreover, for the most common integration scenarios, iPaaS vendors are providing packaged integration processes targeted to business users. Let’s look at some relevant characteristics of iPaaS:

  • iPaaS can implement the following use cases for integration technology:
    • Data consistency, to ensure applications are operating with the right information.
    • Multistep processes, to automate business processes and workflows.
    • Composite services, to create new services composed from existing applications, services and data sources.
  • It enables end users to implement integrations directly, without mandating the use of vendor- or partner-provided professional services.
  • It offers a low-code/no-code user experience (UX) for building workflows, user interfaces and forms.
  • It can be deployed in a hybrid mode, including multicloud options across the iPaaS public clouds and IaaS public clouds, and within the customer’s data centers.
  • It provides secure connectivity for on-premises applications and data sources via some form of secure agent, without having to open inbound firewall rules.
  • It can be evaluated against the organization’s security and regulatory compliance needs.
  • It can be evaluated against long-term cost expectations and the available budget.
smartWork: no-code iPaaS for Media

iPaaS ensures interoperability and data consistency between applications and data sources, orchestrates multistep processes internally and for external business partners, and creates composite services.

Conclusions

In an era where there’s an abundance of tools across every aspect of the media supply chain, and an increasing reliance on multiple software applications and vendors, maintaining seamless connectivity amid software updates and new tools poses a daunting challenge. The paramount importance of interoperability between applications and systems cannot be overstated.

Interoperability between components of a media supply chain enhances their reusability and simplifies the reconfiguration of SDWs, while minimizing the amount of custom translation and reintegration work, reducing both cost and risk.

No-code integration platforms empower digital citizens to become creators of their own applications, representing a transformative wave in technology that allows business users to engage with technology without necessitating IT expertise. We assert that no-code software signifies the future of software development, offering companies reliable, scalable, and user-friendly solutions.

The synergy of diverse skills, encompassing both technical and business acumen, dismantles hierarchies and functional silos, fostering a culture of innovation and business agility. Our belief is rooted in the notion that automation should be accessible to all, enabling individuals to work smarter, not harder. No-code iPaaS platforms emerge as catalysts, simplifying and expediting process automation in the dynamic M&E market, where constant innovation and the need to rethink or refine business processes are prevalent.

In the realm of media operations, characterized by complexity and unpredictability, no-code iPaaS platforms stand out as a solution, fostering component reuse, interoperability, reduced engineering costs, and the scalability and resilience of solutions. Insights from technology influencers and experiences in other mature IT markets converge to affirm that no-code iPaaS platforms form a robust foundation for accelerating the digital transformation of M&E organizations, steering clear of vendor lock-in.

References:

* https://movielabs.com/production-technology/the-2030-vision/

** https://www.gartner.com/doc/reprints?id=1-2CF6S3ZK&ct=230130&st=sb

     

Synamedia – Why flexing your business agility muscles with SaaS matters

Simon Brydon, Head of Sport – Video Network, Synamedia

Headbands and sweatbands. Legwarmers and leotards. Step aerobics and Jazzercise. Fitness trends come and go, but looking after customers’ needs and keeping an organization in good shape is always in vogue.

Times are tough for the media and entertainment industry. Alongside the ongoing debate about the death of linear and Pay-TV, there is growing unease about the underlying economics of streaming – especially for sports – with subscription income flatlining and high churn rates continuing as end-users evaluate whether they are getting value for money.

Keeping up with the industry’s relentless pace of change and staying competitive by scouting out ways to lower total cost of ownership while adding new monetization features to keep the balance sheet healthy have never been more critical.

From vanity to sanity

Agility. Flexibility. Reliability. Resilience. With Olympic fever mounting, these sound like the fitness mantra of top athletes fixing their sights on the medal podium at Paris 2024. But, closer to home, they are watchwords for content owners, operators, providers and streamers eyeing up their bottom line to build in more efficiency and flexibility to save money.

Take streaming services. Having been on a massive land grab for customers, they are now focusing on cutting costs and re-evaluating their business models to work out the economics of streaming. Chasing turnover for vanity has become chasing profit for sanity, underlining the need for effective approaches to streaming which help lower total cost of ownership.

Given the peaks and troughs nature of sport, the smart money is on deploying solutions to maximize resources. For example, for sports with spotty demand or catch-up TV or FAST channels running in the middle of the night, it makes sense to take advantage of technologies which now make it possible to reduce processing costs by only transcoding video when a stream is requested by an end user.

For more sustained live demand where quality and reliability are paramount, highly scalable solutions offering pinpoint picture perfection with dependably low latency are called for, with best-in-class encoding and lower bitrates to reduce CDN egress costs.

A new SaaS regime

The latest tech developments point to a move by video platforms to a multi-tenant cloud SaaS platform approach, powered by best-of-breed streaming technology. This makes good commercial sense by enabling easily deployed, flexible solutions that can be adopted on a case-by-case basis.

Previously, service providers had little alternative to customized, complex deployments involving heavy SDKs and pre-defined, sequential phases of testing with no overlap between them. But in today’s rapidly evolving business and technology environment, it’s simply unsustainable to endure many months of acceptance testing – a necessary evil with so many consumer devices to support – for the launch of a single feature.

SaaS, by contrast, puts customers firmly in control. Flexible, affordable, and scalable – with the onus on the software provider to host and maintain the service – it means providers can pay as their ambitions scale, whilst reaping the benefits of new product enhancements, features and functionality added as frequently as multiple times a day.

Some early adopters are already turning their backs on inflexible, bespoke technology deployments and instead embracing SaaS solutions. Interestingly, these are not just the born-in-the-cloud streaming services that might first jump to mind, but also more traditional Pay-TV providers and telcos.

A case of business agility: how Synamedia flexed its SaaS muscles

Not only are this velocity and agility game-changing for our customers, they are also critical for us as a company. Our pace of product delivery has increased by an order of magnitude, and we have also evolved our development approach to consider the complete customer experience.

Customer first has always been a key mantra for Synamedia – arguably, tailor-made software is the ultimate expression of being a customer-obsessed company. But where once our platform deployments were bespoke for each customer, moving to a SaaS model means customization can happen at the edges.

We don’t just talk the talk, we also walk the walk. Changing to a SaaS model has not only involved the technology shift to SaaS architectures such as public cloud, microservices, multi-tenancy, CI/CD, open APIs, and standard services but has also changed our company’s cultural mindset.

It has impacted every department including the way we sell, support, and contract with customers – from the creation of our front-line customer success organization and practices to how we provide backend support in HR, legal and finance, for example deploying more sophisticated billing mechanisms based on different licensing models like pay as you go.

Going for gold

 The good news is that video consumption is still on the rise.  The potential rewards are huge for the players who are smart about how they deliver to viewers.

From just-in-time video processing and IP distribution, to targeted advertising and launching new services at speed, this move to SaaS allows content owners and operators to focus on their core values and areas of expertise rather than on operating a video system.

Our industry is a late adopter of SaaS and one of the main reasons is that it requires changes in both the vendor community as well as for video service providers. Put simply, operators cannot realize the benefits of SaaS without changing their operating model to accommodate a high velocity and multi-tenanted approach, most notably acceptance testing.

Those that don’t change will be overtaken by more agile competitors, maybe not in the short run, but inevitably over time. Those that adopt SaaS will give their subscribers a better service and will benefit from a much lower cost of ownership.

Delivery the SaaS way has shifted Synamedia’s cultural mindset, and our internal teams have reorganized to support different priorities and responsibilities. In this golden age of content, where consumers want to change what and how they watch in the blink of an eye, it’s time for video service providers to shape-up, rev-up their SaaS routine and be ready for all the action.

Signiant – How AI is becoming a vital part of the intelligent transport workflow

Ian Hamilton, Chief Technology Officer, Signiant

The use of AI at all levels in the broadcast chain increased with dizzying speed throughout 2023 and into 2024. But while the likes of ChatGPT and its generative AI cousins have stolen much of the limelight when it comes to assessing the technology’s impact on the industry, in truth there is likely more work being done in other areas. AI has been part of the fabric of the industry for several years now, and at Signiant, this has become a vital component of what can be referred to as an intelligent transport workflow.

Machine learning systems learn and adapt using statistical models to make inferences from data. These systems take a set of inputs and make predictions for corresponding outputs; trained on examples, they can predict outputs for inputs they haven’t seen before. More data drives better predictions and, as a multi-tenant SaaS vendor, Signiant aggregates a lot of data that can be anonymized and analyzed for the benefit of all our customers.

With file transfer systems, many parameters can be adjusted to increase (or decrease) the resulting transfer rate, which ultimately depends on a series of interactions with the operating environment. The operating environment encompasses factors like dataset composition, storage capabilities, computational resources, and network conditions. Given an operating environment and a resulting transfer rate as inputs, an ML system can be trained to predict transfer parameters used to achieve the rate, but this on its own isn’t useful for maximizing the transfer rate.

By adding another input representing transfer-rate quality, a form of contrastive learning (using positive and negative examples) can be applied to build a model that predicts transfer parameters likely to drive high-quality transfer rates. This approach relies on labelling the transfer-rate quality of the examples used to train the model, which can be done algorithmically.
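As a loose illustration of the idea – and emphatically not Signiant's patented implementation – the sketch below trains an off-the-shelf classifier on invented examples pairing environment features and transfer parameters with a good/poor rate label, then scores candidate parameter sets for a new environment.

```python
# Toy sketch: learn from past transfers which parameter choices produced good
# rates in a given environment, then score candidates for a new environment.
# Features, values and the model choice are invented for illustration only.

from itertools import product
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each historical transfer: [bandwidth_mbps, rtt_ms, loss_pct, cpu_cores,
#                            use_udp (0/1), n_streams] -> label: 1 = good rate
X = np.array([
    [1000, 80, 0.1, 16, 1, 8],
    [1000, 80, 0.1, 16, 0, 2],
    [100,  10, 0.0,  4, 0, 2],
    [100,  10, 0.0,  4, 1, 8],
])
y = np.array([1, 0, 1, 0])

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Score candidate parameter combinations for a newly measured environment.
env = [500, 120, 0.5, 8]                          # environment features
candidates = list(product([0, 1], [1, 2, 4, 8]))  # (use_udp, n_streams)
scored = [(model.predict_proba([env + list(c)])[0][1], c) for c in candidates]
print(max(scored))   # parameter choice with the highest predicted quality
```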

Changing parameters

Operating environment variables impacting the speed of file transfer include: networking factors (available bandwidth, latency, loss and jitter); compute factors (number and speed of CPUs and GPUs or other hardware offload capabilities); storage factors (read rate, write rate, optimal block sizes, type of storage and protocol support); and sizes of files in the dataset.

Signiant implements a transport-layer protocol deployed on top of UDP that outperforms TCP on long-distance, high-bandwidth networks. Whether to use Signiant UDP or TCP is one key parameter impacting performance in different environments. On top of transport-layer optimizations, Signiant uses an HTTP-based, parallel-stream, application-layer transfer protocol. Multiple streams facilitate scaling out the transfer across multiple servers, among other benefits. Large files can be split across multiple streams, or multiple small files can be sent on a single stream. At the application layer, the version of the HTTP protocol, the number of streams used, and the amount of data sent over each stream are adjustable parameters.

As mentioned previously, defining a “good” transfer is a key part of the system. At the simplest level, a good transfer rate is a rate that is as close as possible to the available end-to-end bandwidth of the environment. The real available bandwidth isn’t necessarily known, but it can be estimated with reasonable accuracy leveraging operating environment instrumentation.
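One simple way to produce such labels algorithmically might look like the following sketch, where a transfer is marked "good" if it achieved most of the estimated available bandwidth; the 0.8 threshold is an arbitrary illustrative choice.

```python
# Label transfer-rate quality by comparing the achieved rate against an
# estimate of available end-to-end bandwidth (from environment instrumentation).

def label_transfer_quality(achieved_mbps: float,
                           estimated_available_mbps: float,
                           threshold: float = 0.8) -> int:
    """Return 1 ('good') if the transfer used most of the estimated bandwidth."""
    utilization = achieved_mbps / max(estimated_available_mbps, 1e-9)
    return 1 if utilization >= threshold else 0

print(label_transfer_quality(800, 1000))   # -> 1 (good)
print(label_transfer_quality(200, 1000))   # -> 0 (poor)
```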

The advantages of AI

The main benefit of using AI tools lies in the elimination of time-consuming manual testing and tuning. The possible combinations and permutations of transfer configurations are practically unlimited. Running enough test transfers to cover the appropriate combinations and eliminate run-to-run variability is time-consuming and wasteful when contrasted with looking at real-world transfer data obtained under similar conditions. ML simply provides an effective and well-understood mechanism for analyzing this data.

To quantify success, we look at the percentage of transfers achieving good transfer rates before and after applying the model. Given the objective of the system is to maximize this, a significant improvement in this metric (based on a given classification method) shouldn’t be surprising. It’s also somewhat self-referential.

A perhaps better measure of success that can be translated into a business benefit is the portion of transfers that still benefit from manual performance tuning. This has dropped to effectively zero. Prior to introducing our intelligent transport capability, manual adjustment of transfer parameters was frequently required to achieve transfer rates above 5 Gbps. With our ML-based tuning approach, we regularly see rates over 15 Gbps with no manual tuning.

The current and future state of play

Signiant has one issued patent (USPTO No. 16,909,382) and one pending patent on our “Cloud-Based Authority to Enhance Point-to-Point Data Transfer with Machine Learning”. Signiant has fourteen total issued patents, with two pending patents, covering a broad range of the underlying technologies we use to provide efficient global access to media. At the risk of oversimplifying, the claims of patent 16,909,382 cover aspects of how we collect and process the information necessary to train a model and how we apply the results of that model to new transfers.

This capability is in active use in our Jet product. Jet is used for automated system-to-system transfers, which tend to benefit most from this type of optimization.

As for where we are heading, a challenge for continued refinement is the ongoing collection of training data. We need to make sure we aren’t just implementing a positive feedback loop that reinforces already-established patterns. Additionally, while the initial configuration determined by this system plays a critical role, there is also the opportunity to adjust transfer parameters over the duration of a transfer. For example, just as our transport-layer protocol adapts to the operating environment in real time through its flow and congestion control mechanisms, there is an opportunity to adjust application-layer parameters during the transfer to optimize for variability in operating conditions.

RT Software – Why AI won’t steal our broadcast graphics jobs – but it might change them…

Mike Fredriksen, Commercial Director, RT Software

We have all seen the consternation in the media about the rising challenge of AI in a wide range of industries and the potential for mass job losses as a result. Should we be concerned that the same could happen to workers in the broadcast graphics sector? The trouble with these kinds of sweeping statements is that they cover such a broad set of roles that they become meaningless. To make informed comments we really need to address each niche within ‘broadcast graphics’ separately and look at what AI could do, or is already doing, to see how it affects the users involved. What is true for some areas may be very different for others.

Let’s start with telestration, aka sports analysis. Telestration is the process of adding lines, arrows and highlights to sports action. Although it is done for many sports, we most often see this on high value sports rights such as football. Its aim is to share insights about team tactics and to more deeply engage the viewers at home. By drawing the viewer’s attention to some critical action, or mistake, through the addition of pitch graphics, the broadcaster is attempting to impart inside knowledge about winning strategies.

Historically, the process for adding these lines onto the pitch, shafts of light above players, or arrows that illustrate the ball’s flightpath, was done manually by a human operator. But the task is often complicated because these elements need to be added over moving video. For example, the camera pans and the players run, but the graphics need to appear ‘stuck’ to the players or fixed to a position on the ground while all this is happening.

This requires a fair amount of technical skill for the operator to make a convincing analysis sequence. Although product developers aim to make it as easy as possible, it still requires time, effort and focus from the user. It remains very easy for an unskilled operator to do it badly!

This type of job role is prime territory for AI to move into. It’s normally a manual, laborious process that is prone to error, whereas AI can, for example, automatically identify players and attach graphics to them much more quickly and accurately than a human operator. AI can also be used to automatically calibrate a pitch (so graphics have the correct perspective) or track camera movements (where slight errors in how virtual graphics ‘stick’ to the pitch are very obvious to the human eye). While it is possible to do these tasks manually, they are fiddly and tedious, and doing them well requires technical skill and concentration. In contrast, AI performs them consistently and considerably more quickly. Data from RT Software shows that the AI in its Tactic Pro product was between 60 and 150 times faster than the equivalent manual processes.
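To give a flavour of the kind of fiddly work being automated here, the sketch below shows one common building block: estimating a pitch-to-image homography from a few known landmarks so that virtual graphics land in the right place. The point values are invented, OpenCV is used for the mathematics, and in an AI-assisted product the landmark detection itself would of course be automatic.

```python
# Estimate a homography between pitch coordinates (metres) and image pixels,
# then project a pitch position into the camera frame. Point values are
# invented; a real system detects the landmarks automatically.

import numpy as np
import cv2

# Four known pitch landmarks, e.g. penalty-box corners, in pitch coordinates (m).
pitch_points = np.float32([[0, 0], [16.5, 0], [16.5, 40.3], [0, 40.3]])

# Where those landmarks appear in the camera frame (pixels) - invented here.
image_points = np.float32([[412, 530], [780, 505], [905, 620], [360, 660]])

H, _ = cv2.findHomography(pitch_points, image_points)

# Project an arbitrary pitch position (roughly the penalty spot) into image
# space so a graphic can be rendered at the correct, perspective-true place.
pitch_pos = np.float32([[[11.0, 20.15]]])
image_pos = cv2.perspectiveTransform(pitch_pos, H)
print(image_pos.ravel())   # pixel coordinates for the graphic anchor
```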

Some reading this might argue that AI is therefore de-skilling the role of the user, and potentially even eliminating it completely. What are operators for, if not to carry out these essential but ultimately slightly fiddly jobs? Well, RT Software argues that the AI does not eliminate the operator or reduce the skill level at all. What it does is change the type of skills the operator has to have – and, the company argues, it changes them for the better.

Let’s back up for a moment and consider what skills an operator really ought to have. What are we asking them to do, really? It boils down to informing the viewers at home about the player’s skills, team tactics, management strategies and so on. It’s really about communicating an insider’s perspective to the layperson.

To achieve this, pre-AI, the operator needed to be adept at defining motion paths and configuring the software to achieve their aims. Post-AI, the operator no longer needs those technical skills. Instead, the job role can be defined around professional football experience. Someone who has been a football trainer or player is now within the scope of the telestrator role, because handling the idiosyncrasies of a technical software product is less important than understanding the game from the inside. What really matters is the professional insight they bring to the role. Surely this is a great step forward for broadcasters and viewers alike?

There is evidence that this change is happening. Broadcasters might have a reasonable expectation regarding the technical ability of their staff. However, the same software packages are also used by professional teams to aid the coaching process rather than for TV broadcasting. Professional trainers need a system that supports their need to explain tactical successes and failures to their players in the training room. The available skills are different, and AI-based telestrators let coaches focus on communicating their thoughts and insights to players more effectively because all the heavy lifting is taken care of.

We now see evidence that this process is going full circle. What may have started with the broadcasters was enthusiastically grabbed with both hands by the teams themselves. The broadcasters now realise they could be using different types of operators with greater professional insight. Adopting AI-based systems actually helps them in the battle for viewing figures.

Where is all this leading, you may wonder? What’s clear is that we are only at the start of the AI journey. The next stage of AI telestration development is already underway in leading products and will move beyond the current ‘simple’ level of telestration. The next level will see more advanced computation deployed to instantly identify the different teams, understand their shapes and formations, and impart ever greater insights to viewers at home.

I hope this has made the case that just as previous technology changes have led to evolution rather than disaster, so it will be for AI. No doubt each segment is different and the changes will be unique to each type of work, but there will be plenty of opportunities as familiar systems continue to evolve and embrace AI. To find out how RT Software can help, visit www.rtsw.co.uk.

Octopus Newsroom – Building tomorrow’s newsroom

“The new reality of content creation centers on three C’s: Connection, Collaboration and Communication. All done remotely. Octopus Newsroom is the best solution for that reality.” 

Scott Fitzgerald, Director of Sales, North America, Octopus Newsroom

Remote Work

The pandemic forced many companies into a remote work environment with little preparation time. What nobody realized at the time was that remote work would unleash a powerful new way to expand staff, bring in more experience, and cater to audiences. Four years later, that emergency-generated workflow has become a standard production practice.

By hiring remote producers and reporters, stations can tap into a broader talent pool beyond their immediate geographic location. Additionally, remote production can lead to cost savings for stations by reducing overhead for maintaining physical office spaces and equipment.

High-speed internet, cloud-based collaboration tools, and advanced communication platforms have made it possible for producers to work from virtually anywhere.

Octopus, by design, acclimates to a client’s needs. Our system is as adaptive as it is flexible to meet the needs of small newsrooms or network multi-channel powerhouses in any kind of work environment. Octopus software allows content creation in real-time and remotely with access granted to the client’s preferences, not pre-set categories.

Networked Production

This trend involves the convergence of hardware and software technologies to create more flexible, scalable, and efficient production workflows.

One of the key advantages of using network switches over traditional routers is their flexibility and programmability. Network switches can be configured and managed using software, allowing broadcasters to adapt their production workflows more easily to changing requirements.

Another benefit of using network switches is their scalability. Unlike traditional hardware routers that have fixed port configurations, network switches can be easily expanded by adding more switches or ports as needed. Many new production workflows use a hybrid of hardware, software and cloud to create a complete universe of content creation tools.

Furthermore, software-defined networking allows for greater automation and orchestration of production workflows. Broadcasters can use software-defined networking controllers to dynamically configure and manage network resources based on workload requirements, optimizing resource utilization and improving overall efficiency.

Flexible Workflows

Newscasts, webcasts, podcasts, lifestyle shows, streaming, OTT, on-demand, digital content, social media – the expanding list of ways to “be there” dashes any hope of a one-size-fits-all workflow model.

Exemplary systems focus on the fast but not furious. Octopus Newsroom’s software architecture is designed to be scalable, allowing broadcasters to add users, expand storage capacity, and increase processing power as needed. This scalability ensures that production workflows can easily adapt to changes in content demand without compromising performance.

Octopus Newsroom provides tools for customizing workflows to suit the specific needs of each broadcaster. This flexibility allows production teams to tailor their workflows to accommodate varying content formats, delivery platforms, and production requirements, ensuring efficient scaling of production operations.

Dynamic Content Streams

In 2024, it seems laughable there was ever a time when everyone got to focus on one newscast or event a day. Adapting to unscheduled and irregular programming is now the golden goose that wins the content battles in individual markets and nationwide.

  • Real-Time Collaboration: Octopus Newsroom provides a centralized platform for editorial teams to collaborate in real time for breaking news, live events, or other urgent and important content dissemination.
  • Flexible Story Planning: Octopus Newsroom’s story planning tools allow editorial teams to create flexible programming schedules that can easily adapt to irregular content streams. Even with automation, the flexibility of the system caters to live events.
  • Multi-Platform Distribution: Octopus Newsroom supports multi-platform distribution, allowing newsrooms to publish content across various channels, including television, websites, social media, and mobile apps. Story- or rundown-centric approaches to content production allow every exit point of content a place to live in plain view, yet they allow individual teams to focus on their particular stream.

Evolving Newsrooms

One of our favorite journalists often said, “News doesn’t happen in the newsroom.” The time, energy, and operating expense of perpetually running back and forth from interview to scene to station to live shot were always going to burn themselves out eventually.

Tools such as video conferencing, cloud-based collaboration platforms, and project management software enable journalists to communicate, share information, and collaborate on stories without being physically present in the same newsroom.

The need for community and hyperlocal journalism often requires journalists to be embedded within the communities they cover, working from local coffee shops, community centers, or other shared spaces rather than traditional newsrooms.

The ability to distance ourselves from the confines of a newsroom through technology brings us closer to the communities that we serve.

Time Optimization

Optimizing processes and workflows delivers more than just getting the story out there first. It aims to remove the redundancies and exhausting daily tasks that fuel employee burnout.

With the proliferation of digital platforms, newsrooms are focusing on cross-platform publishing strategies to reach a broader audience. This involves repurposing content for various digital, social, broadcast, and non-linear channels.

Through automation, scheduling, content organization, and alerts, you can customize time-sensitive workflows that adapt to the needs of all employees, from the edit bay to the anchor desk to the live shot 50 miles away.

AI Integration

As a trailblazing company, Octopus was one of the first to integrate AI tools into its system without forcing them on clients who aren’t ready to adapt to that landscape. We see Artificial Intelligence as another way to accomplish all of the above goals – efficiency, evolution, adaptability, and saving time.

Automated content creation, curation, and personalization enhance efficiency and engagement. Fact-checking tools ensure accuracy, combating misinformation. AI-driven analytics provide valuable audience insights and guide content strategies. Automated transcription and translation expand accessibility and reach.

Overall, AI integration empowers newsrooms to produce high-quality, diverse, and impactful journalism that resonates with audiences in any global community.

 

Net Insight – Automation is shaping the future of media

David Edwards, Product Manager at Net Insight

The media industry is moving at super high speed. New business models, offerings, and business deals are changing the industry dynamics, leading players to rethink their strategies to remain competitive and grow. At the same time, consumer appetite for compelling and enriched content, including shoulder programming, doesn’t seem to be subsiding.

Media companies are faced with a real opportunity and challenge — they need to be ready to manage a significant spike in live streams from acquisition right through to content delivery across platforms to tap into more audiences and revenue. This means more complex workflows and core media networks to acquire and deliver content to a plethora of destinations.

Traditional models can no longer cut it. The next chapter of video production and delivery requires speed and automation. Next-generation software-defined media networks remove complexity and ensure the agility and responsiveness that media organizations need to keep up with the pace of change and take on a leading role in shaping the future of media.

Riding the automation speed train

Launching new channels requires several months of planning and preparation when using legacy methods. Today, bringing new media offerings to market can be achieved at a far more rapid pace. The power of cloud technology has meant that software-defined media platforms and hardware products operating with software deliver the flexibility and agility that media organizations need to innovate and evolve quickly and reliably.

The sheer increase in the volume of content acquired and delivered means that it’s essential for media businesses to be able to control this content seamlessly. This is where automation becomes a business-critical capability. By establishing ‘software layers’ of control and monitoring based on open APIs, media organizations can configure, reconfigure, expand, and monitor systems and networks very quickly and effectively. For example, automated workflows enable a system to be reconfigured simply and rapidly ahead of a big live sporting event. This means the network can scale up fast and add more live feed destinations, tapping into growing monetizable audiences. It can then scale down quickly to accommodate network traffic troughs and avoid the costs of running at full speed unnecessarily.
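As a sketch of what such API-driven reconfiguration can look like, the snippet below calls a hypothetical orchestration endpoint to apply a pre-event profile and then scale back down afterwards; the host, path and payload fields are placeholders, not any vendor's actual API.

```python
# Hypothetical example of scaling a media network up before a live event and
# back down afterwards via an open API. Endpoint, fields and host are
# placeholder assumptions for illustration only.

import requests

ORCHESTRATOR = "https://media-orchestrator.example.com/api/v1"  # hypothetical

def set_event_profile(event_id: str, destinations: list[str], active: bool) -> None:
    """Apply a predetermined configuration: add feed destinations before the
    event, then scale back down afterwards to avoid running at full capacity."""
    payload = {
        "event": event_id,
        "destinations": destinations if active else [],
        "monitoring": "enhanced" if active else "baseline",
    }
    resp = requests.put(f"{ORCHESTRATOR}/events/{event_id}/profile",
                        json=payload, timeout=10)
    resp.raise_for_status()

# Scale up shortly before kick-off, scale down once the event has ended.
set_event_profile("cup-final", ["ott-eu", "ott-us", "affiliate-feeds"], active=True)
set_event_profile("cup-final", [], active=False)
```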

The right workflows and technology foundations deliver ease of use. Media companies can utilize predetermined configurations for specific use cases and set up systems with the click of a button. This level of automation also means that a large video network will raise an alert when there is an error with a service, enabling fast response and resolution that minimizes risk. This is particularly important when it comes to high-value live content, where the stakes are high and any error can prove extremely damaging. Imagine being in the middle of a top-tier live event when a misconfiguration causes the entire network to shut down or an ad not to be served. The reputational and financial damage can be enormous. Automated systems can safeguard against errors by identifying issues and alerting, so the engineering team has full control and visibility over the health of the media network and can intervene to resolve issues as soon as they arise.

Importantly, given that these software-defined media networks and platforms are often powered by the cloud, they can be managed and controlled from anywhere in the world without having to rely on physical presence. This flexibility opens up even more opportunities for remote collaboration, tapping into an expanded pool of talent, and the efficient use of resources.

Automated media networks also create strategic opportunities for media service providers. Launching new media offerings no longer requires building everything from scratch. Automated tools allow media services partners to better productize their services, driving more efficiencies and delivering clearer services to their customers.

Boosting efficiency and unleashing creativity

Next-generation software-defined media networks remove unnecessary complexity from processes, workflows, and networks. As such they reduce costs and drive operational efficiencies. Within a simplified and efficient production framework, content owners are able to be creative with their content. They can not only produce a higher volume of content but also make it more exciting and compelling. From premium Tier 1 live events to companion content for digital platforms, industry players can increase consumer engagement and drive more monetization.

In a complex and dynamic media environment, automation provides content owners with the peace of mind that their content is managed and controlled to the highest standards so that they can focus on business growth.

Automation as an innovation driver

On top of improving operational efficiencies, software-defined media networks unleash unparalleled creativity and innovation with the speed and reliability they deliver. Automation tools that manage the content end-to-end and can adapt media platforms and networks at a moment’s notice are mission-critical — they drive dramatic efficiencies and guarantee the seamless and error-free operation of systems.

Media companies and service providers can leverage simplified and secure core media networks and workflows to gain the control they need to shape their business strategies and take advantage of new monetization opportunities. In a fast-evolving media industry, automation is the innovation enabler for forward-thinking players.

MainConcept – Juggling bitrate, latency and quality in broadcast

Frank Schönberger, Senior Product Manager, MainConcept

The broadcast industry is an incredibly exciting and dynamic place to be right now. Digital transformation is driving media companies to rethink their workflows and innovate, and as the industry continues its transition to IP infrastructure, many are adopting new technologies. Streaming has taken the broadcast industry by storm and has now reportedly overtaken traditional TV and cable viewing. But for a seamless viewing experience, content needs to be delivered with low latency, so viewers see the action on screen as it happens. Getting that balance right can be challenging.

Latency matters

Latency – the time delay between the moment content is captured and when it is displayed to the viewer – can show up as buffering, interruptions, and delays in video playback. Low latency is particularly important for live events such as sports, online gaming and betting, awards ceremonies, news, and concerts. Take sports, for example: fans want to experience the highs and the lows of the action as it happens – not after a delay, when the moment has passed.

Low latency streaming is an important part of delivering a seamless and engaging experience for viewers, but it’s challenging to achieve when delivering content to large audiences. There are many processes introduced between encoding and decoding. While real-time or near-real-time encoding and decoding may be employed, these processes can sometimes be computationally intensive, stacking up the possible delays.

Problem solving with CMAF (Common Media Application Format)

CMAF solves two problems that have been with us for almost 20 years: it decouples the packaging format from the manifest’s signaling, and it reduces the matrix of objects that need to be delivered to clients. It is a standard that emerged from a collaboration between Apple (HLS, .ts) and the DASH community (MPEG-DASH, .mp4) on a common format. CMAF is not a new ABR format and is not an alternative to HLS or DASH: HLS and DASH are descriptive presentation formats, while CMAF is a common container format that both can reference. Its main aim is to close the gap between HLS and MPEG-DASH, so that one segment type can be delivered to any platform and any device.

CMAF alone does nothing to reduce latency. However, one of the nice side effects of the CMAF specification is that it allows for ‘chunking’. Low-Latency/Chunked CMAF (LL-CMAF) is a subset of the CMAF standard that specifically addresses the challenges of delivering low-latency video and audio content over the internet. Chunked CMAF reduces the latency of live streams when paired with certain technologies across the video delivery ecosystem. If you have a use case where you need low-latency delivery at massive scale, CMAF is most likely one of the better approaches.
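A quick back-of-the-envelope calculation shows where the latency saving comes from: with whole-segment delivery the player must wait for a complete segment before playback can begin, whereas with chunked CMAF it only needs the first chunk. The durations below are assumed example values.

```python
# Illustration of the segmentation contribution to latency, ignoring encode,
# CDN and player buffering, which add on top in both cases. Durations are
# assumed example values, not recommendations.

SEGMENT_DURATION_S = 6.0     # typical HLS/DASH segment length (assumed)
CHUNK_DURATION_S = 0.5       # CMAF chunk (fragment) duration (assumed)

latency_whole_segment = SEGMENT_DURATION_S   # must wait for a complete segment
latency_chunked = CHUNK_DURATION_S           # first chunk is enough to start

print(f"Whole-segment delivery: >= {latency_whole_segment:.1f} s before playback")
print(f"Chunked CMAF delivery:  >= {latency_chunked:.1f} s before playback")
```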

A difficult balancing act

Scaling the process to deliver live content to a large audience is where bitrate, the amount of data processed per unit of time, comes into play because by compressing the data, you can reduce the amount of information being transmitted. While this has the desired effect of reducing latency, it also impacts quality because bitrate determines the level of detail and clarity in the content. A higher bitrate generally allows for the transmission of a higher quality image while a lower bitrate usually results in a loss of quality, leading to pixelation.

However, striking the right balance is no easy task because a bitrate that is too high will strain the network and cause buffering issues on playback. While compression can achieve lower bitrate, over-compression may result in an unacceptable loss of quality, because finer details are sacrificed. Additionally, maintaining quality when compressing data is also dependent on the efficiency of video encoding and decoding processes.

Taking the strain without compromising quality

Video streaming codecs such as AVC/H.264 are instrumental in enabling broadcasters and content providers to optimize bitrate while preserving quality. More advanced codecs, such as HEVC/H.265 and its successor, VVC/H.266, employ increasingly sophisticated compression algorithms compared to older codecs like H.264. These algorithms identify redundancies and irrelevant information more effectively, resulting in higher compression efficiency and allowing broadcasters to transmit high-quality content at lower bitrates, reducing the strain on network infrastructure – though at the cost of more computing power.

When delivering a video stream to users who consume content on a range of devices, screen sizes and resolutions, not to mention under different network conditions, content must be encoded in multiple quality layers. Adaptive Bitrate Encoding (ABR) is used to adjust the quality of a video stream in real-time based on the viewer’s network conditions. The goal of ABR encoding is to deliver the best possible viewing experience by dynamically adapting the bitrate of the video to match the user’s needs. This helps to minimize buffering, provide smoother playback, and ensure a consistent user experience across a variety of network conditions.
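In practice, an ABR setup boils down to an encoding ladder plus a selection rule on the player side. The sketch below uses typical assumed resolutions and bitrates purely for illustration, not recommendations for any specific codec.

```python
# Illustrative ABR ladder: several quality layers of the same content, from
# which the player picks based on measured throughput. Values are assumed.

ladder = [
    {"resolution": "1920x1080", "bitrate_kbps": 6000},
    {"resolution": "1280x720",  "bitrate_kbps": 3000},
    {"resolution": "960x540",   "bitrate_kbps": 1800},
    {"resolution": "640x360",   "bitrate_kbps": 800},
]

def pick_rendition(measured_throughput_kbps: float, headroom: float = 0.8) -> dict:
    """Choose the highest rendition that fits within the available throughput,
    leaving some headroom so minor fluctuations don't cause rebuffering."""
    budget = measured_throughput_kbps * headroom
    for rendition in ladder:                  # ladder is ordered highest to lowest
        if rendition["bitrate_kbps"] <= budget:
            return rendition
    return ladder[-1]                         # fall back to the lowest layer

print(pick_rendition(4500))   # -> 1280x720 @ 3000 kbps
```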

However, many of the encoding steps are repetitive, and this is inefficient. In a typical workflow, each encoder goes through the same, basic steps of motion estimation, image analysis and encoding, using the same input image. These are redundant and often unnecessary steps, so if some of these tasks can be combined, processing efficiency is improved and the encoding process for both live and VOD workflows can be simplified and streamlined.

Looking ahead

Broadcasters and video providers must continually adapt to meet the evolving expectations of viewers. And as users seek out ever more engaging and immersive viewing experiences, achieving low-latency, ultra-low latency and near-real time delivery of content is only going to become more important. Delivering the best possible viewing experience means balancing bitrate, latency, and quality so viewers can access content on a variety of devices and platforms.

By embracing cutting-edge solutions and staying at the forefront of industry advancements, broadcasters can navigate these challenges and deliver broadcast-grade content that engages audiences worldwide with a premium viewing experience.

 

LTN – Driving business success with automated versioning technology

Rick Young, SVP, Head of Global Products, LTN

Media companies are actively exploring ways to maximize their content delivery and operations. Technological advancements, particularly through the smart use of metadata and IP distribution, are revolutionizing resource-intensive processes. These innovations aim to reduce the overhead required to create great channels and tailored live events while enhancing the precision and speed of content delivery.

Automation-driven systems must strike a delicate balance. They must be robust enough to tackle the challenges posed by today’s evolving market dynamics yet user-friendly for technical experts and non-technical staff alike. Compatibility with existing infrastructure is crucial to ensure seamless automated versioning without disrupting ongoing operations.

This is not to say it makes certain roles redundant. There will always be a need for on-site engineers, operations teams, and production staffers. Their expertise in equipment maintenance, troubleshooting, and ensuring uninterrupted, high-quality, on-air experience complements automated workflows. But, by harnessing automation to drive versioning technology, media companies will realize the tangible benefits of getting content to market seamlessly and start maximizing monetization potential.

Enhanced content production

With broadcasters holding vast media archives and often sophisticated on-prem investments, a hybrid cloud solution that ties hosted software and on-prem infrastructure together – and collects the information required to build automated processes – will be key to driving success in live event and playout automation.

One example where we are seeing automation create new and creative channels for consumers across platforms is in newsrooms. Because news, at its core, is most valuable when built around live coverage, its workflows are very complex and require automation – but not at the expense of flexibility. With the right automated versioning tools, staff can leverage investments in newsgathering and editorial teams to maximize the value of the material and put out an array of options for platforms and evolving online news consumer demographics.

By automating the channel creation process and the control room, media organizations will reduce unnecessary investments – both capital and operating – and enable teams to focus on getting the best content targeted to viewers. In a landscape that is increasingly demanding, content production will run smoother from the start of the content process to the end, giving media companies control over their content distribution.

Greater flexibility and collaboration

Collaboration is key to success, but this can only be achieved with flexible solutions. Flexibility is a key goal for automation systems, as it enables streamlined remote monitoring and seamless integration into broadcast workflows.

With the right playout automation options, ideally married to content publishing and delivery networks, systems should be able to seamlessly integrate with remote monitoring setups. Whether it’s monitoring live feeds, adjusting content delivery, or responding to unexpected events, automation adapts. It’s the silent partner that ensures uninterrupted broadcasts, even in environments where scaled-down teams are forced to do more with less.

Automation mustn’t be seen as just a tool; it’s an integral part of modern broadcast workflows. From ingest to playout, it can help orchestrate the entire process. As media companies continue to embrace smart technologies, they pave the way for a future where efficiency, accuracy, and creativity coexist seamlessly.

Maximizing cost savings through playout automation

It’s no secret that automation saves money. But a better understanding of how these cost savings can drive business success is important. With budgets and timelines tighter than ever, media companies will be keen to use the savings to enhance their business strategy.

Creative application of automation-powered solutions can take repetitive and time-consuming tasks off staff’s plates. Media companies can use the freed-up resources to give teams more capacity for strategic decision-making and creative endeavors on the content production side of the business. This can drive innovation that unlocks new revenue streams.

Automation is a powerful tool that can also ensure timely project delivery while adhering to budget constraints. In a diverse workforce, each employee brings a unique approach to tasks, and reconciling these variations across multiple team members can be resource-intensive. By embracing automation, organizations minimize manual discrepancies and mitigate risks in critical, repeatable functions. The result is increased efficiency, allowing teams to achieve more with fewer resources.

Fostering business success with a technology partner

Automation is more important now than ever for getting content to air quickly and accurately. Media companies need the right technology to maximize global reach and value, whether live events or fresh content.

It makes sense to consider partnering with a trusted service provider whose automated versioning technology puts efficiency and scale at the heart of its capabilities.

The journey toward automation is a collaborative one, where technology and human expertise converge to redefine what’s possible. A technology partner aligning with your new content product roadmap for both channels and live events is key to driving long-term business success. With next-generation automation technology, organizations can overcome today’s pressing challenges, streamline their operations, and propel their content to new heights while minimizing costs.

Jutel RadioMan

Jutel RadioMan

Originating from the northern landscapes of Finland, Jutel Oy proudly joins the IABM as a new member. As the world’s leading expert in media and radio automation solutions, Jutel excels in the digitalization of radio. The company delivers innovative solutions that streamline radio workflows, enhance media content management, and ensure seamless publication across diverse distribution channels. Serving leading media operators globally, Jutel’s reach extends across Europe, North America, the Middle East, Africa, and Asia.

Radio Botswana – Brand New On-Premise Studio 

Jutel’s main solution, RadioMan®, has represented the company worldwide since 1992 with thousands of users producing media for various broadcast channels around the clock.

Jutel provides comprehensive operations support including an experienced 24/7 helpdesk. Additionally, the company offers consulting and integration services, complemented by an extensive network of partners.

Founded in 1984, the company quickly established itself in broadcast systems integration and in the manufacture of high-quality audio consoles and telecommunications equipment. Jutel initially delivered turn-key radio station solutions to commercial stations across Finland. These solutions encompassed everything from acoustics and studio systems to audio processing and STL installations, even including transmitter sites complete with antenna towers.

In 1985, Jutel expanded its portfolio by acquiring the high-end Kajaani Audio broadcast consoles product line, later branching into telecom equipment manufacturing. This included products such as high-precision frequency standard systems and advanced RF design for cellular systems.

This experience with turn-key broadcast integration, together with the popularization of personal computers, enabled Jutel to begin developing digital audio platforms with full workflow support. The latest RadioMan 6 solution represents the sixth generation of this full-range broadcast solution, showcasing Jutel’s continuous innovation and commitment to advancing broadcasting technology.

RadioMan® Solution: The backbone of Jutel services

To date, RadioMan® is the world’s first and only full-range radio-as-a-service solution, capable of fulfilling the diverse broadcasting requirements of both major national broadcasters and smaller local stations. Its exceptional portability and scalability position RadioMan® as a leader in radio automation and significantly improve the user experience for clients’ staff.

Radio Kaleva – Cloud-based commercial station

RadioMan® is a full-range cloud-native radio broadcast automation system designed for both commercial and large national broadcasting. In addition to cloud deployment, it provides the flexibility to be deployed in hybrid or on-premises environments. The full-range solution encompasses strategic and daily planning, audio and news production, music management, and multi-channel location-independent on-air activities. It includes built-in contribution and distribution controls, and archiving solutions. RadioMan® supports browser-based user interfaces accessible on both workstations and mobile devices.

RadioMan® is utilized in a variety of broadcasting environments, from large multi-studio, multi-channel IP-based broadcast corporation facilities and networked regional broadcasters to newsroom settings and small cloud-based commercial stations. Its web browser accessibility ensures ease of use for both on-site and remote broadcast operations, offering a flexible solution for today’s dynamic broadcasting needs.

The advantages of the RadioMan 6 platform include complete location independence, sustainable audio production, straightforward deployment and maintenance, the ability to combine various workflows for different broadcasting profiles, simplified studio structures and installations, and inherent support for remote operations, remote contributions, and built-in distribution control.

Latest member of the Jutel family: RadioMan® Clipper

RadioMan® Clipper is the latest addition to the Jutel solutions family. The Mobile and Web Audio Production Platform development was driven by the goal of offering a unified workflow from mobile interviews to ready-for-distribution audio content. RadioMan® Clipper serves as an all-in-one platform, enabling audio journalists to record, trim, transfer, edit, and manage audio seamlessly on iOS and Android devices, as well as through browsers on laptops.

RadioMan® Clipper Mobile and Web Audio Production Platform

Clipper enables radio journalists to streamline their workflow, removing the necessity for multiple audio production tools. This allows broadcasters and audio content producers to simplify their operations by not having to maintain a diverse set of tools. Consequently, the Clipper toolset reduces costs, decreases carbon footprint, and enables sustainable audio production.

The Clipper environment seamlessly integrates with the RadioMan 6 platform, yet it also offers flexibility for deployment as a standalone service for broadcasters or as an OEM offering to third-party broadcast systems.

AI solutions: streamlining radio operations for efficiency

RadioMan® solutions, featuring cloud-based architecture and built-in REST API interfaces, provide a unique platform for incorporating AI services into broadcast workflows. Built-in capabilities for remote voice-overs, text integrations, and automated recorders/players facilitate the rapid development of AI-assisted, streamlined workflows.

Broadcasters are already leveraging the RadioMan 6 platform to air AI-generated news. The RadioMan® Clipper Multitrack Audio Editor now includes support for generating speech directly onto an editor track from text notes, significantly speeding up the production of audio contributions. This is particularly effective as algorithms for generating speech in a specific person’s voice continue to advance rapidly. Additionally, Clipper offers speech-to-text capabilities for the automatic generation of metadata notes.
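To make the integration pattern concrete, the sketch below shows how a REST-style broadcast API could, in principle, be scripted to render speech from text notes and place the result on an editor track. Every endpoint path, field name, and identifier here is a hypothetical assumption for illustration only; it is not the actual RadioMan interface.

```python
# Hypothetical sketch of scripting a text-to-speech contribution against a
# REST-style interface. Endpoints, fields, and auth are assumed, not real.

import requests

BASE_URL = "https://radioman.example.com/api"   # assumed base URL
HEADERS = {"Authorization": "Bearer <token>"}    # assumed auth scheme

# 1. Submit a journalist's text notes for speech generation (assumed endpoint).
tts_resp = requests.post(
    f"{BASE_URL}/tts/render",
    headers=HEADERS,
    json={
        "text": "Top story: the city council approved the new transit plan.",
        "voice": "news-anchor-1",
    },
)
tts_resp.raise_for_status()
clip_id = tts_resp.json()["clipId"]              # assumed response field

# 2. Place the generated clip onto a multitrack editor session (assumed endpoint).
place_resp = requests.post(
    f"{BASE_URL}/editor/sessions/1234/tracks/2/clips",
    headers=HEADERS,
    json={"clipId": clip_id, "startSeconds": 0.0},
)
place_resp.raise_for_status()
print("Clip placed:", place_resp.json())
```

The same pattern runs in reverse for speech-to-text: recorded audio is posted to a transcription service and the returned text is attached to the clip as metadata notes.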

Broadcasting and audio storytelling are increasingly merging into a unified field. While AI significantly boosts productivity, the human touch remains essential in audio journalistic work.