Amagi – Revolutionizing live sports and events coverage with unified cloud workflows

Kalaivani Sivasankaran, Senior Director – Product Marketing, Amagi

In 2023, live broadcasts overwhelmingly topped the charts of the 100 most-viewed telecasts, with a clear preference for live sports, which claimed 56 of the highest rankings. Remarkably, coverage of the Oscars and the Grammys stood out as the leading entertainment broadcasts. The enduring interest in live programming can be attributed to its unique offering of immediacy and real-time engagement. The format excels at delivering the excitement of witnessing events unfold in the moment, creating a compelling sense of participation and community among viewers. So it is no surprise that the live streaming sector is expected to experience a significant boom, with projections estimating its value to reach $3.21 billion by 2027.

In recent years, digital channels have risen to match cable TV’s audience share at 39%, signifying the growing role of direct-to-consumer (D2C) streaming apps in delivering live events. Innovations like Sky Sports Germany’s vertical streaming for TikTok and Disney+ Hotstar’s immersive live cricket experience showcase the shift towards engaging digital broadcasts. This trend emphasizes the need for media companies to integrate digital strategies with traditional live broadcasting to attract and retain viewers, especially younger audiences who prefer consuming content across various digital platforms.

Given the modern workflow needs highlighted above, the limitations of traditional fixed hardware infrastructure become apparent. In response, cloud-based workflows emerge as the pivotal solution. The shift to cloud technology enables events to be streamed globally with greater flexibility and scalability, transforming how live events are produced, distributed, and consumed to meet the demands of digital platforms.

Cloud workflows & automation for modern live sports coverage

Live events vs. ROI

Traditional broadcasters face significant hurdles in covering live events. Setting up dedicated production workflows and studios is costly and cuts into profits. Additionally, operational limitations make it challenging to cover multiple events at once, especially impacting Tier 2 or Tier 3 events like local sports, community events, regional award shows, concerts and niche conferences.

Expensive on-ground production costs, particularly for Tier 2 and Tier 3 events, often outweigh the revenue generated, making it impractical to deploy costly crews and equipment. On-demand cloud production offers a promising solution for efficient event coverage without extensive on-site infrastructure. Furthermore, the flexibility to have producers, directors, and operators work remotely from anywhere in the world enables access to the best available talent or the most cost-effective options, ultimately enhancing production quality while optimizing resource utilization.

Support for complex live workflows

In sports or news broadcasting, several key factors are paramount to ensuring a seamless, high-quality experience, and it is essential for a cloud-based live system to encompass them. These include maintaining an operator experience similar to on-premises setups, ensuring glass-to-glass low latency for real-time transmission, implementing multi-region redundancy to guarantee high availability and minimize downtime, and offering the flexibility to integrate external workflows such as real-time graphics and live captioning. Moreover, the ability to deliver high video quality, including HDR 4K and Dolby Atmos support for premium events, is crucial for providing an immersive viewing experience.

Additionally, cloud-based solutions should offer the capability to quickly spin up and spin down resources for specific events without the need for a fixed infrastructure while also enabling the migration of all hardware infrastructure to the cloud, including switchers, routers, multi-viewers, graphics systems, captioning systems, recording systems, playback systems, and statmux.

Live to VOD

Other key elements that help amplify the overall viewing experience include automation techniques such as digital playlisting and live-to-VOD conversion as the live event progresses. These methods ensure continuous viewer engagement by enabling broadcasters to populate their schedules around the clock with relevant and captivating content. This adaptability allows broadcasters to produce varied live segments for both linear and on-demand channels, again maximizing revenue.

Efficient & Enhanced Operations

Cloud workflows significantly boost operational efficiency, slash costs, and elevate the live viewing experience further by automating critical tasks like content scheduling, distribution, and ad insertion. With automation ensuring smooth, error-free delivery, operators and staff can concentrate on strategic and creative priorities needed to elevate programming quality.

Cloud-based AI/ML workflows facilitate the automatic generation of closed captions for live events, breaking language barriers for a global audience. Additionally, automated insertion of graphical elements and interstitial content maintains a cohesive viewing experience and creates opportunities for personalized content delivery. By harnessing data-driven graphics and targeted advertising, organizations can deliver a more tailored viewing experience, ultimately enhancing viewer engagement and loyalty.

Powering innovative storytelling & powerful data analysis

The shift toward unified cloud workflows and automation isn’t just a passing trend – it’s essential if broadcasters want to thrive in today’s digital landscape. With traditional TV facing challenges and streaming platforms gaining dominance, adopting these workflows is crucial for efficient and cost-effective live sports and event coverage.

With access to advanced production tools and real-time analytics, broadcasters can deliver personalized content experiences that actually resonate with individual viewers. Whether it’s interactive graphics, multi-camera angles, or augmented reality overlays, cloud workflows empower broadcasters to push the boundaries of creativity and deliver unforgettable moments to audiences worldwide.

Media companies are realizing the benefits of cloud-based solutions, which enable automated content versioning for distribution across multiple platforms. They’re also leveraging these tools to tailor coverage to diverse audiences, maximizing their return on investment and digital reach.

 

 

Accedo – Can AI support the transition towards a more sustainable video ecosystem?

François Polarczyk, Sustainability Director, Accedo

The OTT industry has undergone some major changes over the past few years. Market growth has slowed somewhat compared to previous years, and video providers have broadened their monetization strategies, shifting focus from subscriber growth to profitability. Despite this, the OTT video industry remains buoyant; according to analysis by Statista, the industry is projected to show an annual growth rate of 6.30% between 2024 and 2029, reaching US$429.40bn by 2029. This change of focus towards profitability is driving service providers to deliver a better experience for viewers and optimize their services. However, there is a need to balance this drive for profitability with the industry-wide need to transition towards a sustainable video ecosystem.

This is a complex challenge, and we know that it will take considerable innovation. Might the rapidly accelerating innovation we’re seeing around AI help the industry to pave a sustainable path? Already we’re beginning to visualize the impact that AI might have on content creation, OTT design, engineering, delivery and consumption. How might AI support the industry to respond to the challenges of today to make the transition towards a more sustainable video ecosystem tomorrow? Clearly, the potential for AI to provide positive change is huge.

Reducing the impact of content creation

Content production is responsible for producing huge amounts of carbon dioxide. Environmental organization albert estimates that the production of a TV show typically produces tens of tons of CO2 while a feature film might produce tens of thousands of tons. A huge amount of resources go into set production, not to mention the carbon emissions from energy consumption. If more sets can be created virtually with the help of generative AI, fewer physical sets need to be created which would reduce resource consumption and waste generation. There is also potential that virtual sets could lessen the need for location filming, which would reduce emissions from transporting people and equipment, as well as reducing emissions produced by generators.

AI could also be used to enhance sustainability in OTT content creation by helping producers determine which production processes are the most efficient. This can lead to more efficient use of energy, making production methods more sustainable. Content producers are already using AI to streamline post-production and localization by automating time-consuming tasks. As more AI powered tools come to market, we’ll no doubt see higher levels of automation across the content supply chain, which ultimately will contribute to making the industry more energy efficient.

Optimizing software development and design

Just as AI-driven solutions are already playing an integral part in the post-production process enabling content to be produced quicker, generative AI is also being used by software developers to optimize the OTT video service development process. Already we know that AI can help developers to write code more efficiently by automating repetitive coding tasks and speeding up the review process. Might AI also help developers to optimize code so that the computational requirements of the video software are lowered? It’s reasonable to hope that AI can help not only with making the practice of software development itself more sustainable, but also with helping to reduce energy consumption further along the value chain.

There’s also potential for AI to positively impact the product design process, enabling services to be designed so that user experience and energy efficiency are always optimized and balanced in harmony with one another. Perhaps AI-powered tools could help individuals and households understand how their viewing behavior and decisions affect their carbon footprint.

Delivering content in the most energy efficient way

Content needs to be delivered to viewers in the most energy efficient way possible. We’ll likely see AI-powered video compression algorithms helping to reduce the size of video files, without compromising quality. This has the potential to help lower bandwidth requirements and reduce energy consumption during data transmission.

Additionally, by using AI algorithms to analyze large datasets around usage and network conditions, video providers will be able to better understand energy consumption patterns and identify inefficiencies in real-time in order to dynamically optimize content delivery.

AI-powered tools are already coming to market that allow Content Delivery Networks and even data centers to optimize network efficiency while reducing energy consumption. By predicting resource-intensive workloads, AI can also help with scalability and elasticity. Video providers will be able to automatically optimize the routing of data through the network, choosing paths that consume less energy. This will be particularly valuable for large Content Delivery Networks, where efficient routing can lead to substantial energy savings.

Content discovery is another issue that impacts on the energy consumption of streaming. As the volume of available content increases, people spend more and more time looking for something to watch, which means TVs are powered on for longer periods of time. One concept that we’re working on to address this issue is to incorporate AI-powered content discovery that would reduce energy by dimming the entire screen while also getting the right content in front of the viewer, faster.

Looking ahead

While there is little doubt that AI can help tremendously with efficiency, training and running large AI models does itself consume huge amounts of energy. A recent study concluded that by 2027, global electricity consumption relating to AI could increase by 85 to 134 TWh annually, which is comparable to the annual energy consumption of a small country. Although significant, if this energy can be generated from renewable sources, then that obviously goes a long way towards reducing its impact.

There are indicators that AI may well be a technology that could help on both those fronts: firstly by helping video providers enhance UX and service quality to deliver more value to consumers, and secondly by enabling optimization at all stages of the value chain to improve efficiency and reduce energy use.

 

Mike Purnell

We were deeply saddened to learn of the passing of Mike Purnell, who died suddenly on 6 February aged 78. Mike served on the IABM Board from 2011 until his retirement in 2015 and was an enthusiastic supporter of the MediaTech industry’s international trade association.

We are indebted to Chris Smeeton, Managing Director at Argosy, for the following tribute to Mike:

“Mike was born in 1945 in Brighton, where he spent most of his formative years, intertwined with several years in France; this may be where his love of travelling emerged. He started his working life in the electronics industry with ITT Cannon in the 60s, where he established his abiding interest in connectors and first encountered the Cannon XLR audio connectors; these provided his introduction to broadcast technology.

“Mike was a family man and was the proud father of two sons, Andrew and Nick. Mike and his wife Pauline set up a home in Buckinghamshire and it was from here that Mike was to establish Argosy, alongside his business partner Doug Julley, soon to be joined by long-term friend Ken Eckardt, specialising in serving the Broadcast Industry.

“Their company started like many small firms: while one person quoted customers and processed orders, the other loaded the van and drove into London to deliver the products.  As their customer base grew, Mike and Ken were able to upsize into bigger premises in Stokenchurch and then latterly in Long Crendon with branch offices in Salford and Dubai. Intensely patriotic, Mike was at his core an internationalist; he thrived on traveling overseas to work alongside people from different backgrounds and cultures, building close relationships with many of the people he met.

“Mike and his wife Pauline became fixtures of the Broadcast sector; Pauline would often accompany him on business trips to places as far flung as Las Vegas, Amsterdam and Dubai.

“Mike’s business acumen, intelligence, courteous nature, loyalty, generosity and humour were legendary across the Industry. He built Argosy around a family ethos, bringing youngsters in, training and supporting them – Mike always matched his words with deeds. He funded OU degrees for me and my colleagues, and embraced lifelong learning: for example, I was sent on management, marketing, contract law and board courses as my career with Argosy progressed. Mike gave us the skills, time and encouragement – a father figure for us all.

“Even after he retired in 2015 following the sale of Argosy, his invaluable mentoring and advising continued; he was very helpful in the process of us buying the company back again in 2018, and continued to be on hand to help right up until his passing. I am determined to rebuild the company in the image Mike and Ken had set for it over their 30-year tenure. Mike was an inspiration, a father figure and true friend to me, and I will always treasure the memory of our friendship.

“Mike also found time to make an active contribution to our trade association, IABM, of which he was a strong supporter. He found the collegiate environment of working with his peers to be very valuable, building synergies with other member companies. His commitment to IABM was illustrated by his sitting on the IABM Board for a number of years.

“Outside of work, Mike enjoyed a game of golf and was a member of Round Table and 41 Club.  Good company, good food and a glass of good wine were definitely on the menu for a night out with Mike.

“Our abiding memory of Mike will always be of a patriotic, family-orientated man, who loved his family and friends alike and treated his colleagues, friends and business associates with love, affection and support.”

 

The news of Mike’s death brought in tributes from an enormous number of people around the world – an acknowledgement of the impact he made on so many people over so many years. Here are just a few examples:

Mark Barkey, Actus Digital

A very dear, loyal, trusted friend, icon in the world of broadcast. To me a mentor, the quintessential gentleman who wrote the book on how to deal with customers.

Pat Hoyle, JP Broadcast

A great loss to the broadcast engineering world. Mike was always a genuine ‘people person’; he had time to understand your requirements (whether large or small) and made the supply chain for my projects at Channel 4 much easier.

Chris Lund, Lund Halsey

Mike was one of the legends of our industry. A hugely respected, generous of spirit, wise colleague and good friend. And a lot of fun over the decades.

Paul Wallis

Wonderful man, great sense of humour, and always a keen eye for business. I am happy to have shared some wonderful experiences with Mike.

Lucinda Meek, IABM

Mike was a kind and lovely man. He was a great supporter of IABM and I sat with Mike on the IABM board. He was always courteous and extremely helpful – going above and beyond in his help to the IABM team. Mike was also great fun to be with. Condolences to his family and his Argosy family.

Zixi – Factors contributing to the TCO of streaming at scale

John Wastcoat, SVP, Strategic Alliances and Marketing, Zixi

In the dynamic world of video streaming, media organizations are constantly seeking efficient and cost-effective solutions to manage their large-scale implementations. One of the key metrics that has to be met to validate any purchase decision is Total Cost of Ownership (TCO). And, like Maslow’s famous Hierarchy of Needs, TCO analysis must start with foundational requirements.

Unfortunately, many media companies suffer from what we can term upside-down thinking. They start at acquisition cost and never look at the total picture. As we will show below, solutions that utilize an open-source protocol are assumed to be the most cost-effective, but when fully analyzed they often end up being more expensive and inefficient. The ultimate goal for many is monetization, but that is unlikely to occur, or to be optimal, without QoS and QoE. Without analyzing the full stack of costs leading up to monetization, upside-down thinking takes hold, and benefits are either not realized or fall short of what an optimized deployment would deliver.

Considerations for optimizing TCO

There is a broad range of factors to consider when evaluating a solution for streaming at scale. We list some of them below.

Compute Efficiency: Increased compute efficiency plays a pivotal role in reducing the complexity and cost of managing the large-scale implementations of the modern broadcaster. Open-source protocols tend to be under-optimized and suffer dramatically in this regard, leading to snowballing penalties in energy costs and against sustainability metrics. The factor varies with the solution chosen, but our own optimized SDVP with the Zixi Protocol requires only 7% of the compute of other options, so this is a major consideration.
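
As a rough illustration of why this ratio dominates at scale, the sketch below applies the 7% figure cited above to a hypothetical deployment; the channel count and per-vCPU-hour price are assumptions chosen purely for illustration.

```python
# Rough illustration of compute cost at scale using the 7% figure cited
# above. The channel count and per-vCPU-hour price are assumed values.
CHANNELS = 200
VCPU_HOURS_PER_CHANNEL_MONTH = 720   # one vCPU running around the clock
RATE_PER_VCPU_HOUR = 0.05            # assumed cloud price, USD

baseline = CHANNELS * VCPU_HOURS_PER_CHANNEL_MONTH * RATE_PER_VCPU_HOUR
optimized = baseline * 0.07          # 7% of the baseline compute requirement

print(f"baseline ${baseline:,.0f}/mo, optimized ${optimized:,.0f}/mo, "
      f"saving ${baseline - optimized:,.0f}/mo")
```

Whatever the absolute numbers, the saving scales linearly with channel count, which is why the compute factor compounds as deployments grow.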

Reduced Infrastructure Requirements: Improved efficiency eliminates the need for excessive virtual machines, which can also lead to substantial cost savings. By requiring fewer virtual machines to handle the same workload, organizations can significantly reduce operational costs, including compute, engineering, operations, and energy consumption.

Bandwidth Optimization: This is a crucial part of the process, as it can help media organizations reduce their bandwidth consumption and transport stream egress costs by up to 50% while maintaining video quality and reproducing the pre-encode bitrate post-decode. Several technologies can assist here, such as Null Packet Compression and Video Payload Awareness, allowing broadcasters to allocate their resources more efficiently and redirect budget to other critical areas. Needless to say, this cost reduction can have a profound impact on the TCO of video streaming implementations.
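
To illustrate the idea behind Null Packet Compression: MPEG transport streams pad to a constant bitrate with 188-byte null packets on PID 0x1FFF, which can be stripped before egress and regenerated at the receiver. A minimal Python sketch of the stripping side (a simplification, not Zixi's implementation) might look like:

```python
# Sketch of the idea behind Null Packet Compression. MPEG transport streams
# pad to a constant bitrate with 188-byte null packets carrying PID 0x1FFF;
# these can be dropped before egress and regenerated at the receiver.
TS_PACKET_SIZE = 188
NULL_PID = 0x1FFF

def strip_null_packets(ts_bytes: bytes) -> bytes:
    """Return the stream with null (stuffing) packets removed."""
    out = bytearray()
    for i in range(0, len(ts_bytes), TS_PACKET_SIZE):
        pkt = ts_bytes[i:i + TS_PACKET_SIZE]
        if len(pkt) < TS_PACKET_SIZE or pkt[0] != 0x47:
            continue  # drop truncated or unsynchronized packets
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]  # 13-bit PID from header bytes 1-2
        if pid != NULL_PID:
            out += pkt
    return bytes(out)
```

On a heavily padded stream the stuffing can be a large fraction of the bits on the wire, which is where the egress savings come from.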

Media Processing: In an ideal scenario, transcode capabilities should be usable anywhere in the media supply chain: on-prem, in the cloud, at the edge, or in a hybrid solution.

This adaptability allows bit rates to be optimized, as high bit rate production signals are reduced when the stream meets a business partner or is prepared for audience consumption.  In live events, linear workflows can leverage low bit rate slates to minimize egress costs in between higher bit rate programming events.

Architecture: As media organizations move to IP and cloud-based workflows, it becomes obvious that architectural design drives project economics. Point-to-point calculations cease to be relevant as workflows scale.

There are several considerations here:

  • Individual point-to-point and Virtual Machine (VM) versus, for example, media server clusters.
  • Unit cost per stream and processed services managed at scale.
  • Hybrid infrastructure that marries on-premises, direct network connections and the cloud.
  • Managing the value of redundancy: Tier 1 content versus Tier 3, and the commercial value assigned to a program’s SLA.

The goal is to be able to optimally leverage all available IP infrastructures to minimize compute and egress costs while providing the base for growth at scale with fractional incremental investment. Multi-cloud deployments are supported, allowing optimized egress rates across vendors while providing network diversity for redundancy.

Control Plane:

Legacy program channels multiply exponentially as they convert to IP streams. More takers generate more revenue, but managing existing resources across a variety of technical skill sets requires a force multiplier. A software-defined SaaS solution gives finite resources the ability to hyperscale.

As an illustration, we have recently worked with FOX on our Affiliate Program. Implementations like FOX start with a few hundred affiliates/channels, but once you add primary and secondary ingest, intra-cloud streams, streams for eyes-on-glass monitoring, and then egress of primary and secondary streams to the ultimate number of takers, it becomes 3,000+ streams. That can quickly become unmanageable without the right solution in place.
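
The fan-out described above can be sketched with back-of-envelope arithmetic; the channel count and per-channel multipliers below are illustrative assumptions, not FOX's actual topology.

```python
# Back-of-envelope arithmetic for how a few hundred channels fan out into
# thousands of managed streams. All multipliers are assumed for illustration.
affiliates = 300
ingest = affiliates * 2        # primary + secondary ingest
intra_cloud = affiliates * 2   # assumed replication between cloud regions
monitoring = affiliates * 1    # eyes-on-glass monitoring feeds
egress = affiliates * 2 * 3    # primary + secondary egress to ~3 takers each

total = ingest + intra_cloud + monitoring + egress
print(total)  # prints 3300
```

Every new taker or redundancy tier multiplies the stream count again, which is why management effort grows much faster than the channel lineup suggests.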

Time is Money. It is critical to assess how fast a technical deployment can be rolled out. How agile is it through the change process? How easily can Phase 1 be extended to Phase 2? How do you build a modular tech platform across the media enterprise that is future-proof?

System interoperability needs to be considered too. Modern workflows and technical workstreams utilize many vendors across the media supply chain. Established interoperability minimizes deployment risk and shortens traditional project timelines.

Operations:

Once a project is rolled out, the operational phase becomes critical.  How many new tasks have been created? How do operational and technical resources orchestrate workflow individually and between teams?

Monitoring aspects: The traditional approach of “eyes on glass,” with each operator covering 10-12 channels, becomes untenable as hundreds of channels and thousands of streams are delivered. Exception-based alerts driven by telemetry are the new normal: only when there is an issue does a system alert generate a call to action.
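
A minimal sketch of what exception-based monitoring looks like in code; the metric names and thresholds below are illustrative assumptions, not a particular vendor's schema.

```python
# Minimal sketch of exception-based monitoring: telemetry samples are checked
# against thresholds and only violations surface to an operator. Metric names
# and limits are illustrative assumptions.
THRESHOLDS = {
    "packet_loss_pct": 0.5,    # alert above 0.5% packet loss
    "latency_ms": 2000,        # alert above 2 s glass-to-glass latency
    "bitrate_kbps_min": 4000,  # alert below the contracted bitrate floor
}

def check(sample: dict) -> list:
    """Return a list of alert strings; an empty list means the stream is healthy."""
    alerts = []
    if sample.get("packet_loss_pct", 0) > THRESHOLDS["packet_loss_pct"]:
        alerts.append(f"{sample['stream']}: packet loss {sample['packet_loss_pct']}%")
    if sample.get("latency_ms", 0) > THRESHOLDS["latency_ms"]:
        alerts.append(f"{sample['stream']}: latency {sample['latency_ms']} ms")
    if sample.get("bitrate_kbps", THRESHOLDS["bitrate_kbps_min"]) < THRESHOLDS["bitrate_kbps_min"]:
        alerts.append(f"{sample['stream']}: bitrate {sample['bitrate_kbps']} kbps")
    return alerts
```

Because only streams that trip a threshold generate output, one operator can supervise thousands of streams rather than watching 10-12 screens.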

Incident and Root Cause Analysis (RCA): Knowing there is an issue is naturally followed by establishing where it is and how to resolve it. This requires a large, multidisciplinary set of teams to share a common view and toolset that confirms and rectifies the issue while providing all the relevant telemetry and logs to verify the analysis.

The industry average for RCA of a given program channel is $500k per year. The ability to reduce the raw time and resource exposure in RCA, and to allow the system to scale with current resources, means that focus can remain on content creation and audience engagement rather than staffing.

Engineering:

Engineering Subject Matter Experts (SMEs) in broadcast video, networking, security and cloud are inherently scarce and expensive. This makes it more important than ever to inform and mobilize those resources across a range of disciplines. Detailed telemetry, logs and reports allow SMEs to interact efficiently with inter- and intra-company teams without prejudice, using a single source of truth, and to resolve an issue while implementing safeguards to prevent future recurrence.

Migration to IP and the cloud with live video is a new and specialized engineering skill set. From production to distribution, the expertise required in any given workflow varies. A unified software-defined platform allows highly efficient synchronization of engineering disciplines in real time, leading to faster resolution while allowing teams to cover the scale of operation on a global basis. Streaming workflows are deployed at divisional, regional and global levels and require diverse technical resources to design, implement and commission the rollout as well as prepare for operational handoff. A solution’s ability to use scripting, APIs and multi-vendor integrations makes this process highly agile and rapid, while utilizing engineering resources most efficiently and effectively.

Industry transformation: 

Migration to IP and the cloud ideally requires a single, unified software-defined platform that spans the transformation from legacy deployments to modern, advanced IP-based workflows for higher quality content, FAST channels and enhanced viewer experiences. Major use cases that need to be considered include satellite rationalization, OTT, D2C, 5G, and multicast to STB.

In conclusion, TCO optimization for organizations managing large-scale video streaming implementations is a vital consideration. The ideal solution marries efficiency, reduced infrastructure requirements, bandwidth optimization and transport stream egress cost reduction together to empower media companies to maximize their resources, minimize expenses and deliver high-quality video content reliably. By leveraging the correct technology, organizations can embrace a future where cost-effectiveness and operational excellence go hand in hand, opening up new possibilities for growth, innovation and enhanced live video streaming experiences.

www.zixi.com

Viaccess-Orca – Mastering TV monetization with AI-driven solutions

Dror Mangel, Vice President of Products and Services at Viaccess-Orca

In the television world, generating new revenue can be a significant battle. Broadcasters and video service providers face growing competition for eyeballs, changing viewer demands, cost pressures, and an array of regulations, amongst other challenges. As the television industry evolves, broadcasters and service providers need to find new ways to attract viewers, engage audiences, and increase revenue.

This article will highlight some of the key challenges that broadcasters and video service providers face when monetizing content and offer innovative solutions for generating new TV revenue, including personalized FAST channels, targeted TV advertising, tailored content packages, and shoppable TV.

TV monetization challenges

The biggest challenge that broadcasters and video service providers face today is the evolution of TV business models. For years, the rules were clear: consumers paid subscription fees to watch television content and transactional fees to view the latest releases from individual content providers. With competition heating up and viewers having endless choices, more and more video services now offer content for “free” or almost for free. A recent survey from Hub Entertainment Research found that almost six in 10 viewers expressed a preference for ad-supported subscriptions if it meant a lower monthly fee.

Even when a viewer decides to subscribe to a service, the video content is offered at a fixed price, without upsell opportunities. Broadcasters and video service providers need monetization strategies if they want to improve their profitability.

Advanced opportunities for monetization

Despite these challenges, there are ample monetization opportunities available in the TV environment. Let us look at how personalized targeted TV advertising, FAST (Free Ad-Supported TV) channels, upsell content, and shoppable TV are elevating revenue for broadcasters and video service providers.

Targeted TV advertising

Targeted TV advertising is one of the hottest trends in television service monetization in recent years. An Ampere report found that in 2022 addressable TV advertising accounted for more than 40% of the $139 billion generated globally in traditional TV ad revenue, and that share is growing: addressable TV revenue is predicted to climb to more than 60% of total global TV ad revenue by 2027.

To effectively deliver targeted TV advertising, broadcasters, service providers, and advertisers need a streamlined strategy that matches relevant ads with appropriate audience segments. It is important to find a solution that consolidates demand from various sources and offers a broad audience reach. Attaining valuable, in-depth insights into viewers’ behaviors and preferences is also key. Moreover, it is crucial for broadcasters and service providers to safeguard their audience data from potential exploitation by digital advertising giants.

Furthermore, broadcasters and service providers need a holistic view of the potential revenue that targeted TV advertising could bring their business. Utilizing freely available tools such as VO’s Revenue Projection Simulator, broadcasters can pre-assess their potential earnings, taking into account their service’s geolocation, user-base size, and other relevant parameters processed through a unique algorithm.
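
To make the idea of such a projection concrete, here is a back-of-envelope sketch in the same spirit. This is emphatically not VO's algorithm; the function name and every parameter are assumptions chosen purely for illustration.

```python
# Back-of-envelope ad revenue projection. This is NOT VO's algorithm; every
# parameter below is an assumption chosen purely for illustration.
def project_monthly_ad_revenue(users, avg_daily_view_hours, ad_minutes_per_hour,
                               fill_rate, cpm_usd):
    impressions = (users * avg_daily_view_hours * 30   # viewing hours per month
                   * ad_minutes_per_hour * 2           # assume ~2 spots per ad minute
                   * fill_rate)                        # share of slots actually sold
    return impressions / 1000 * cpm_usd                # CPM = cost per 1,000 impressions

# e.g. 500k users, 2 h/day viewing, 8 ad-minutes/hour, 70% fill, $20 CPM
print(f"${project_monthly_ad_revenue(500_000, 2, 8, 0.7, 20):,.0f}")  # prints $6,720,000
```

Even a crude model like this shows why fill rate and CPM, the levers that targeting improves, dominate the outcome.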

Utilizing targeted TV advertising solutions powered by AI and ML technologies, broadcasters and service providers can precisely categorize audiences according to their viewing interests, household composition, life moment events, demographics, and other relevant factors. This approach not only enhances engagement but also extends viewing time, optimizing revenue generation.

The key for targeted TV advertising is dynamic segmentation. By applying AI and ML to first-party TV data, broadcasters and service providers can successfully extract relevant granular segmentation, opening up the possibility of ad replacement in linear primetime content. This method makes it possible to increase the number of ad slots without increasing ad load; charge premium rates; and decrease churn as viewers have been shown to respond more favorably to targeted ads.

Through data-driven AI analysis of usage flows and consumption patterns, a dynamic segmentation model continually optimizes performance and makes segmentation more effective, empowering broadcasters and service providers to deliver more relevant content and, as a result, maximize monetization and boost ROI.
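To make the idea of segmentation from first-party viewing data concrete, here is a minimal, hedged sketch in Python. It is an illustration only, not a description of any vendor's actual system: all viewer names and watch-time figures are hypothetical, and it uses a tiny hand-rolled k-means where a production pipeline would use far richer features (household composition, demographics, life moments) and mature ML tooling.

```python
# Minimal illustration of audience segmentation from first-party viewing data
# using k-means clustering. All viewer data is hypothetical; a real system would
# use far richer signals and a production ML library.

# Watch minutes per viewer across three genres: (sports, news, drama)
viewers = {
    "viewer_a": (420, 30, 15),
    "viewer_b": (10, 250, 40),
    "viewer_c": (380, 20, 60),
    "viewer_d": (5, 300, 25),
    "viewer_e": (15, 35, 390),
}

def kmeans(points, k, iters=20):
    """Plain k-means (no external dependencies), deterministically
    initialized from the first k points."""
    centroids = [tuple(map(float, p)) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[idx].append(p)
        # Recompute each centroid as the mean of its cluster (keep old one if empty)
        centroids = [tuple(sum(d) / len(c) for d in zip(*c)) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

centroids = kmeans(list(viewers.values()), k=2)

def segment(profile):
    """Assign a viewer profile to its nearest segment centroid."""
    return min(range(len(centroids)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(profile, centroids[i])))

for name, profile in viewers.items():
    print(f"{name} -> segment {segment(profile)}")
```

With this toy data, the sports-heavy viewers land in one segment and the news/drama viewers in the other; each segment could then be matched with its own ad inventory.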

FAST channels

FAST channels are rising in popularity. Digital TV Research predicts global FAST revenues for TV series and movies will more than double over the next few years, reaching $17 billion in 2029, up from $8 billion in 2023. However, with hundreds of FAST channels to choose from, there is a high probability of viewers feeling overwhelmed and fatigued when deciding what to watch.

To effectively monetize FAST channels, broadcasters and service providers face the challenge of curating content that aligns with audience preferences while also simplifying discoverability. The emergence of Generative AI technologies offers a promising solution to optimizing content discovery and the creation of FAST channels. Generative AI enables service providers to offer AI-driven recommendations, tailoring the selection of FAST channels to better resonate with audience interests. This approach is instrumental in enhancing user engagement and satisfaction by facilitating easier access to preferred content.

FAST channels are increasingly seen as a new and improved form of linear television, blending the scheduled programming aspect of traditional TV with the benefits of streaming, enabling a more diverse selection and smarter content curation. Moreover, the inclusion of hyper-targeted advertising, made possible by AI-driven insights, enhances the monetization potential of these channels.

Tailored content packages

In today’s TV landscape of super aggregation and the paradox of choice, users want access to their preferred content while keeping their overall TV expenses low. One trend that will accelerate is dynamically changing the super-aggregation services a user subscribes to within a budget cap, based on recommendations from the platform tailored to the user’s preferences.

With tailored content packages, users can access content that meets their needs and availability within their budget. Since viewers cannot watch all available content, the approach is to offer the user, at any given time, the best package aligned with their viewing preferences. For instance, if a new season that highly interests the user starts on one service, their subscription to another service can be canceled, allowing them to watch the most relevant content within their budget constraints. This dynamic pricing strategy is a simple way to prevent subscriber churn and increase margins on content while also enhancing user satisfaction.

Shoppable TV

The TV, which already holds a significant cultural place in many households, is set to go beyond entertainment, with the boundary between content consumption and e-commerce blurring. As advertisers look to transform passive advertising into an interactive shopping experience, shoppable TV is the answer. Shoppable TV combines e-commerce with video content, enabling viewers to purchase items appearing on the screen either through the user interface on the TV or on a secondary screen such as a smartphone or tablet by scanning a QR code. For example, viewers can pause the video content they are watching to see what shade of lipstick an actress is wearing, and then directly purchase that product. Shoppable TV collects data about viewers every time they click to find out more about a product within the video content, helping advertisers close the attribution loop.

AI-Driven solutions are key to unlocking monetization and driving viewer engagement

Broadcasters and service providers are under increasing pressure to keep viewers engaged and boost monetization. To stay competitive, they need to leverage the latest technologies including AI and state-of-the-art, holistic solutions that support a wide range of monetization models, including personalized FAST channels, targeted TV advertising with dynamic segmentation, tailored content packages, and shoppable TV.

 

 

Veritone – Maximizing revenue generation from your sports content

How using AI to manage and license your content can lead to even bigger wins

Gary Warech, Head of Sports & Entertainment at Veritone, Inc.

In the realm of sports, AI technology is helping content managers and rights holders activate their content in a way that enables them to reimagine the value derived from live sporting events, as well as their archival content, unlocking new revenue opportunities in a dynamic landscape.

A shifting playing field

Sports content is more than just game footage; it’s a valuable asset encompassing highlights, interviews, outtakes, documentaries, historical footage, and more. The emotional connection fans have with their favorite teams and athletes is what makes sports content so valuable, revealing a wealth of opportunities for organizations to build and enhance fan connections through their content. While these sentiments remain, the landscape of sports consumption is changing.

Fan expectations are shifting

Sports’ digital evolution and fragmentation are reshaping where and when audiences engage with teams and franchises, pushing sports organizations to rethink how to meet this demand and deliver a better fan experience, whether that’s converting casual spectators into fans or enhancing engagement with loyal enthusiasts.

Content is more powerful than ever

This makes their content more important than it’s ever been. Engaging audiences long after the game ends by repurposing and reimagining existing archival content or distributing video footage from recent events across new channels can not only expand reach to new audiences but also unlock new opportunities for revenue streams.

Intelligent technology helps harness media

This is where AI platforms and intelligent technology offer tremendous benefits to sports organizations, allowing both content managers and rights holders to automate workflows, resurface iconic moments, and repurpose existing content in new, efficient ways that create opportunities for active and passive revenue streams.

Traditionally, managing and licensing sports content was labor-intensive, often resulting in missed revenue opportunities due to inefficiencies and a lack of organization. AI changes that, enabling rights holders and content managers to effortlessly enhance the value of their assets, reach new audiences, and capitalize on new opportunities.

Transforming former cost centers into profit centers

With the efficiency of AI, sports organizations and entities are transitioning their content libraries from cost centers to profit centers by monetizing their sports content in two major ways:

  • Growing audience reach through licensed content distribution
  • Creating marketplaces for sharing and selling their existing content

Two examples of this are the San Francisco Giants and the Los Angeles Chargers.

In 2020, the San Francisco Giants used Veritone’s AI-powered platform to process and tag recently digitized legacy footage that had been collecting dust on a shelf in its physical archive. Done manually, that task would have taken a team of 15 interns about a year. Similarly, the LA Chargers compressed 371 days of metadata-tagging work into hours with the same platform.

The results of these organizations’ efforts enabled them to equip their internal teams with the power to effortlessly resurface, repackage, and reimagine archival footage in new ways that reinforced connections with sports loyalists and extended reach to a new generation of fans.

The adaptability of sports brands’ solution architecture will be a determining factor in their future success. By enhancing accessibility and streamlining the content production cycle, organizations like the Giants and the Chargers can establish more effective year-round engagement—preserving their content for future generations of fans and acting as a crucial foundation for unlocking fresh revenue opportunities.

How AI can help rights holders get the most out of existing content

Archived content offers rights holders a wealth of potential, and that potential is enhanced when backed by AI tools, platforms, and services. This could look like:

Licensing content to buyers

Licensing presents an easy way to repackage content and convert the cost center of content production, management, and storage into a profit center, helping rights holders generate incremental revenue from their existing assets by licensing content to media buyers in broadcast, film, advertising, and more.

Building a digital marketplace

Personalized content marketplaces offer a simple and powerful way for organizations and rights holders to maintain brand integrity and provide a premium experience for partners, stakeholders, broadcasters, journalists, and more. With licensing, it’s essential for buyers to have confidence and trust in the source. AI-powered digital storefronts and content marketplaces not only provide a better user experience but also enhance the owner’s credibility in the licensing process.

Strengthening security and rights management

AI can track and claim content that is being used without consent. One significant area where revenue potential has often been untapped is social media. A lack of a formal monetization strategy on platforms like Facebook, Instagram, and YouTube has led to significant dollars being lost annually. AI can help capture and retain revenue from content shared on these platforms.

Closing thoughts

The journey towards maximizing revenue opportunities in sports content is ongoing, with AI continually evolving to meet the changing demands of the audience and the industry. To thrive in the evolving world of sports content rights, embracing AI technologies is not just a choice but a necessity, as it offers unprecedented potential for growth, innovation, and revenue generation.

Norigin Media – Sourcing: In or Out?

Ajey Anand, CEO, Norigin Media 

Sourcing – in or out? It is a long-running, cyclical debate within any business: whether it’s better to build or buy. Such discussions revolve around business investments, where any spending must be weighed against the value of IPR ownership.

Within media and streaming TV businesses, many have periodically switched between build and buy options, but it has never been strictly one or the other. The contributing factors are not only business mandates for ownership but also fluctuating economic climates. As seen after the pandemic, high inflation and recessions in different parts of the world, along with large shifts in staffing and skills management, have shaped spending decisions.

Technology evolves at great speed, so in-sourcing requires large investments in training and skill development. Out-sourcing results in higher costs for key staff and loss of control or access to intellectual property. It is a hard decision, but calculable in terms of value. Businesses decide based on longer-term mandates, including ownership, but also investment capability and confidence in business verticals during uncertain times.

In a world of acquisitions and consolidations, larger businesses prefer to build, while new entrants prefer to buy stable, existing solutions while they await ROI and growth. The media and streaming TV industries have seen some major acquisitions, with Disney and Discovery launching global VOD brands while also acquiring tech companies to build internally. New and smaller niche entrants attempt to make their mark across the globe by buying solutions that are ready and tested.

Netflix, on the other hand, has always developed in-house, with most of its brand identity coming from an ever-evolving product. This innovative, build-first approach has been ingrained over years of perfecting a singular service, where owning its IPR and brand are synonymous. The price tag for this is, of course, not provided, even on request – as with any luxury product that is difficult to imitate.

Looking at this from the other side, tech companies or teams evolve and improve by working with multiple customers on various projects where they gain a multitude of experiences. When acquired, they end up focusing on a single scope, and so there are limitations to the learning, delivery, and improvements that are possible.

My advice would be to cut your coat according to your cloth and set your TV App building or buying goals based on your limitations – of budget and capability to innovate. Grow your ambitions alongside successes that are quantifiably achieved. At Norigin Media, we do not say one is better than the other; building and buying are rather intrinsic and subjective strategies, which we support based on the needs of the broadcasters who we work with.

 

Newsbridge – Best practices for evaluating the carbon emissions of cloud-based media workflows

Philippe Petitpont, Co-founder and CEO at Newsbridge

 

In the era of increasing environmental consciousness, media companies are under growing pressure to address their impact on the world. Going digital may seem like a step in the right direction, but the digital sector relies on equipment and infrastructure networked across the entire planet, and the environmental impact of this infrastructure is proving increasingly worrisome. According to estimates, the digital sector represents 2% to 4% of global greenhouse gas emissions, and its energy consumption is growing by 9% each year. Behind these figures, it’s important to emphasize that video represents 82% of internet traffic, and the volume of data stored in data centers is experiencing hyper-growth of 40% per year.

 

This article will explore the challenges of measuring carbon emissions output, sharing strategies for media companies to ensure sustainable cloud-based media workflows.

Why measuring emissions is challenging

Numerous other industries have benefited from established standards or best practices for evaluating their greenhouse gas emissions over the years. Yet, the IT sector has been slow to adopt suitable benchmarks and methodologies, largely due to its inherent complexity. The extensive and intricate supply chains associated with ICT hardware, the prevalent use of shared resources demanding specific allocation techniques, and the multifaceted features of ICT services – which can vary widely even within a single company’s customer base – contribute to this challenge.

While there are existing references and methodologies for assessing the emissions of digital equipment and services, the data sources available for analyzing the life cycle of equipment remain quite limited. The challenges are further compounded when dealing with the “dematerialized” services of cloud-based activities, which are notably opaque and difficult to evaluate. Guidance underscores three key areas for measuring emissions in cloud services: data center emissions, network emissions, and emissions from end-user devices. This complexity makes obtaining a clear overall picture a formidable task.

These challenges are intensified by the fact that carbon emissions measurements are no longer just optional; they are now a requirement to enhance sales and investment potential. The question becomes: how can media companies measure emissions along the supply chain?

Comparing the different types of carbon emissions

Measuring carbon emissions is a multi-phase process. There are three “scopes” of carbon emissions, according to the Greenhouse Gas (GHG) Protocol: direct emissions by a company’s activities (Scope 1), indirect emissions generated by a company’s energy acquisition (Scope 2), and indirect emissions produced by the company’s supply chain (Scope 3).

The three “scopes” of carbon emissions, according to the Greenhouse Gas (GHG) Protocol

While Scope 3 emissions are the most challenging to track and not mandatory to report, reducing them is the only way to create real, lasting change. Scope 3 emissions take into account things like: purchased goods and services, business travel, employee commuting, waste disposal, use of sold products and services, transportation and distribution, investments, and leased assets and franchises.

Strategies for assessing the emissions of cloud-based services

A growing volume of media content is migrating to and being accessed in the cloud. Unfortunately, the use of the cloud often conceals a significant portion of emissions from direct view. Currently there are two methods available to manually calculate the emissions of such services:

  • A bottom-up approach that requires identifying specific equipment associated with the service, and the measurement of this equipment’s energy use. This can be used when total emissions of a data center are unknown.
  • A top-down approach that allocates the total data center emissions using an appropriate method; for ICT services, this involves prorating the usage of the shared component.
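The top-down approach can be sketched very simply: if a data center's total emissions are known, each service's share is prorated by a usage metric such as consumed VM-hours. All figures below are hypothetical placeholders, as are the service names.

```python
# Top-down allocation sketch: prorate a data center's known total emissions
# across services by their share of consumed VM-hours.
# All figures and service names are hypothetical.

DATACENTER_TOTAL_KGCO2E = 120_000  # known total annual emissions of the facility

# Annual VM-hours consumed by each service (from the provider's billing/usage data)
vm_hours = {
    "media_transcoding": 50_000,
    "archive_storage": 30_000,
    "playout": 20_000,
}

total_hours = sum(vm_hours.values())
allocated = {svc: DATACENTER_TOTAL_KGCO2E * h / total_hours
             for svc, h in vm_hours.items()}

for svc, kg in allocated.items():
    print(f"{svc}: {kg:,.0f} kgCO2e")
```

Here the transcoding service, with half of the VM-hours, is allocated half of the facility's emissions (60,000 kgCO2e); the allocations always sum back to the data center total.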

To estimate emissions from using cloud services, it is imperative to gain a thorough understanding of how they operate. User services, often involving computing or storage, execute operations on virtual machines (VMs). These VMs run on physical servers – IT devices that host multiple VMs – within a data center.

Collecting data from the cloud provider is also vital. The services and usage of VMs constitute known data, as these are the components every customer procures from their cloud provider. Usage data, available on billing consoles, provides essential information for reporting.

Furthermore, identifying the characteristics of the hardware supporting VMs, particularly energy consumption based on load, is essential. This information can help media companies accurately calculate emissions. After identifying hardware characteristics, complexities can arise when determining the load rate affecting energy consumption. Allocating the load to specific VMs rather than others and understanding the replication rate for storage are additional challenges.

Once machine energy consumption is estimated, factoring in the Power Usage Effectiveness (PUE) is necessary. PUE, provided by cloud providers, is a ratio depicting the efficiency of a data center in utilizing energy – specifically how much is used by computing equipment. Upon knowing total energy consumption, converting it into GHG emissions using the location-based method is the next step. Applying a conversion factor corresponding to the local energy mix for each data center, based on geographical location (data provided by electricity network operators), provides insights into operational emissions.
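Putting these steps together, the location-based operational calculation reduces to: VM energy, scaled up by PUE to account for facility overhead, multiplied by the carbon intensity of the local grid. The sketch below is illustrative only; the energy figure, PUE, and grid factor are placeholder values, not real provider data.

```python
# Location-based operational emissions for a cloud workload (illustrative values).
# emissions = VM energy * PUE * local grid carbon-intensity factor

def operational_emissions_kgco2e(vm_energy_kwh, pue, grid_factor_kgco2e_per_kwh):
    """Scale IT energy by the data center's overhead (PUE), then convert to
    CO2e using the carbon intensity of the local energy mix."""
    facility_energy_kwh = vm_energy_kwh * pue
    return facility_energy_kwh * grid_factor_kgco2e_per_kwh

# Hypothetical monthly figures for one VM:
vm_energy = 150.0     # kWh estimated for the VM's share of its host server
pue = 1.2             # efficiency ratio reported by the cloud provider
grid_factor = 0.05    # kgCO2e per kWh for a low-carbon local grid

print(operational_emissions_kgco2e(vm_energy, pue, grid_factor))
```

With these placeholder inputs the VM's 150 kWh becomes 180 kWh of facility energy, or 9 kgCO2e for the month; in practice the grid factor varies widely by region, which is why the location-based method applies a per-data-center conversion factor.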

Beyond energy consumption, accounting for emissions linked to the manufacture of equipment is crucial. This entails obtaining life cycle analysis data for the equipment, a task frequently marred by challenges. It is important to note that this process is not a perfect science, and any assessment that media companies obtain is merely a rough estimate of emissions. Future access to missing data will improve the accuracy of assessments.

Leading change for a better future

Estimating emissions related to cloud services is a multifaceted approach that encompasses understanding the system, collecting data, identifying hardware characteristics, estimating energy consumption, and accounting for both operational and embodied emissions. While there are challenges to measuring cloud-based workflow emissions, the media industry’s ongoing efforts will drive more accurate assessments in the future.

Partnering with technology providers that are focused on sustainability is key to a green future. As a leading AI company, Newsbridge is committed to reducing the media industry’s carbon emissions. Visit the Newsbridge website to find out more about Newsbridge’s sustainability journey and how the company is measuring its own carbon footprint.

Net Insight – Boosting monetization with media-centric video delivery networks

Jonathan Smith, Solution Area Expert at Net Insight

With the global economic headwinds pressuring all industries, media companies are strategizing about expanding their content’s reach, tapping new audiences, and driving more revenue streams.

Delivering super high-quality live video content swiftly, reliably, and on a large scale is non-negotiable. As media companies pivot to reach audiences across markets, they need the right network backbone to remain agile. However, many media organizations still rely on generic transport workflows for their premium content, missing out on the advantages of new, software-defined transport networks explicitly tailored for media.

Innovation in software-defined transport networks that are media-centric by nature makes these networks ready to meet the stringent quality, synchronization, and reliability requirements of the media industry. When it comes to valuable live content, media companies can’t settle for anything less.

The foundations of media-first video delivery networks

In the media industry, we often hear that the Internet wasn’t built for the primary delivery of live media content. Yet at the end of our industry’s value chain, we have seen how powerful consumer IP distribution has become, making perfect use of ubiquitous global connectivity to reach audiences and devices that would otherwise be out of reach or require large capital investments in satellite or terrestrial infrastructure.

It doesn’t take a lot for a cloud provider to move video signals and packets from the point of origin to partner A, to platform B, and beyond over an IP network. However, this is simply not good enough: generic IP networks, by design, lack the fundamental behaviors needed for live video transport. This is of special concern when transporting high-value media content, where you only get one shot to deliver to that primary distribution point. Regardless of how good the content is, consumers won’t keep their eyes glued to it if the delivery quality and reliability aren’t exceptional. Media companies risk missing a trick in the monetization game if they don’t ensure their video feeds can be scaled across geographies and platforms in the right way.

While ARQ protocols solve the basic technical challenges of recovering packets over lossy transport, the next-generation software-defined networks that are built specifically for media transport combine the benefits of hardware-defined networks and the cloud by leveraging media-centric foundations:

Observability

A software-defined network delivers monitoring metrics that provide insight into video signal delivery every step of the way. End-to-end provisioning and monitoring brings cohesive visibility that enables broadcasters and media service providers to control their media delivery and ensure it is efficient, high-quality, and seamless.

Protection

When it comes to protecting video signals, a generic IP delivery network can be vulnerable. Live video delivery requires 100% uptime and 24/7 robust and redundant services that are simply not available by default over generic IP networks.

A media-centric approach leverages the benefits of both ‘traditional’ broadcasting and modern cloud engineering to enhance the video feed protection and overcome reliability challenges.

Flexibility is key to managing the cost of IP network protection. With the right media-centric network, media companies can define the level of protection of different types of content on an input/output basis without having to make huge investments upfront.

Synchronization

Synchronization across contributed and distributed video signals is a critical capability, ensuring that all destinations, regardless of their region, receive feeds at the same time. Synchronization is also crucial for the betting industry, as any millisecond of delay can have a big impact on the real-time betting experience and overall fan engagement, and can cause broadcasters financial and reputational damage.

Driving monetization with a media-first approach

Smart broadcasters harness the power of media-centric delivery networks to distribute great quality video to new destinations flexibly, efficiently, and seamlessly. More importantly, they deliver video to destinations that make business sense when it makes sense.

A traditional approach to feed distribution would see media organizations investing upfront to get their tech infrastructure ready to scale to new video takers, without being able to scale down if needed. This means media companies must make such investments regardless of whether they only want to take on new destinations for a specific live event, or whether distribution to a specific market or taker later proves neither valuable nor sustainable. The other traditional trap was to underinvest in distribution, prioritizing cost-effective technology solutions to balance perceived risk versus reward, often at the expense of quality and reliability; when eyes are switched away, you have failed before you have even started.

A media-centric delivery network can provide the agility and scalability that media organizations need, when they need it and for as long as they need it. This means they are in control of their investment, they benefit from cost transparency, and they can experiment more with new destinations and live events. In other words, organizations can enjoy the flexibility of jumping on new market opportunities while delivering quality, but equally, the ability to ‘fail fast’ and quickly remove themselves from markets that don’t bring business value. Monetization opportunity cost has never been this simple or straightforward!

The right media network for media

IP and the cloud are revolutionizing the media industry but to truly harness their potential, media organizations need an extra layer — that of media-centric delivery networks.

IP transport networks that have been created specifically – or overlaid – to meet the requirements of media delivery provide the quality, scalability, and agility industry players need to further monetize their video content, test different business models, and tap into new revenue streams.

At a time when content monetization and proven ROI are pressing concerns for media companies, a media-first delivery network is an invaluable revenue enabler, helping them grow and drive efficiencies that make a difference.