Etere – How to generate automatic metadata for newsrooms

Metadata is essential to finding the video you need, especially in the newsroom. Etere empowers broadcasters with features including automatic metadata tagging, face recognition, speech-to-text and OCR on video images.

Face recognition and OCR make it possible to catalogue any video.

By leveraging high-performance AI capabilities, Etere empowers users to automate tasks, gain insights, and improve operational efficiency and media quality.

AI-driven facial recognition automates an otherwise lengthy process, allowing broadcasters to:

• Identify sports players in a match

• Retrieve highlights quickly

• Transcribe game commentary based on players’ performance

Using image recognition, Etere analyses individual scenes in the media library and produces a list of matches in seconds. The advanced facial recognition technology works even on media files with low lighting and awkward camera angles.

For example, users can launch a search based on an artist’s profile image stored in personal data. The technology gives users greater control over an extensive media library. Furthermore, it makes it easier for broadcasters to search for and repurpose media files to create better content for their audiences.
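
To make the idea concrete, here is a minimal sketch of a profile-image search using the open-source face_recognition library; Etere’s engine is proprietary, so the library choice, file names and matching threshold here are illustrative assumptions only:

```python
# Sketch: check whether a library frame contains the face from a profile image.
import face_recognition

# Encode the reference face from the artist's profile image.
profile = face_recognition.load_image_file("artist_profile.jpg")
profile_encoding = face_recognition.face_encodings(profile)[0]

# Compare it against every face detected in a frame from a media asset.
frame = face_recognition.load_image_file("asset_frame_0042.jpg")
for encoding in face_recognition.face_encodings(frame):
    distance = face_recognition.face_distance([profile_encoding], encoding)[0]
    if distance < 0.6:  # the library's commonly used match threshold
        print(f"Match found (distance={distance:.2f})")
```

In a production MAM, the per-frame encodings would typically be computed once at ingest and stored as searchable metadata, so a query compares vectors instead of re-analyzing the video.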

Flexible Metadata

• Enhance media files by inserting custom metadata for different asset types, including comments, descriptions, images, loudness, file info, graphic style and asset status. In addition, you can create profiles for different groups of assets to manage multiple files in different locations effectively.

Web Application for the Fast Insertion of Metadata in MAM:

Etere Web allows users to perform tasks easily from any mobile device, including smartphones and tablets running iOS, Android or Windows Mobile. Decisions can be made more quickly and with fewer errors because data is visible across the entire organization in a single database, allowing multiple users to work on the same object. The web client has been redesigned to display perfectly on a tablet: the agenda calendar, assigned tasks, media previews, content approvals and any additional order tasks. Web users can now easily manage sales campaigns, approve delivered content, track ingest/editing rooms, monitor personnel shifts and access maintenance schedules.

  • Etere launched a new HTML5 web application to preview, access, edit, arrange, and insert rich metadata. It allows Etere users to add rich metadata to media assets from any web browser, opening up a new level of content management to uncover new revenue opportunities and boost productivity. Etere MAM is an end-to-end software that orchestrates workflows and fully optimizes the value of your assets through centralized management of media content and associated metadata.
  • Throughout the multimedia production and distribution chain, rich metadata enriches content and allows broadcasters to increase the value of their media assets. With media libraries full of assets carrying descriptive and personalized data, assets can be easily retrieved, repurposed, and interconnected through a set of relevant relationships with one another.
  • Assets can carry descriptive and technical data, including asset titles, artists, asset types, duration, file formats, and more; a simple sketch of such a record follows this list. The possibilities are limitless with Etere; you can even personalize your metadata to fit your workflow. Furthermore, you can preview your media assets from the same interface for a seamless operation. The user-friendly interface can be accessed from any web browser, enhancing collaboration and remote working from any location.
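
As a rough illustration (the field names below are hypothetical, not Etere’s actual schema), the kind of descriptive and technical record a MAM might hold for one asset could look like this:

```python
# Hypothetical asset metadata record; field names are illustrative only,
# not Etere's actual data model.
asset_metadata = {
    "title": "Evening News Opener",
    "artist": "J. Smith",
    "asset_type": "news package",
    "duration_s": 94.2,            # technical data
    "file_format": "MXF OP1a",
    "loudness_lufs": -23.0,        # e.g. an EBU R128 loudness target
    "status": "approved",
    "comments": ["lower-third verified", "ready for playout"],
}
```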

Optical Character Recognition (OCR) gives you more accurate data

Optical Character Recognition (OCR) technology recognizes text within a digital image. It is commonly used to identify text in scanned documents and photos. With OCR technology integration, users of Etere can convert a physical paper document or an image into an accessible electronic version with text.
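
As a minimal sketch of the underlying technique (using the open-source Tesseract engine via pytesseract rather than Etere’s own integration, and an assumed file name):

```python
# Sketch: extract searchable text from a scanned page or a video still.
from PIL import Image
import pytesseract

text = pytesseract.image_to_string(Image.open("scanned_document.png"))
print(text)  # the recognized text, now searchable and editable
```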

 Advantages

  • Increased Efficiency and Productivity
  • Improved Data Accuracy
  • Digitalization of Data
  • Better Cost Efficiency
  • Easier Text Searches, Editing and Storage

Boost your ROI with Speech-to-Text

Etere’s Subtitles Tool for Closed Captions supports Google Cloud Speech-to-Text, which covers more than 120 languages and variants; each language is specified through the recognition request’s language_code parameter, a BCP-47 tag that identifies both the language and its region or country. With the upgrade, operators can instantly convert audio to closed-caption text to support a global audience base. Etere can support voice-to-text commands, real-time streaming, and the processing of voice commands and audio transcription from call centres. In addition, Etere can also process pre-recorded audio using Google’s machine-learning technology.
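
For a sense of how the underlying API is driven, here is a minimal sketch of transcribing a pre-recorded file with the Google Cloud Speech-to-Text Python client; the storage URI and audio settings are illustrative assumptions, and Etere wraps this capability inside its Subtitles Tool rather than exposing raw API calls:

```python
# Sketch: transcribe pre-recorded audio with Google Cloud Speech-to-Text.
from google.cloud import speech

client = speech.SpeechClient()

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",  # BCP-47 tag: language ("en") plus region ("US")
    enable_automatic_punctuation=True,
)
audio = speech.RecognitionAudio(uri="gs://example-bucket/news_package.wav")

response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)
```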

 Key Features

  • Supports Google Speech-to-Text
  • Over 120 languages and variants
  • Instant streams of text results
  • Returns recognition of text from audio in a file
  • Analysis of short and long-form audio
  • Option to filter inappropriate content
  • Digital intelligence with automatic punctuation and word hints

The system uses on-premise resources for enhanced privacy of valuable data

Etere offers both a permanent license and Software as a Service (SaaS), and supports on-premise, cloud and hybrid models, so you can select the setup that works best for your business. For newsrooms, Etere offers professional software with an on-premise, permanent license; unlike a SaaS model, users own the software after purchase.

Etere Nunzio simplifies your newsroom:

Etere Nunzio Newsroom is newsroom management software that gives media enterprises the tools to develop a story from the initial idea to the final broadcast, all from a single interface. With Etere’s mastery of integrated workflows, you can achieve better performance. Nunzio Newsroom is a highly flexible and cost-effective solution with full Newsroom Computer System (NRCS) capabilities to manage the entire tapeless workflow of a newsroom environment, from the planning of news stories (virtual assets) to the control of on-air playback.

Etere Nunzio Newsroom empowers users to work faster and better. With its full NRCS features, users can easily manage the end-to-end workflow of any newsroom environment, from planning news stories to managing newsroom automation on-air playback. Furthermore, it can concurrently handle complex stories with multiple media elements, including video files, live streams, graphics, and secondary media files.

 Direct integration with NLE

When a journalist inserts a placeholder for a video, a workflow begins. The workflow creates a new asset and then triggers either a request to the NLE to start the approval process or a tapeless-reception request for someone to download the file. Etere’s NLE integration license enables stations to integrate seamlessly with Adobe Premiere without third-party data-transfer software.
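
In outline, the branching works roughly as below; every name in this sketch is invented for illustration and is not Etere’s actual workflow API:

```python
# Hypothetical sketch of the placeholder-driven workflow described above.
def create_asset(title):
    print(f"Asset created: {title}")
    return {"title": title, "status": "placeholder"}

def request_nle_approval(asset):
    print(f"NLE (e.g. Adobe Premiere) asked to start approval for: {asset['title']}")

def request_tapeless_reception(asset):
    print(f"Tapeless reception requested for: {asset['title']}")

def on_placeholder_inserted(title, needs_edit):
    """A journalist inserts a video placeholder and the workflow begins."""
    asset = create_asset(title)
    if needs_edit:
        request_nle_approval(asset)
    else:
        request_tapeless_reception(asset)

on_placeholder_inserted("Flood latest – OTS", needs_edit=True)
```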

  • MOS-compliant
  • Supports NewTek NDI
  • Insert and edit lower thirds, graphics, and news tickers without an external graphics inserter
  • Supports E-paper and website uploads
  • Flexible and scalable production workflow
  • Fully configurable data model
  • Customizable user rights, agenda, and communication tools, ready to support large teams working remotely
  • News tickers to create custom carousels
  • Supports Dante audio for superior audio quality
  • Real-time updates support a collaborative workflow
  • Supports teleprompters
  • Add and update playlists from social media, including Facebook, LinkedIn, Twitter, and YouTube, directly from the Nunzio interface

EditShare – Will AI really transform the media industry?

Stephen Tallamy, EditShare

Unless you are a hermit, you cannot fail to have noticed all the talk about AI at the moment. It is everywhere.

If you believe the hype, then we are all doomed. The machines are ready to take over, and there will be no need for any human to do any work ever again. We are all rather more cynical than that, and we know deep down that we can probably hang on to our jobs at least for a while.

For a long time now, we have known one fundamental thing about computers. They are good at dull, repetitive tasks, while people are good at creative tasks. And, despite the reports in the popular press, AI largely conforms to that rule.

Ask an AI system to create a new popular drama series and it will look at all the popular drama series of the past and conclude that, to succeed, a drama must include a police officer with one significant character flaw, who has a sidekick to whom everything must be explained. It might make a valiant attempt at a pilot script for such a drama, but it will not be original and it probably will not be a hit.

But I am certainly not saying there is no place for AI in our industry. In fact, I can already see a lot of places where it will be of value, where it will take the load off people, providing the support we have never had before to allow us to concentrate on creativity.

Imagine you are making a programme. It might be a documentary, or maybe a nature programme. You have a mix of new shoots and archive footage. You want the final result to look as seamless as possible.

The issues are obvious. Your new shoot will use your camera of choice, and you will apply the best LUT to get the color balance you want to tell the story. The archive footage, though, will be in a mix of frame rates – 24, 25, 29.97 – and probably wildly varying color balance.

Getting the smoothest results from frame rate conversion takes some time. Matching colors is potentially a long – and not very creative – job for a skilled colorist. So here are two ideal applications for intelligent systems: not just making all the footage run at the right speed with legal colors, but analyzing the content and matching things up as near perfectly as possible.

That seems to me to be the right balance. If you are an editor, you should be focusing on telling the story in the best way possible, not worrying about color science. Once the story is in place, you can do a final pass to smooth off anything that is not perfect and hand it to a colorist to bring creative color choices to set the tone and atmosphere to enhance the experience.

Another application is the one that comes up most often when people talk about AI in media workflows: logging and tagging. Making a good speech-to-text transcript is very valuable, especially in the newsroom where the archive now has word-for-word metadata of every shot in the library.

Analyzing images is also a good way to add to metadata, and actually the way a machine does it is very different to the way that the journalist or archivist would tag content. Despite the name, AI does not actually have intelligence: it looks at a piece of content and tries to recognise everything on screen. So it will note everything that happens, not just what a human operator considers “important” or “relevant”.

We all know about the psychology experiment where you are shown a video clip of people passing a basketball around. Asked afterwards, people might recall the number of passes, but supposedly no one notices the gorilla running through the scene. The AI logger would.

Taking a step back, it’s possible to see that AI would be useful in planning and organizing a project. It would get to learn how the facility and the editor work, and try to predict what is happening in the workflow. It might well spot what it considers bad practice, but it should learn that this is the way things are done around here and provide the right resources at the right time.

Again, this may not have much impact on a one-off, high-end drama. But if you are working on a bulk order of, say, a daily game show, it would speed things up to have an AI assistant lining up all the strings and transitions you need at the moment you need them. It could give you the time to spot the interesting reactions that lift the individual performance out of the ordinary.

What is my take-home on AI? Yes, it is powerful and could make life easier, smoother and more productive. Embrace AI to take away the laborious tasks, unlock more creativity, and satisfy the ever-increasing demand to create more content more quickly. But creativity is an act of humanity, and audiences quickly tire of anything artificial. So embrace human collaboration, just aid it with AI.

Densitron – Outsourcing can help companies refocus and build business agility

Pete Semerak, SVP Product, Densitron

In an uncertain world, it pays for broadcast vendors to refocus their activities on what really makes them distinctive. One way they can do that is to outsource some of their design, manufacturing and integration needs to a dedicated ODM provider, says Densitron’s SVP Product, Pete Semerak.

From tough times come new opportunities, and that has certainly been true of the last few years. While no one would dispute the scale of recent challenges for manufacturing, they have also provided a chance for companies to review the way they work and have some frank internal conversations about their strengths and weaknesses. As we shall see, this tends to support the case for specialist original design manufacturer offerings – such as our own recently launched ODM+ service – that allow companies to refocus on their core activities and invest more time in planning for the future.

In many ways, broadcast could be said to represent a microcosm of the pressures that have affected manufacturing as a whole these past few years. As well as being subject to the shutdowns that were universal across all sectors, it faced a prolonged supply chain crisis focused on electronic components. We’re pretty much back to normal now, but long lead times took their toll, not least in the capacity that companies had to allocate to finding alternative components. Inevitably, the ability – or not – to fulfil orders also had an impact on the generation of new business, not to mention new ideas.

All of which raised some burning questions about business agility – something that many companies had already been concerned about in the midst of a period of huge technological change. IP, the cloud and, increasingly, AI have massive implications for the future development and delivery of broadcast solutions. Factor in growing worries about a skills shortage in various areas of production and engineering, and you have all the necessary elements for a lot of late nights and anxious Zoom calls.

It’s against this complex background of factors that we, as a company, have been thinking about ODM services and how they can help broadcast vendors to refocus on their core activities and become more agile. For those not familiar with the term, an ODM is an original design manufacturer – a company that designs and manufactures a product that is then rebranded by another firm for sales and marketing.

For some time we have been convinced of the scope for an enhanced ODM service: one that includes the traditional ODM responsibilities of managing the complete product lifecycle for customers, but also provides access to Densitron’s own extensive IP, which includes groundbreaking control and display technology patents that can be incorporated into customized hardware solutions. To say the least, this is not common in existing broadcast ODM services, which overwhelmingly focus on more straightforward outsourcing and product management functions. That is why we have decided to call our service ODM+ – it really does add something extra!

Above all, ODM+ recognizes that the industry now requires enhanced design and build services that allow customers to be more targeted and responsive to a rapidly changing market. It also reflects the fact that, thanks to our effective internal structure and extensive long-term planning, Densitron itself has escaped many of the worst effects of the past few years. Indeed, the manner in which we have navigated the challenges confirms that our business itself is highly agile.

Robust approach to hardware R&D

There are a couple of key reasons for this positive result. First and foremost, we have been able to maintain good levels of strategic stock, with existing long lead times on that stock mitigating the challenges that were occurring elsewhere. Secondly, it became apparent that we were in a good position because, in the broadcast market generally, more companies were switching to delivering software solutions. We therefore saw a pinch-point on the development side in those organizations where people were looking more to develop software. It wasn’t that they were neglecting hardware as such, but it was often not receiving the same attention as software.

So while we were able to keep our own display and control hardware production moving to schedule, we also realized that there was growing scope in the market to innovate in the hardware realm. It was at that point that the whole notion of offering a dedicated and comprehensive third-party design and build service began to take shape.

Benefiting from our own robust supply chain and production processes, Densitron ODM+ was always conceived as a service that would allow our customers to refocus their operations in a way that builds resilience and agility. At one level, it involves Densitron taking over the complete hardware product lifecycle from customers – from design to manufacturing to supply. This includes access to our well-established supply chain management processes so that all of the main concerns associated with sourcing components – including contingency solutions – can become the domain of Densitron.

But it also enables customers to take advantage of our R&D capabilities, which are founded on years of ambitious original work in developing control systems, control surfaces, displays and computing solutions. This is especially beneficial given the extent to which many companies have had to focus their R&D activities on finding solutions to supply chain problems related to existing solutions. With the freeing up of resources that is made possible by ODM+, companies have a much better chance of doing more focused R&D that really plays to their greatest strengths.

It’s also good news for an industry whose long-term prosperity is completely dependent on innovation and original solutions development.

Future production fears

Hence it appears to us that, in this period especially, opting for a comprehensive ODM service holds significant advantages. One of the reasons for this recommendation is the growing list of factors that could potentially disrupt future production. For example, few would disagree that geopolitics is now at its most uncertain since the 1990s, with any number of world events threatening to disrupt supply chains and cause further hindrance to production of semiconductors and other vital components.

Then there is the darkening macroeconomic picture. Many countries could tip into recession and, already in the red due to the pandemic, governments may only be able to offer limited assistance. Sadly, at the highly value-attuned components level, it’s likely that some businesses will hit the wall altogether.

So it makes for a real headache for the industry, and one that isn’t going to go away anytime soon. In this context, it surely makes sense for individual companies to remove those areas of concern that they can confidently outsource to a third party with a long track record of targeted hardware production and delivery.

It’s in that spirit that I would encourage any companies with hardware production concerns to get in touch and see if ODM+ can help you. We are already working with some leading names, and anticipate that this list will grow in the near future. It’s the start of what we hope will be something big and even transformative for our part of the industry.

For more information on ODM+ and other Densitron services, please visit https://www.densitron.com/design-and-build

Consult Red – Aggregate more

Rahul Mehra, CTO, Consult Red

Communication service providers can unlock competitive advantage and deliver additional customer value through greenfield service aggregation, underpinned by the operator Intelligent Edge.

Media and telco operators have had a good run with triple and quad-play – combining fixed broadband and voice with pay TV and mobile – but, with these services now table stakes, the ability to differentiate their offer from the competition is constrained.

Operators understand that they will need to offer more value on top of their existing services to keep customers on board, especially at a time when home budgets are being scrutinized and stretched.

We believe there’s a once-in-a-generation opportunity for telco and pay TV providers to diversify their retail offer, by moving beyond quad-play to ‘omni-play’ and beyond super-aggregation of video content, to the aggregation of our increasingly smart lives.

By ‘aggregating more’ operators will be able to increase customer loyalty and raise ARPU, from services as diverse as multiplayer live gaming and home security, to smart domestic energy management.

Leverage existing technology

The technology exists for providers to deliver a single gateway to services, a unified UX with single billing and customer care, and in so doing they can move from discounted bundles to ‘mega-bundles’.

This is the golden opportunity to take control of the emerging Intelligent Edge – the future brain of the connected home.

The Intelligent Edge, hosted on operator CPE such as video set-tops, connected TVs and broadband gateways, can be harnessed to improve existing TV/streaming services and launch new entertainment offers, including gaming, which is more likely to appeal to younger demographics.

The same technology can be used to take a leadership position in Smart Home services.  The Intelligent Edge gives operators a flexible, dynamic and elegant means to introduce new services without the overhead that is often associated with software deployment to devices in the field.

Aggregate more is already in play

The innovative bundling of new services with smart home tech like video doorbells, indoor cameras, motion/contact sensors and environmental sensors, exemplifies the new possibilities when we look beyond entertainment and connectivity and ‘aggregate more’. Sky for instance recently launched an innovative smart home protection service, Sky Protect, offering customers comprehensive home insurance and smart home monitoring devices, bundled in one app.

The recent market introduction of connected TV platforms such as Sky Glass and Comcast Xumo are additional examples of aggregated consumer offers. Sky Glass bundles premium hardware, that includes a 4K display, 5.1 channel sound and camera, with financing that is attractive to both consumer and operator. This is before adding a compelling bundle of apps and OTT services, delivered via a sophisticated, unified user experience, featuring universal search and far-field voice control.

Leading operators are clearly looking for the most efficient, cost-effective and attractive way of providing as wide a range of services and applications to their consumer base as they can.  Through existing managed CPE devices, telcos and pay TV operators already have an edge in the emerging ‘Aggregate More’ market. But perhaps not for long with Big Tech snapping at their heels.

Containerization is a key technological enabler

The power of the operator Intelligent Edge is underpinned by Downloadable Application Containers (DACs) – via an application platform with a common backend and authoring that spans the cloud and CPE.

Containerization also supports agile product development by a community of third-party application providers, whose applications operators can then roll out under their own service to deliver additional value.

The same approach not only enables providers to stand up and nimbly deploy these new services but breathes new life into legacy devices as well. Since operators have invested significantly in their existing infrastructure and devices, they want to make sure they maintain their longevity. With DACs, they can. Containerization brings cloud-like power to the home and full capability for flexible, cost-effective deployment.
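
As a loose illustration of that deployment model – the manifest fields, device records and eligibility rule below are invented for this sketch and are not a real DAC or RDK interface – rolling a containerized app out across a mixed fleet might look like:

```python
# Hypothetical sketch: deploy a containerized app only to CPE that can run it,
# letting legacy devices stay in service as long as they meet the floor.
dac_manifest = {
    "app_id": "com.example.homesecurity",
    "image": "registry.example.com/homesecurity:1.4.2",
    "min_firmware": (3, 2),   # version tuple for safe comparison
    "mem_mb": 128,
}

fleet = [
    {"id": "stb-001", "firmware": (3, 4), "free_mem_mb": 256},
    {"id": "stb-002", "firmware": (3, 1), "free_mem_mb": 512},
]

def eligible(device):
    return (device["firmware"] >= dac_manifest["min_firmware"]
            and device["free_mem_mb"] >= dac_manifest["mem_mb"])

rollout = [d["id"] for d in fleet if eligible(d)]
print("Deploying", dac_manifest["app_id"], "to", rollout)  # -> ['stb-001']
```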

The opportunity is not all about new customer-facing services. There are underlying operational benefits to be tapped as well. Operators can use the Intelligent Edge to maintain QoS and QoE across all services and applications, ensuring operational effectiveness. They can deploy applications to monitor the performance of their network and key services, and proactively identify issues before customers notice and call the call center. The net result is higher customer satisfaction, fewer truck rolls, and lower operating costs.

Clear-Com’s flexible technologies provide invaluable business agility

Bob Boster, President, Clear-Com

If there is one truth in this business, it’s that nothing ever stays the same when it comes to clients’ business needs. Ever-changing requirements and a wide variety of environments and workflows mean communications – and the tools needed to deploy them – look different in every situation, even when done by the same team.

Clear-Com® is broadly known for providing real-time, multichannel full-duplex voice communication for close collaboration, which is particularly needed in studio and outside broadcasting environments, corporate broadcast, and media and live event production.  In recent years, following an overall move within the industry to leverage the ubiquity and flexibility of standard IP networks, Clear-Com’s systems have facilitated a previously unimagined level of ‘business agility’ to our customers in terms of updated workflows and ease of re-configuration.

Traditionally, these systems functioned in a single-group operational mode, even in applications with multiple systems communicating together. In these traditional scenarios, systems were designed for one task and the intercom was organized around that. As the need for remote production became more prevalent, technology advanced to allow systems to link to outside sites, such as outside broadcast trucks or news vans. Initially, these were linked by one or two phone lines, generally managed from the Master Control Room on behalf of a specific studio or program. In the mid-2000s, Clear-Com developed a ground-breaking technology to provide these connections over IP. It became increasingly popular and in demand as this kind of connectivity grew more prevalent in broadcast operations, and as traditional phone lines – not to mention higher-bandwidth connections like ISDN – became scarcer.

That IP connectivity capability sparked the evolution of our current market-leading family of tools, which allows for both traditional wired and wireless full-duplex systems to extend over standard IP networks, including internet-based connections, to facilitate a wide spectrum of communication needs.  Clear-Com has continued to extend the workflows of traditional production intercom in a variety of ways, all of which have allowed our clients to respond to changing requirements with extraordinary flexibility.

These options include devices like the LQ® Series of IP Interfaces that can extend communications from existing hardware-based solutions, including those of other intercom manufacturers. Many of our current hardware options can be implemented over lower-bandwidth connections as well as locally over uncompressed industry-standard IP connections like AES67, fiber, and dedicated copper. We also have broad support for SIP, which provides a tool kit for integration with local, campus-wide, and global communication tools such as IP-based phone systems and two-way radios. Finally, our award-winning IP mobile and desktop clients Agent-IC® and Station-IC™ allow team members to run dedicated and tightly integrated intercoms on their personal devices like mobile phones, tablets, or laptops.

This range of options provides industry-leading flexibility, which was particularly valuable during COVID-19 when workflows suddenly changed dramatically.  Our versatile tool kit allows people to work from where they are, whether from home offices tying into TV studio operations for the NFL draft, or on film sets where team members needed to conform to new protocols but keep the wheels of production turning. Clear-Com tools have been critical to re-establishing broadcast operations in a matter of minutes following disruption by hurricanes and enabling entire new sports production models for localization (i.e., different language presenter teams) around the world for major events or even streaming-based smaller league coverage models. They have empowered massive teams of floor managers and assistant producers, even extending coverage across vast distances, for major city events like the New Year’s Eve celebration in Times Square, Jubilees in London, and the Carnival parades in Brazil.

Now, we are bringing the flexibility that has driven so much of the innovation we’ve brought to our traditional intercom customers over the last decade to our award-winning Arcadia® Central Station. Up to now, Arcadia’s focus has been on live event solutions that require a channel-based workflow across multiple platforms of user endpoints, including FreeSpeak II®, FreeSpeak Edge®, and HelixNet® Digital Partyline. The newest enhancements to Arcadia enable I.V. Direct™ connections – the same innovation found in our Eclipse® HX E-IPA cards and LQ® Series of IP Interfaces – straight into the unit, enabling the further integration of all these solutions across local, area, and even global connections as needed.

Ultimately, Clear-Com’s rich set of capabilities for empowering team communications in different configurations, combined with our vast experience helping partners and end users develop and deploy these unique solutions, has supported unprecedented business agility throughout our history. We anticipate even more innovation emerging in the coming years as this industry continues to evolve dramatically, and we can’t wait to share with you what’s coming next.

Chyron – Exploring cloud vs. hardware solutions: addressing customer needs and financial considerations

By Chyron VP of Marketing Carol Bettencourt; Senior Product Marketing Manager Hayes Stamper; Director of Product Marketing Dan Macdonald; and Chyron LIVE Operations Specialist and Rochester Sports Network Founder Daniel Higgins

In the world of modern content production, the choice between cloud-based solutions and traditional hardware solutions has become increasingly critical. This is a question that Chyron has tackled along with customers and prospective customers, gaining insight into common assumptions, requirements, and opportunities with regard to cloud-based solutions for live production.

Assessing fundamental considerations

Any business that’s looking at moving from a conventional hardware-based model to the cloud for live production benefits from examining the entire equation to understand what that shift can mean for the work they’re doing now and how it might shape their future opportunities.

Key aspects of this shift include the move from capital expenditure (CapEx) to operational expenditure (OpEx) and the related business implications. An hourly subscription model for a cloud-based live production platform eliminates the historical barrier to entry for high-end production value in live production and broadcast – a barrier that can run to hundreds of thousands of dollars for hardware deployments. Enhanced production value can in turn bring an increase in revenue.

With access to multi-camera switching, real CG-grade graphics that fluidly animate, and telestration, it’s possible to build rich, visually compelling scenes with graphics that are prime real estate for advertisers. By telling a story and analyzing plays the way sports fans want and expect, a broadcaster can make a case to sponsors to pay for ad space in goal graphics, a replay transition, or a sponsored player of the game graphic, for example. A high level of visual quality can also convince viewers to pay a subscription to watch the live coverage and helps to build a loyal audience — and more eyes for those ads — at a fraction of the cost of a traditional production.

One way to better understand cloud-based production is to liken it to transportation. Buying a vehicle involves a fixed cost that’s typically significant and paid up front. If the car doesn’t get used every day, or even every week, then renting might be a better option. Even then, however, it’s still a fixed cost per day or per week. But move to a Lyft or Uber model, where costs are incurred only when the wheels are moving and the car is available whenever and wherever it’s needed, and things get interesting. There is no need to pay for gas, maintenance and upkeep, insurance, and so on.

For organizations looking to stream one or two live events a week — and without the resources to build a traditional control room — a cloud platform presents an appealing opportunity. Whereas a production facility or control room requires up-front investment that can take years to accumulate, and then ongoing infrastructure and maintenance costs, production by the hour delivers instant access to current tools, with a meter that only runs when the cloud-based service is in use.
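
A back-of-envelope comparison makes the point; every figure below is a hypothetical assumption chosen purely to illustrate the CapEx-versus-hourly trade-off, not real pricing:

```python
# Hypothetical numbers: compare an up-front control-room build with
# pay-per-hour cloud production for a small two-events-a-week operation.
control_room_capex = 300_000   # up-front hardware build (USD, assumed)
cloud_rate_per_hour = 150      # hourly platform rate (USD, assumed)
hours_per_event = 4
events_per_week = 2

weekly_cloud_cost = cloud_rate_per_hour * hours_per_event * events_per_week
weeks_to_match_capex = control_room_capex / weekly_cloud_cost
print(f"Weekly cloud spend: ${weekly_cloud_cost}")
print(f"Years before cloud spend matches the CapEx: {weeks_to_match_capex / 52:.1f}")
```

At those assumed rates, the meter would have to run for nearly five years before the hourly model cost as much as the hardware build – before even counting the control room’s maintenance, staffing and refresh costs.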

This kind of flexibility has never really been available to the broadcast production world until cloud-based solutions and SaaS business models came along. Now the flexibility and scalability introduced by the low-OpEx, on-demand model can be united with higher production values and distributed workflows that allow companies to tap into a larger collaborative talent pool and capture revenue in ways that simply weren’t possible before. One instance supports all the roles needed for a particular live production: the technical director (TD), video playback, audio mixing, replays, telestration, and even commentary. In addition to removing geographic constraints and reducing travel and associated costs, this model opens up fresh opportunities for remote collaboration throughout the whole production spectrum.

Creating that ‘Eureka!’ moment

Communicating the value of this or any technology solution to buyers starts with showing up and having conversations. The value of listening cannot be overstated when working to promote adoption of something new, whether that new thing is high-end broadcast (for people new to that) or the cloud (for high-end broadcasters new to it). Rather than make assumptions about what people think or what’s meaningful to them, it’s important to listen to their needs and their objections, and then to help them move forward with ideas about how to monetize their product.

In moving forward, “show, don’t tell” also can be a useful guideline for effective communication. When people can see the quality of output they get with cloud-based live production tools, for example, they immediately recognize that rather than compromise quality, they can collaborate in using professional-grade tools to boost production value. During a demo they realize that, working together in a real-time state — with full visibility — within the platform, multiple connected users can maintain that familiar magic of working as part of a control room team or production truck crew. Those are “eureka!” moments for prospective users.

Equally compelling, in many instances, is the realization that cloud-based workflows and their benefits are readily available to the competition. While the same old titling and switching systems might seem adequate, a glimpse at what’s possible (telestration! high-end motion graphics!) and the thought of a competitor enjoying affordable access to that standard of production can be significant motivators. At some point, all of a sudden, people who think that their content is good enough without bringing it to the next level will find that other videos look better than theirs and can be monetized in more ways, and more effectively. In a larger sense, it’s about shifting the focus from getting content on air for as little as possible to thinking about the opportunity to grow the product and increase its capacity to drive revenue.

Aligning business models with evolving customer needs

The rise of cloud-based services has brought with it new business models, for broadcasters and the technology suppliers delivering various services and solutions. As with conventional hardware-based projects and installations, the requirements for cloud-based implementations and workflows for any given business will differ — and suppliers are finding they must adapt accordingly.

Chyron works with customers ranging from the largest sports networks to production teams with just a few staff members or volunteers. While all of these customers can benefit from a cloud-based live production platform, they work with it very differently. Though both ends of that spectrum take advantage of professional-grade graphics tools, their operations and business models look nothing alike. The hourly pricing model that suits a small team producing a couple of live events each week doesn’t actually add up for a much larger sports production outfit logging a massive number of hours over dozens and dozens of competitions. While the smaller production team may simply need to minimize overall cost, a university sports production team may need to align costs with a specific budget tied to a season or academic year. In both cases, we speak with customers individually and identify, and sometimes help shape, the business models and pricing structures that make sense for them.

By taking a consultative approach and addressing customers’ concerns and requirements, technology suppliers can help prospective buyers make informed decisions regarding cloud and hardware solutions. They can communicate the benefits and costs of each approach, helping broadcasters and other content creators to find a solution that aligns with their current reality and with their long-term vision for their business.

Caton Technology – Cover the globe with internet connectivity

Michael Yang, Caton Technology

Media is now a global business. Audiences anywhere are clamoring for content from everywhere.

The K-Pop phenomenon means that a concert taking place in Seoul can attract a huge audience in Seattle and Siena. In recent weeks, sports fans globally have been gripped by world championships: cycling in Scotland, netball in South Africa, and football in Australia and New Zealand.

Media connectivity is more than just television coverage of sports or concert relays to theaters.

Like every other aspect of media technology, once we were limited by expensive, bespoke hardware: now the tools for production and delivery are freely available to all. It is the connectivity that has struggled to keep up. How do we get signals from wherever the event is – football stadium or operating theatre – to wherever the audience is?

Traditionally, this meant satellite links. However, satellite bandwidth is limited and very highly priced. Dedicated fiber lines may be available to premium locations, but the essence of the new agility is the need to connect any location to any point of delivery.

There is a fabric which does now connect everywhere to everywhere: the public internet. The problem is that the internet is a wild, unmanaged environment that is unreliable and unpredictable for the performance and quality requirements of the media industry.

We are all aware of the problem of our internet connections going down. Network failures are very common. Cisco provides a live tracking website, thousandeyes.com, which shows where all the major outages are in real time. As many as 300 major network failures are seen every week, with close to half in the United States alone.

Each link in the internet is provided by a telco, and it is in their interest to move the signal on to the next telco as quickly as possible – the “hot potato” principle. But where there are failures, or data traffic bottlenecks, then the ISPs are forced to reroute streams over longer, and therefore slower, paths.

At Caton, we have developed tools to trace the paths between all the peering nodes touched by a data stream and found that latencies can vary by a factor of as much as 100 times. This is completely out of the control of the typical user. For a broadcast engineer, expecting very high reliability, no jitter and stable latencies, this is difficult to accept.

All these issues pose major challenges for delivering high-quality, high-bitrate media streams across the public internet. On the other hand, it is widely and readily available: virtually every location now has broadband internet access. And it is potentially very low cost.

But as the saying goes: there is stable and reliable; there is fast to implement; and there is cheap. You can have one, maybe you can have two. But having all three is tough.

Modern protocols like SRT and Zixi are great at delivering the content, but they are at the mercy of the network, simply having to accept the unreliability, latency, and capacity bottlenecks.

What is needed is a network architectural level solution that provides efficient management of the network as well as the individual streams. One that probes all the way along the routing, constantly checking network availability to ensure that the media streams are always on the best performing links with the right capacity.

Ideally, the algorithms in the network probes should be fine-tuning the route for reliability, latency, stability, and cost, to ensure that the end-to-end circuit not only delivers the program but does so for the lowest possible cost.
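
In spirit, the routing step reduces to scoring every candidate path on those metrics and picking the best; the sketch below is a drastic simplification with invented weights and path data, not Caton’s actual algorithm:

```python
# Toy path selection: lower score is better. Metrics, weights and path
# data are invented for illustration only.
paths = [
    {"route": "PoP-A -> PoP-C",          "loss": 1e-3, "latency_ms": 42, "jitter_ms": 1.5, "cost": 1.0},
    {"route": "PoP-A -> PoP-B -> PoP-C", "loss": 1e-4, "latency_ms": 55, "jitter_ms": 0.8, "cost": 1.4},
]

def score(p, w_loss=1000.0, w_latency=1.0, w_jitter=10.0, w_cost=20.0):
    # Penalize packet loss, latency, jitter and relative link cost.
    return (w_loss * p["loss"] + w_latency * p["latency_ms"]
            + w_jitter * p["jitter_ms"] + w_cost * p["cost"])

best = min(paths, key=score)
print("Route via:", best["route"])
```

A real system would re-evaluate this continuously from live probe data, which is where the machine learning and big-data modelling described below come in.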

To achieve this requires AI powered machine learning and big data modeling, to build and maintain a network map in real time, and to make proactive rerouting decisions that are efficient, effective and instant. That is what we have done with the Caton Media XStream.

It uses our own global Caton Cloud architecture to track signals from the origin, over the public internet through our mesh network of more than 100 points of presence (PoPs), to the final destination or destinations.

Through its AI-driven dynamic network management, it can make switching decisions in less than 20ms to guide traffic through the optimal paths. Those decisions are made for security, availability, latency, and cost.

The result is that we can offer six nines of reliability – 99.9999% uptime, roughly 30 seconds of downtime a year – for high-bitrate transmissions such as live Ultra HD and beyond. It is low cost because what we operate is an overlay core network built on the public internet infrastructure. And it is very agile, capable of being deployed virtually anywhere in a very short space of time: typically hours from enquiry to connectivity.

We know that the internet is unreliable. We cannot change that. What we can do is take a view across hundreds of mediocre, unreliable links to create one extremely reliable superhighway, using AI dynamic switching to ensure the signal always gets to its destination: error free, jitter free, with minimal delay and minimal cost.

Blue Lucy – Bringing the cloud to earth

Julian Wright, CEO, Blue Lucy

I was quite pleased to be asked recently by a Canadian colleague what our “theme” would be for the upcoming IBC show. For me, having a theme reflects the ethos of the Blue Lucy approach to trade shows: we don’t tend to talk about product features or the specific capabilities in our roadmap.

Development of features and connectors is just something we do in pursuit of delivering customer value. If an operator needs a connector to a system or service, we will build it as a microservice BLidget – and we’ve been producing them at a rate of two a week for five years. Listing 450+ BLidgets or detailing our CORE or UI functions in a tradeshow press release doesn’t convey the value of our BLAM platform or approach. We prefer to showcase business-focused solutions, which tend to follow a theme that relates to business needs at that time.

Forecasting cloud

Some will remember our cloud stand at IBC 2018 – it was very popular with the crowd, particularly after the show closed. At that time, we were about 18 months into the development of BLAM-3 and around a year out from the completion of the first customer implementations with PLAZA Media and Off the Fence. Our 2018 IBC cloud stand was a little tongue-in-cheek, highlighting the paradox of demonstrating a cloud-based platform from a dark hall in the RAI. At that time (three years on from AWS’s acquisition of Elemental) the media and broadcast industry was finally beginning to appreciate the functional power and flexibility of cloud services. The trend – and our IBC 2018 theme – was very much about “cloud migration” which we thought was inexorable, hence the stand design – the cloud is now, and Blue Lucy is there, ready.

But the BLAM platform was actually designed to be completely infrastructure agnostic, so it can be deployed on any cloud or on-prem infrastructure. This is a core tenet of the architecture, although we forecast that the vast majority of deployments would run in cloud infrastructure. Five years on, we are surprised at how deployments have actually manifested.

A mixed reality

Eighty percent of our customer base is operating cloud-ground “hybrid” BLAMs. These systems tend to have the core services – the databases and the application interface – together with one or more workflow runners (the microservice orchestrators) running in cloud infrastructure, mainly provisioned by Blue Lucy as a managed service. In addition, workflow runners are deployed on-prem at the operator facility. These manage on-prem storage, LTO libraries and other resources such as rights management systems, transcoders, file-based QC tools and edit systems (Avid and Adobe), as well as baseband recording and playout systems.

In hindsight, it was unwise for the industry to assume that the entire production and distribution capability would move to the cloud over a few short years. In many cases, it just doesn’t make sense: operators do not wish to move away from on-prem tools that are providing business value and that are still being amortized. For distributors, the concept of forklifting their inventory, which may extend to many petabytes, into cloud storage doesn’t make economic sense. Using the cloud for distribution, particularly to FAST or OTT platforms, is very common, and workflows that utilize cloud services are extremely efficient. BLAM operators are using these pipelines for content distribution to fulfil content sales – this model of ‘leaving material where it is until it’s needed or can be monetized’ is common. Equally, we have a number of customers who keep all browse material in cloud storage, while delivery is fulfilled from cloud or ground, based on which is the most cost-effective overall. Naturally, that logic is built into the BLAM workflows.
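
That “cloud or ground, whichever is cheaper” decision can be expressed as a simple cost comparison per delivery; the cost model below is an invented assumption for illustration, not BLAM’s actual workflow logic:

```python
# Toy fulfilment decision: serve a delivery from cloud storage or from
# on-prem/LTO, whichever costs less. All rates here are invented assumptions.
def fulfilment_source(size_gb,
                      cloud_egress_per_gb=0.08,    # assumed cloud egress rate
                      ground_restore_flat=25.0,    # assumed LTO restore cost
                      ground_transfer_per_gb=0.01):
    cloud_cost = size_gb * cloud_egress_per_gb
    ground_cost = ground_restore_flat + size_gb * ground_transfer_per_gb
    return ("cloud", cloud_cost) if cloud_cost <= ground_cost else ("ground", ground_cost)

print(fulfilment_source(120))    # small file: ('cloud', 9.6)
print(fulfilment_source(2000))   # large master: ('ground', 45.0)
```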

IBC theme

There are many and varied reasons why media operators cannot or do not want to go all in on the cloud, or why they wish to control the migration. So it is ground-cloud, hybrid workflows that will form the basis of our theme for IBC 2023, where we’ll showcase how Blue Lucy customers are harmonizing on-prem systems with cloud services and applications to create highly efficient and cost-effective media workflows with BLAM. In short, we’ll be bringing the cloud to earth at IBC 2023.

We are keen to talk on the basis of operational outcomes; we can work out the most cost-effective place to run the workload later, and even change our minds. We are at stand 6.C29.

Blu Digital – Fere nihil sine deo – thoughts on AI, localization and humanity

Silviu Epure, Senior Vice President, Content Globalization, Blu Digital Group

 “It was the best of times, it was the worst of times…”

What a glorious decade for global media distribution. Content consumption is higher than it’s ever been, borders have been stretched, pushed or removed entirely, “foreign” content is captivating “foreign” audiences and the inaccessible is finally becoming accessible to all.

This high-stakes global distribution marathon is made possible and fueled by the propagation of Localization and Accessibility. In simple terms, Localization takes content from one language and creates a set of parallel files (audio tracks, subtitle files, artwork, etc.) in a different language than the original. In contrast, Accessibility sets forth the creation of additional audio, video and text files that enhance the viewing experience of individuals with visual, auditive or cognitive disabilities.

This is the groundwork that gives every piece of content the ability to make its way into the hearts and minds of audiences anywhere in the world. Whether it’s bringing the vision of a Brazilian film director into the home of a Norwegian movie lover, or helping a sightless child in France “see” an Australian animated series, the value of the myriad writers, translators, adapters, voice actors, voice directors, audio engineers, audio mixers, production coordinators, producers and all the other creatively talented people who make Localization and Accessibility happen cannot be overstated.

Nevertheless, as the demand for localization and accessibility increases and the flow of content rages, so do the logistical and capacity challenges that accompany it – limited talent availability, diminishing studio capacity, supply chain inability to keep up with demand, etc.

And so, in an effort to self-correct, the market trend has been to push localization producers into a frenzy, demanding that months of much-needed creative production work should happen within weeks, that respect for artistic detail should take a back seat to “out the door-ism”, and that linguistically nuanced subtleties (which enhance viewer experiences but slow down deliveries) should find a warm place in the corner and die.

In contrast and in defiance of general market trends, a handful of global studios, broadcasters, streamers, distribution outlets and vendors continue to fight off internal / external pressures and stay true to high artistic standards of quality. Even through failure, the aim remains high on every minute of content created – that’s because this group of professionals knows all too well that even the best piece of global storytelling can be brought to its knees by a bad localization experience.

And then, there was Automation and Artificial Intelligence.

According to techopedia.com, “Automation is the creation and application of technologies to produce and deliver goods and services with minimal human intervention.” According to the same source, Artificial intelligence (AI) is “a branch of computer science that focuses on building and managing technology that can learn to autonomously make decisions and carry out actions on behalf of a human being.”

Automation has been used and refined in various industries since the 1700s, when the first spinning mill was created. Its ability to constantly improve efficiencies, increase outputs and reduce costs by speeding up tasks that were previously performed entirely by humans has been crucial to the development of our global supply of modern day goods and services.

AI, on the other hand, while it has fascinated writers, movie makers and computer scientists for decades, has not been widely applied until its recent accelerated breakthroughs placed it front and center in our news cycles and, for some, our lives.

And so, the pattern becomes clear – in our collective efforts to advance our societal ever-growing wants, we’re moving from “minimal human intervention” towards “on behalf of a human being”.

Is this the right path forward? I believe the answer to that question is nuanced and highly dependent on the field in which we are exploring its applications, on the ethical / moral / philosophic perspectives that we choose to employ in our analysis, as well as our openness and ability to accept and embrace change.

I do think one thing is certain though – from a business perspective, ignoring the patterns and the realities of the era is akin to working for Blockbuster, watching the first Netflix billboard go up and thinking that it’s not worth your limited attention.

Artistic, Creative, Human vs. Everything Else.

The rabbit hole of thought when it comes to AI’s pros and cons is deeper and wider than one can imagine at first. Once you jump in it, you can praise, despise, embrace and fear its potential applications, all in the same breath. And although we don’t quite know the real side effects that come with ChatGPT passing the US Bar Exam, I’m sure it would be worthwhile to give it some more consideration before we unleash its true potential into the world.

In the localization space, just like in many other industries, the cost / time saving opportunities brought forth by automation and AI are simply too financially seductive and operationally valuable to be ignored. From automated workflows and AI driven speech-to-text transcriptions, all the way to instantaneous AI translations, neural / synthetic voices and deepfake video manipulation, the applications seem to be endless.

That being said, in an effort to ensure that these newfound processes and technological advances help heal the industry’s quality-quantity divide rather than deepen it, it’s important to acknowledge that every form of localization service incorporates two components:

— The first component includes highly creative tasks which require human talent, subjectivity, experience and artistic vision – tasks such as translation, voice acting and creative audio mixing.

“The” Translation

Translation can mean different things to different people. If you’re traveling abroad in search of making friends around the world, translation is simply a tool that you use to communicate. In that context, scrolling through a dictionary or using an AI powered headset / visor to help translate sentences serves the exact same function. The process doesn’t need to be artistic, creative or perfect.

However, translation in the context of subtitling or dubbing TV and film content requires a completely different analysis. Every movie and TV show ever made is a manifestation of a message created by the show’s writer / director / cast. When you’re translating TV and film content from one language into another, you’re not only translating the words spoken by the on-screen characters. You’re actually translating the message of the original creators into a new language and developing a new creative experience for members of a different culture.

Linguistic nuances, formality of context, cultural improprieties – choosing the right word, at the right time, in the right circumstances, for the right character – that is a truly artistic / creative / human endeavor that AI cannot get close to successfully mimicking (for now).

“The” Voice Acting

Up until 10 years ago I had never thought about how a dub is created. I had worked in media production and distribution for many years prior but I always looked at those “foreign language audio tracks” as lifeless imitations of the original content. It was only when I started producing dubs that I realized the enormity of the challenge and the importance of the result.

So let’s paint a mental picture together where You are the voice actor.

You step into a semi-dark room equipped with a microphone and a TV screen. You put on a pair of headphones and you’re asked to pay attention to the picture, the voice director, the audio engineer, the note taker and the script, all at the same time. You are also asked to perform in such a way that seems as though you’re there, in the action, living the same adventures, breathing the same air and having the same type of emotions as the character you see on screen. But don’t just mimic the emotions, do it in a way that is appropriate for your language, your friends and your culture. And don’t just say the words, sync them to the lips of the character that you see on screen. And not just the words – every audible gesture, every drop of saliva, every sigh. And now do it again for a different character, in a different way. And do it well.

Session after session, day after day, talented, dedicated voice actors put on those headphones, stand in front of that microphone and transform silence into sound. Through their performances, they create foreign language dialogue tracks that sound just as good (or sometimes even better) than the original. They offer authentic, culturally appropriate, creative experiences without ever showing their faces. And although text-to-speech, synthetic voicing, audio cloning, and other facets of AI are making truly impressive advances in the space, they cannot begin to generate the qualitative results that these talented actors are producing or the human emotions that their performances evoke.

“The” Creative Audio Mixing

Simply put, audio mixing is the process of taking various audio tracks (dialogue, music, sound effects) and blending them all together to create an authentic / immersive audio experience.

In a dub, every time you hear a loud, echoey voice on stage or a whispered sob on a phone, that was the creative decision of an audio mixer, choosing the “just right” volume levels, reverbs, effects, modulations and everything else needed to create that perfect immersive experience. All of these voices may have been recorded in a silent soundproofed recording booth and yet it is knowledge, creativity, passion and dedication that make them sound as if they were recorded on the street / in a car / on stage or in outer space.

— The second component of localization includes logic-driven sequences of actionable tasks, such as asset / project management, transcription, timing, and technical QC (to name a few).

This component can and should be automated / AI driven because it provides the ability to optimize production capacity, support the supply chain and ensure both quantity and quality of output.

By choosing to spend less time dealing with spreadsheets, emails, manual file transfers and many other essential yet repetitive, action driven production processes, we gain the ability to invest more human time into creative translations, voice recordings and mixing sessions.

The same is true about cost optimization – by using automated / AI driven workflows in transcription, timing and technical Quality Control, we can invest more resources into creative / linguistic / artistic QA.

Ultimately, I believe automation and AI can be a blessing or a curse depending on how we choose to use them. One road leads to optimization, growth and enhanced artistic experiences, while the other to the deepening loss of our creativity and quite possibly – our Humanity.

I hope we choose wisely.