Localization – Global Content & Local Acceptance

Localization is the accurate adaptation of content through a local cultural and contextual lens. In broadcasting terms, it refers to creating compliant, culturally relevant versions of video assets for international markets. Well-curated localized content is content that makes an impact locally, and localization expands the customer base by adapting material to a specific audience. A media asset loses much of its value if the people it reaches don’t understand its language. As businesses around the world go to great lengths to communicate effectively with their target audiences, localization ensures that content reads as if it were written by a native speaker, providing strong growth opportunities for content providers in foreign markets.

Localization is a catalyst that removes the linguistic and cultural barriers standing in the way of international success. However, businesses need to acknowledge that localizing the product or service alone will not deliver the desired result; the other materials surrounding the product must be localized as well. Localization also builds brand awareness: as a brand embraces localization, customers become more aware of it, which leads to an increase in sales.

Localization does not necessarily mean word-for-word translation, but rather the adaptation of content to the language and cultural preferences of the target region. This refreshes the product, reaches a larger audience, and boosts brand prominence.

Digital Nirvana’s MonitorIQ presented a great example of localization by assisting a New York-based American media conglomerate with operations in India. With 40+ channels in multiple languages and genres, the client aims to create a multicultural experience for viewers. We addressed the client’s challenges by improving Quality of Experience (QoE), covering video and audio quality, black screens, static screens, freeze frames, low or high audio levels, missing closed captions, and missing metadata. MonitorIQ also helped them monitor competitors by recording and analyzing their content, advertisements, talent pool, shows, and ratings.

The Indian government agency that monitors broadcasting compliance, covering aspects such as content and advertisement-to-content ratios, made recording off-air signals a necessity. Digital Nirvana’s MonitorIQ helped the client record channels for this purpose.

Understanding cultural differences is probably the toughest part of localization, and this is most evident in video localization. With more than 100 million people watching videos online, user engagement has proven highly effective; now, more than ever, businesses must adapt their strategies to increase the accessibility and visibility of their video content. Localization in the form of captioning, subtitling, dubbing, and similar services gives users worldwide access to a variety of content.

Digital Nirvana’s Trance has helped companies generate subtitles in all major languages, including French and Spanish. Apart from cultural sensitivities, other points to consider when localizing videos are ensuring that the content has international appeal to capture a global audience and that it is approved by an expert in the local dialect. Despite a few limitations, localization paves the way for international success.

Practical Guide to Transcription & Closed Captioning

Digitization has affirmed the importance of having real-time information. Content consumption has grown exponentially in the last few years as consumers gained easy access to large quantities of information, 24/7. Financial services and news and sports broadcasting are two industries where real-time information accessibility and accurate presentation are major success criteria.

Financial services rely on time-sensitive, accurate information, as global financial markets run 24/7 and must react to world events instantly. Professionals working in this industry therefore place great emphasis on keeping tabs on information. However, devoting more time to following what’s happening around the world is a tough ask for these professionals, and keeping track of the financial details of thousands of publicly traded companies is a daunting task for analysts.

Digital Nirvana, with well-trained, qualified financial analysts who have deep industry knowledge, offers financial transcripts that empower analysts to review financial information and provide opinions and estimates. Our transcript team, which includes best-in-class editors, has made us one of the most sought-after vendors in the field of transcription.

Another field where Digital Nirvana has successfully established itself as a preferred service provider is news and sports broadcasting. Over the past few decades, news and sports broadcasting have evolved rapidly, and the challenge of complying with FCC regulations, which were updated in 2017, has grown with them. Under the new mandate, broadcasters faced the challenge of inserting highly accurate captions into assets within 12 hours of the original broadcast.

In light of the FCC’s increased focus on enforcement, Digital Nirvana helped one of the largest US TV networks covering news and sports comply with all the necessary requirements, including returning captions in a shorter time frame despite increased volume. Digital Nirvana’s Platform-as-a-Service (PaaS)-enabled closed captioning services are part of its media services offering. After implementing our captioning services, the client could scale up production at short notice, keep control over the budget, and access API integration with ease.

Digital Nirvana assisted the client with asset retrieval, closed caption generation, and the provision of timely, high-quality output. We leveraged artificial intelligence (AI) to automate the process, delivering benefits to our clients in terms of cost, time, and resources.

The ability to understand our client requirements, recommend solutions, and work with clients through the process made Digital Nirvana a preferred solution provider. Our finely-tuned operations and thorough process management, combined with 24/7 customer service, ensure the highest degree of transparency and accuracy.

Constant refining and fine-tuning have always ensured that we are in line with market expectations and developments. The parallel functioning of the operations and tech teams has enabled us to make our process more efficient, technically abreast, and flexible to client requirements and expectations.

What are Software Defined Workflows?

Software defined workflows span the media ecosystem and the wider technology ecosystem, allowing assets, systems and processes to be joined and run together. Workflows address the repetitive tasks an organisation runs regularly, and they can optimise pipelines by allowing content to be processed frequently and quickly.

Building a workflow: coding vs visual design

There are multiple ways to build workflows. Traditionally, setups have been built in a language such as C#. This is great where a business has developers on hand, though it requires experience and expertise in software development, or a certain scale of team. It creates problems, however, for smaller teams and organisations that don’t have the resources on hand to build a workflow in code.

More recently, workflow vendors have been working to create visual interfaces that allow workflows to be created in a rather more drag and drop fashion. This involves, for example, dragging an element onto a visual canvas, adding variables or confirmation parameters to that element and connecting it to another element. This enables team members without coding know-how to build out complex workflows which address the needs of the business. This is the route Masstech has taken with our software defined workflow builder within Kumulate.

Why is language and taxonomy important when designing a workflow orchestration engine?

There are a number of workflow or software defined workflow engines, catering not only to the media and entertainment (M&E) landscape but also to the wider technology industries. The most successful are those that apply clarity and consistency to the language and taxonomy they use. When icons for workflow steps and the language used for descriptors and categorizations (e.g. Task, Asset, Context, Participant, Infrastructure) are applied consistently within the application, it is much easier for non-coding end users to understand the concepts involved, and constructing the workflows they need becomes a simpler and faster process.

Triggering and orchestrating workflows

Workflows can be triggered within systems such as Kumulate in a number of different ways. Here are a few examples:

  • From the video timeline – While working with a proxy of a video for example, it’s possible to trigger a workflow on the complete asset or a segment of the asset.
  • From within the orchestration engine – Within the workflow orchestration engine itself workflows can be triggered to run immediately, set to run on a schedule or when certain conditions are met.
  • From the content manager – Multiple content items from any storage location can be selected from the Kumulate content management interface, and sent into a particular workflow for processing.
  • From the storage manager – Specific storage-related workflows (movement, transcode etc.) can be performed on all assets stored in a given location, directly from the storage locations module.
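The trigger styles above can be sketched in a few lines of code. The snippet below is a minimal, hypothetical illustration (the class and function names are invented for this example and are not Kumulate’s actual API): a workflow is a chain of steps, and a conditional trigger runs it only when an incoming asset matches a rule.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Workflow:
    name: str
    steps: List[Callable[[str], str]] = field(default_factory=list)

    def run(self, asset: str) -> str:
        # Each step transforms the asset reference and hands it to the next.
        for step in self.steps:
            asset = step(asset)
        return asset

@dataclass
class ConditionalTrigger:
    workflow: Workflow
    condition: Callable[[str], bool]

    def on_event(self, asset: str) -> Optional[str]:
        # Fire only when the configured condition matches the incoming asset.
        if self.condition(asset):
            return self.workflow.run(asset)
        return None

proxy_trim = Workflow("proxy-trim", [lambda a: a + ".trimmed"])
trigger = ConditionalTrigger(proxy_trim, condition=lambda a: a.endswith(".mxf"))
print(trigger.on_event("match_footage.mxf"))  # condition met: workflow runs
print(trigger.on_event("notes.txt"))          # condition not met: nothing runs
```

A scheduled trigger would follow the same pattern, with a timer rather than an asset event deciding when `on_event` fires.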

On the orchestration side, workflows are run by a server, or set of servers. Workflows are built in a browser by a member of the administration team and are then deployed to one or more orchestration engines within the customer’s platform; these servers can reside in the cloud or on premise. Individual workflows can then be executed and operated from each of these orchestration engines, allowing operations to scale easily with the business, and to run in the most appropriate and efficient virtual or physical environment.

Keeping workflows and their assets secure

As workflows and the assets they process move increasingly into cloud and hybrid-based environments, and require input from multiple partners and vendors, security becomes of paramount importance. Whether securing access to content itself, or simply pointing to a storage location such as a cloud bucket, users need to consider how to securely provide access credentials to the required cloud services, and how the billing for those services will be handled. This is referenced in some detail by MovieLabs in their white paper and video on security for production workflows. [1]
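As a small illustration of the credential-handling concern, a workflow step should obtain cloud credentials from a secrets source at run time rather than embedding them in the workflow definition. The sketch below is a generic pattern, not MovieLabs’ or any vendor’s specific mechanism, and the environment variable names are invented for the example.

```python
import os

def get_bucket_credentials() -> dict:
    # Read credentials from the environment (populated by a secrets manager or
    # the deployment platform), so workflow definitions can be shared and
    # version-controlled without ever embedding secrets in them.
    key = os.environ.get("MEDIA_BUCKET_ACCESS_KEY")
    secret = os.environ.get("MEDIA_BUCKET_SECRET_KEY")
    if not key or not secret:
        raise RuntimeError("Bucket credentials not provisioned for this workflow")
    return {"access_key": key, "secret_key": secret}
```

Billing follows the same logic: because the credentials belong to the customer’s own cloud account, usage incurred by the workflow is billed to that account rather than to the vendor.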

Using software defined workflows to drive organisational pipelines

Software defined workflows’ main function is to enable the user to create repeatable pipelines of activity. Whether fully automated, or a combination of automated and manual steps, these pipelines are designed to drive content of a variety of forms (in our examples, video media) through a series of predefined stages. In much the same way that a factory processes goods on a production line, software defined workflows create a digital production line for processing content.

The pipelines allow you to consistently and repeatably process content for many purposes, such as transcode and delivery, content enrichment or storage management. Let’s look at some examples where workflow automation can assist your business in replacing manual steps that team members may have to take today.

  • Multi-Platform Delivery: e.g. transcoding single or multiple source files into multiple delivery formats, with accelerated delivery.
  • AVID Nexus Storage: e.g. the synchronisation or triggering of content movements between Nexus storage and other storage volumes, based on rules.
  • AI Enrichment: e.g. the enrichment of one or more pieces of content using AI services such as Amazon Rekognition or Graymeta Curio.

All of these simple examples could have been performed manually by a member of a team. However, these workflows and their associated triggers allow the organisation to consistently perform tasks automatically, with little to no human intervention, thereby reducing the time to perform them, eliminating human error, and enabling additional reporting.
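As a concrete sketch of the multi-platform delivery example above, the snippet below fans a single source file out to several target formats in one automated pass. The format map and helper functions are invented for illustration; a real workflow step would call an actual transcoder and delivery endpoint.

```python
# Hypothetical platform-to-format map for this example.
FORMATS = {"youtube": "mp4", "broadcast": "mxf", "web": "webm"}

def transcode(source: str, fmt: str) -> str:
    # Stand-in for a real transcode job; returns the rendition's name.
    return f"{source}.{fmt}"

def deliver(rendition: str, platform: str) -> str:
    # Stand-in for a real delivery step.
    return f"{rendition} -> {platform}"

def multi_platform_delivery(source: str) -> list:
    # One trigger produces every rendition; no operator repeats manual steps.
    return [deliver(transcode(source, fmt), platform)
            for platform, fmt in FORMATS.items()]

for line in multi_platform_delivery("promo_v2"):
    print(line)
```

Because the pipeline is data-driven, adding a new delivery platform means adding one entry to the map rather than training staff on another manual procedure.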

Integrating software defined workflows with your other tools

Utilising software defined workflows allows rapid integration with third-party tools and systems, whether for processing, data exchange or even manual steps. Here at Masstech we have created many template workflows, with many integrations, such as:

  • Amazon Rekognition: automated content analysis for recognition of people, objects, locations, non-compliant content, etc.
  • Tape and disk storage migration: automated, rapid migration of content, e.g. from on-premise to cloud, as an automated background operation that doesn’t disrupt daily operations.

And, of course, there are many others as part of Kumulate’s software defined workflow toolset which allow you to integrate with a range of cloud and on-premise services, including any of your own in-house systems or tooling.

How can Masstech help?

Masstech can help you to automate both manual and digital processes. We provide:

  • Visual software defined workflow builder with drag and drop interface
  • Pre-built template workflows
  • Workflow orchestration engine
  • Powerful workflow status and reporting
  • Professional services as required to support your team

We’re happy to help identify challenges and bottlenecks within your business that could potentially be optimised and automated, obligation-free. Our workflow experts can pinpoint where pre-built or customised workflows can address your specific business requirements, as well as provide professional services to assist you and your team however we can.

info@masstech.com  www.masstech.com

References:

[1] MovieLabs – https://movielabs.com/news/new-whitepaper-future-of-security-for-production-workflows/

That’s a Wrap! Newsbridge Top 10 Highlights of 2020

It’s fair to say that this past year we’ve been up, down and all around. Literally overnight lives changed drastically and for the first time we were living through the kind of global pandemic that we’d only read about in history books. I think we can all agree that we’re living in unprecedented times.

That said, despite the extremely low lows of 2020 there were also some silver linings. With international lockdowns and increasing work-from-home orders, many families were reunited, pollution rates steadily dwindled, digitization peaked, the season of innovation flourished... and hey, we were also graced with free balcony concerts from across the globe!

In times of crisis, the human will will always prevail, or at least that’s what we’ve found here at Newsbridge. Here are some of the team’s top 2020 highlights:

1) New Office 

Right before confinement hit, Newsbridge moved its headquarters 2 blocks down and 7 stories up! In order to accommodate the growing team, it was time for an office upgrade and a new view of our beautiful neighborhood. Needless to say the entire team is pleased with our blue sky bureau and still waiting for a post-covid time to do our housewarming party or ‘crémaillère’.

2) R&D Gains and New Product Releases Recap 

One way Newsbridge has been able to differentiate itself from the competition is our continuous commitment to evolving and staying abreast of the latest trends in Multimodal AI, along with taking customer feedback into account and turning it into actionable solutions within the platform. This year, alongside hiring a new in-house AI Researcher focused solely on improving Multimodal AI applied to transcription and speaker detection, we shipped a total of 7 new product releases, ranging from the SRT live stream transmission protocol and YouTube integration to a reimagined cart experience and updated media player keyboard shortcuts, just to name a few!

Now users can send their videos directly to YouTube from Newsbridge’s MAM & production platform.

Check out all of our new product releases here.

3) Growing Client List: Top European sports organizations 

Newsbridge is proud to be the MAM of choice for the French Federation of Rugby, along with a growing list of other top European sports organizations, one of which will be publicly announced this upcoming year…so stay tuned!

Take a look at how Newsbridge is helping the FFR with AI powered indexing technology composed of facial recognition, smart label detection, content organization and cognitive research here.

4) New Faces!

This year Newsbridge added 5 new and talented employees to the mix:

  • Gael, Back-end Dev
  • Emilie, Account Manager
  • Olivier, DevOps
  • Yannis, AI Researcher
  • Romain, Solutions Architect

Together, this group of new employees boasts combined backgrounds in broadcast and systems engineering, corporate dev and specialized PhD research. We’re certainly thrilled to welcome you to the team. (Special shout out to all of our amazing interns not listed!)

5) Covid Crisis Averted: Rescue Remote Solution for our Clients 

Out of all of our annual accomplishments, this is perhaps our proudest point. Here at Newsbridge, one of our values is being ‘User-Focused’, always putting our clients first. When global lockdown first hit in the early months of 2020, our cloud-based, Multimodal AI powered Media Asset Management Platform allowed production teams to kick off rough cuts in a matter of minutes. At the same time, our SOS remote production solution gave our clients unprecedented access to content from the comfort of their own homes, promoting collaborative workflows around shared content while teams were working apart and avoiding on-prem bandwidth and server capacity issues. In return, our users experienced an uptick in employee autonomy and ROI, with a major decrease in production time.

Here are a couple of our case studies:

TF1 Media Factory + Newsbridge Remote Production

Newsbridge + AFP Webinar: Cloud-Based Production 

6) Launching our Media Asset Collections powered by Multimodal AI

This year we worked hard to re-think the media asset management experience for sports and media rights holders. In response, we released our latest innovative feature powered by AI: Collections.

Collections are sets of media that automatically update with matching content and are made available to story-producing team members and partners.

Logging, organizing and sharing your media assets are an essential part of your everyday workflows. Media room collaborators spend a lot of time searching for and grouping media assets from different places to match their project needs. Collections now dramatically simplifies the way you access and grant access to your media assets, while also helping you gain more insight from your content.

7) 1000 LinkedIn Followers

You love us, you really really love us! This year we were beyond excited to hit our 1k followers mark on LinkedIn. It’s amazing to see our network growing in parallel to our business growth. We have some new milestones to reach for 2021…watch out 2k here we come!

8) Newsbridge’s New Website Launch: Showcasing Multimodal AI

Among an array of new and improved functionalities, this year we unveiled a newly designed website highlighting Newsbridge’s next-level Multimodal AI technology which allows production teams to cognitively index digital media assets via cross-analysis of facial, language, object and context detection.

The newness didn’t stop there! We also worked on a new version of our logo:

9) Most Read Insight Piece of 2020: Understanding Multimodal AI

This past year, according to Google Analytics, we saw that our most popular blog insight piece was:

Multimodal AI: Computer Perception and Facial Recognition. This article recapped the multimodal approach that Newsbridge’s technology uses to identify specific sequences through various metadata, and how it is the same logic that the human brain uses to better perceive external scenarios and draw conclusions.

For a complete overview, check out the article for yourself!

10) Annual Newstrip 

Every year the team takes part in a highly anticipated, yet quite mysterious, team building weekend event. This past year’s Newstrip (fortunately pre-covid) was held at a chateau south of Paris with a Murder Mystery Dinner Party theme and various top secret team building activities. It was a great opportunity for the team to celebrate its accomplishments from the previous year and enjoy the company of our awesome colleagues.

Newsbridge Thoughts: Gratitude for 2020+  

Despite the obvious difficulties of 2020, Newsbridge has been working extremely hard to provide top tier solutions for our clients and their unique media asset management and cloud production needs related to digital transformation. Throughout the entire year we have remained ever-grateful as a team to be helping others ‘carry on as usual’.

As a team we’re really proud of all that we have accomplished over the past year and looking forward to our upcoming initiatives. Stay tuned for our soon to be published blog post featuring our 2021 highlights to come!

MAM, PAM, What’s the DAM Difference?

Asset Management can be confusing! Let’s break it down.

DAM, MAM, PAM…maybe you’ve heard these terms pop up in everyday business jargon. But have you ever stopped to think about what they stand for?

Many times, they are used interchangeably, causing confusion even among those who use these platforms on a daily basis. Whether you’re a Marketing or Media Asset Manager, CTO, Video Producer, Documentalist, etc., it’s hard to keep up with constantly evolving media asset management platforms and technologies.

With content creation at an all-time high, and more than 56% of international organizations increasing their spending on content production as of 2018, it is predicted that more and more organizations have already adopted, or will soon be required to adopt, high-functioning media valorization platforms.

So, how are organizations selecting the correct platform for their unique needs if they don’t even know which one they’re supposed to be using? Although DAM, MAM and PAM solutions all fall under the ‘asset management’ umbrella, they have extremely different meanings.

Let’s find out the DAM difference!

4 Asset Management Options: Explained 

1) DAM: This acronym stands for ‘Digital Asset Management’. DAM systems store all kinds of digital assets (photos, audio, videos, documents and HTML files). Although a DAM has the capacity to store multiple file types, it is best suited to smaller files such as photos or Word documents; larger file types are usually not well supported. A DAM system is largely used by product marketing professionals to index, find and utilise an organization’s brand-related content.

A DAM is most commonly used for uploading and storing photos or Word documents.

2) MAM: Otherwise known as ‘Media Asset Management’, this type of system specializes in the management of digital media files. This platform is at the center of video, image and audio workflows, built to ingest larger-sized media files. A MAM serves an integral role in the audiovisual production process, allowing editors to upload and store content in a centralized space. It is especially useful if you are archiving high quality media files for a long duration.

3) PAM: This stands for ‘Production Asset Management’. A PAM system is built specifically for production workflows, such as films or video games. If you are working with fast-paced assets across workflows that need more precise editing capabilities, a PAM is best suited for you. On the other hand, PAMs are not usually used as platforms for storing and working with archives.

4) Media Valorization Platform Powered by AI: The ‘All-in-One’ solution. One media asset management platform takes all of these previous needs into account: a solution that handles most digital assets, of all sizes, along with advanced auto-indexing and storage options. Backed by powerful multimodal indexing AI technology, the platform also offers auto-creation of media Collections (ever-growing folders of relevant media assets) along with content resale showcasing options.

Newsbridge’s advanced media valorization platform powered by AI (MAM, DAM, PAM…in one).

About Newsbridge

Newsbridge is a cloud-based solution offering video indexing tools based on Multimodal AI contribution for Media and Sports Rights Holders, leading to next-gen content valorization.

Taking into account facial, object and scene recognition with audio transcription and semantic context, Newsbridge provides unprecedented access to content.

Whether it be derushing, archiving or investigative research, the solution allows for smart media asset management. Today our platform is used by journalists, editors, TV channels, documentarists, production houses and sports federations in contribution and post-production workflows.

To learn more about our offerings, please contact us today for a free demo.

SOTA Face Recognition Systems: How to Train Your AI

Overview: Face Recognition Market

The modern-day software used to protect against fraud, track down criminals and confirm identities is one of the 21st century’s most powerful tools, and it’s called face recognition.

In 2019, the face recognition market was estimated at over $3.6 billion. This number continues to grow, with a predicted $8.5 billion in revenue by 2025, with various governmental and commercial applications ranging from security and health to banking and retail, among thousands of others.

The Man-Machine Approach: Bledsoe’s Manual Measurements 

The origins of face recognition stretch back to the 1960s, when American mathematician Woodrow “Woody” Bledsoe co-founded Panoramic Research Incorporated, where he worked on a system specializing in the recognition of human facial patterns. During this time, Bledsoe experimented with several techniques, one of which was the ‘Man-Machine’ approach, in which a bit of human help was also part of the equation.

Fun fact: Bledsoe referred to his technology as his ‘computer person’ that had the ability to ‘recognize faces’; this is a photo of his own facial measurements.

Using a RAND tablet (often referred to as the iPad’s predecessor), Bledsoe worked with hundreds of photographs, manually noting multiple facial feature coordinates onto a grid with a stylus. From there, he transferred the accumulated data to his self-programmed database. When new photos of the same individuals were introduced, the system was able to retrieve the photo from the database that most closely matched the new face.

Sounds like he would have described his career as picture-perfect 📷

Although one of his last projects was assisting law enforcement in finding criminal profile matches based on mugshots, the technology’s legacy was far from over. Soon after, commercial applications skyrocketed, with other scientists eliminating the ‘Man-Machine’ approach. One example was Japanese computer scientist Takeo Kanade, who created a program that could auto-extract facial features with zero human input, using a bank of digitized images from the 1970 World’s Fair.

The device that Takeo Kanade used to auto-extract facial features at the 1970 World’s Fair.

Modern Face Recognition: Commercial Uses 

Nowadays, face recognition systems are widely deployed commercially. Although the technology has a reputation for being potentially intrusive, it is also paving the way for increased national and international security, along with advanced asset management functionality.

Here are some examples:

Mobile Phone Industry: Perhaps the most well-known example of face recognition technology is something most of us use every day: our cell phones. Apple’s Face ID was introduced in 2017 with the launch of the iPhone X. Face ID provides a secure way to authenticate an iPhone user’s identity using face recognition, allowing individuals to unlock their phones, make purchases and sign in to apps securely. By comparing a depth map of the face (captured with an infrared sensor) against the enrolled face, Apple added an extra layer of security, ensuring that a person’s identity cannot be faked (e.g. by holding up a printed picture of that face).

Banking Industry: With high levels of fraud related to ATM cash withdrawals, some large banking corporations such as the National Australia Bank are investing in research related to client ID security. In its initial experiments, the bank integrated Microsoft’s ‘Windows Hello’ facial recognition technology into a test ATM setup, which used a face scanner to verify a client’s identity.

Sports & Media Rights Holders: Many rights-holders are using face recognition to work smarter and more quickly with their media assets. For example, the French Federation of Rugby (FFR) adopted Newsbridge’s media asset management system, whose built-in face recognition (among other functionalities) automatically analyzes internal content, detecting certain players, objects or logos across voluminous hours of raw video footage and photos.

Hotel Industry: Checking in is now easier than ever at certain Marriott hotels in China, thanks to a recent face recognition software partnership with Chinese tech giant Alibaba. Upon entry, guests approach face recognition kiosks that let them scan their faces, confirm their identities and check in automatically. The initiative awaits global roll-out pending a successful assessment.

Newsrooms: Many newsrooms, or should we say ‘news labs’, have broken down the science of retrieving the perfect news extracts for a story in record-breaking time. When creating news segments, newsroom editors and producers are tasked with delivering the perfect shots, which they can usually search for in their media asset management systems. When throwing face recognition into the mix, things get a whole lot quicker…and more profitable.

Security & Law Enforcement: Combatting crime and terrorism one image at a time! One example involves face biometrics, used when cross-checking and issuing identification documents. The same technology is increasingly used at border checks, such as Paris-Charles de Gaulle airport’s PARAFE system, which uses face recognition to compare an individual’s face with their passport photo.

Limitations of Traditional Face Recognition Systems 

Face recognition systems have the potential to be extremely effective tools, but even the most powerful systems have limitations, such as:

Database Aggregation: Usually, face recognition systems have pre-existing datasets of legally derived faces based on known profiles from partner providers such as Microsoft Azure and IBM Watson. Depending on the use case (newsrooms, for example), users are working with internal audiovisual assets (photos/videos) and need a system they can train to recognize unknown individuals across thousands of hours of audiovisual content. Essentially, they need to build their own dataset of known individuals…which requires some human help.

Facial Occlusions: Multiple challenges exist in recognizing facial features when an individual’s face is partially obscured (e.g. by a scarf, a mask, a hand over the mouth, or a side profile). Because occlusion distorts the features and calculated alignments that a face recognition system normally relies on to establish identity, the result is usually no recognition or an incorrect one.

False Positives: Many face recognition systems cannot provide a high level of accuracy when identifying individuals. For example, if two profiles look alike and share multiple facial features, the machine will sometimes detect the incorrect profile.

The good news? Face recognition systems have evolved, and all of these challenges can be addressed using a Man-Machine approach along with Multimodal AI (cross-analyzing face, object and speech detection to increase the confidence score of a recognized individual’s identity).
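To make the multimodal idea concrete, here is a minimal, purely illustrative sketch of confidence fusion: per-modality scores are combined into a single identity confidence. The weights and the simple weighted sum are assumptions for illustration, not how any particular product computes its score.

```python
def fuse_confidences(face: float, speech: float, context: float,
                     weights=(0.6, 0.25, 0.15)) -> float:
    """Combine per-modality confidence scores (each in 0..1) into one
    identity score via a weighted sum. Weights are illustrative only."""
    return sum(w * s for w, s in zip(weights, (face, speech, context)))

# A borderline face match alone yields a low overall score...
print(round(fuse_confidences(0.70, 0.0, 0.0), 3))
# ...but corroborating speech (e.g. the person's name being spoken) and
# scene context push the combined confidence higher.
print(round(fuse_confidences(0.70, 0.9, 0.8), 3))
```

The point of the cross-analysis is exactly this: an ambiguous face match can be confirmed or rejected by what is heard and seen around it.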

Your Machine-based Technology is only as good as your Man-made Thesaurus 

It’s important to note that there is no standard when it comes to a shared Thesaurus. Many companies rely on pre-existing datasets, but these can be limiting because users cannot alter them for their custom needs (e.g. adding additional profiles). When building a custom database of unknown faces, it’s crucial to invest in the man-made creation of your digital Thesaurus.

For reference, in regard to information retrieval, a Thesaurus is:

“A form of controlled vocabulary that seeks to dictate semantic manifestations of metadata in the indexing of content objects. A thesaurus serves to minimise semantic ambiguity by ensuring uniformity and consistency in the storage and retrieval of the manifestations of content objects.” – Source

Simply put, in the context of face recognition, a Thesaurus is a database or list of known individuals which acts as a reference point to detect the same faces among thousands of assets which may include these individuals’ faces.

A thesaurus is composed of more than strings and photos: each entity is assigned several identifiers as references. For example, at Newsbridge we provide at least three identifiers for most of our customers:

1) An internal unique id

2) The customers’ entity legacy id

3) A wikidata identifier (we consider it a universal id)

Using these identifier types makes entities more practically useful (and is the only way to leverage semantic search and analysis). It’s also the best way to ensure machines can talk to other machines. For example, assigning a wikidata identifier to an entity makes it highly interoperable when moved or shared with other systems.
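An entity record carrying the three identifier types described above could be sketched like this. The field names and example values are illustrative assumptions, not Newsbridge’s actual schema (though Q303 is the real Wikidata identifier for Elvis Presley).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ThesaurusEntity:
    """One known individual in a face-recognition Thesaurus."""
    internal_id: str            # 1) internal unique id
    legacy_id: Optional[str]    # 2) the customer's legacy entity id
    wikidata_id: Optional[str]  # 3) "universal" id, e.g. "Q303"
    display_name: str

entity = ThesaurusEntity(
    internal_id="nb-000123",
    legacy_id="CUST-4567",
    wikidata_id="Q303",
    display_name="Elvis Presley",
)
print(entity.wikidata_id)
```

Keeping the wikidata id alongside the internal and legacy ids is what lets the record survive a move between systems: any platform that understands Wikidata can resolve the same individual.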

So if your custom Thesaurus is properly completed, the machine should be able to identify your list of individuals among your ingested audiovisual content.

Example: Training Your Own AI via a MAM Platform with built-in face recognition

It’s long been said that machines can’t do everything autonomously. And maybe if we circle back to Bledsoe’s ‘Man-Machine’ Approach we can see that he was onto something…

When Bledsoe was inputting manually derived faceprints into his database, he was essentially training his machine, because he had to create a non-existent database. But today, what if end-users were able to train their face recognition systems too, without any coding experience?

This is where the Man-Machine Approach resurfaces…with the addition of Multimodal AI.

If you’re a Media Manager working with thousands of internal media assets that feature unknown individuals who would not usually be recognized by a pre-existing dataset, then there should be an easy way to train your machine to recognize faces among your assets, without needing to be a skilled programmer.

Taking a media valorization platform as an example, once users upload all of their media assets (photos, videos, audio files), the system automatically tags and indexes files based on multimodal AI.

For the faces it does not know, an end-user can manually update their Thesaurus list by uploading three photos of each individual they would like the machine to recognize among their thousands of assets. Once this is complete, the system will recognize these once-unknown individuals. Via built-in face recognition technology, all individuals become searchable via a semantic search bar.

By using this Man-Machine approach within an AI-powered media valorization platform with built-in face recognition, recognition capabilities and accuracy increase.

Now that’s what we call SOTA face recognition!

About Newsbridge

Newsbridge is a cloud-based solution offering video indexing tools based on Multimodal AI contribution for Media and Sports Rights Holders, leading to next-gen content valorization.

Taking into account facial, object and scene recognition with audio transcription and semantic context, Newsbridge provides unprecedented access to content.

Whether it be derushing, archiving or investigative research, the solution allows for smart media asset management. Today our platform is used by journalists, editors, TV channels, documentarists, production houses and sports federations in contribution and post-production workflows.

To learn more about our offerings, please contact us today for a demo.

Primestream Case Study: Organization of American States

The Organization of American States is the world’s oldest regional organization, dating back to the First International Conference of American States, held in Washington, D.C., from October 1889 to April 1890. That meeting approved the establishment of the International Union of American Republics, and the stage was set for the weaving of a web of provisions and institutions that came to be known as the inter-American system, the oldest international institutional system.

Today, the OAS brings together all 35 independent states of the Americas and constitutes the main political, juridical, and social governmental forum in the Hemisphere. In addition, it has granted permanent observer status to 69 states, as well as to the European Union (EU). The Organization uses a four-pronged approach to effectively implement its essential purposes, based on its main pillars: democracy, human rights, security, and development.

Primestream Case Study: University of Southern California

University of Southern California’s Annenberg School for Communication & Journalism with The Julie Chen/Leslie Moonves and CBS Media Center

“The Master of Science program here at the University of Southern California’s Annenberg School for Communication and Journalism used to be a two-year program. 2014 marks the first year of a new nine-month program in a new building,” says Vince Gonzales, Coordinator, Master’s Programs, School of Journalism, USC Annenberg School for Communication & Journalism, who oversees the new Masters of Science and Arts program.

Communication and journalism students now share a new common workspace in the Julie Chen/Leslie Moonves and CBS Media Center at USC’s Wallis Annenberg Hall. Rising from the center of the USC campus in Los Angeles, the building’s collegiate Gothic exterior gives way to a Greek assembly-style forum and a 30-foot digital media wall that greets visitors with a real-time feed of student-created programming.

Collaborative spaces drive the 88,000 square-foot building’s design: project areas can be reconfigured with movable walls, hallways are lined with whiteboards to encourage impromptu conversations and meet-ups, and anywhere that glass can replace drywall, it does, owing to the school’s philosophies of sharing and transparency.

Primestream Case Study: New World Symphony

Miami Beach, Florida — At the New World Symphony, we had a problem. We had years of content, thousands of video recordings and images of the academy, faculty and guest artists in action, but there was no way for us to access it for either educational or marketing purposes.

From the beginning, our Fellows have been working with leaders in the classical world, and these interactions are invaluable. We’ve taken countless pictures and recorded a lot of audio and video, but we had no way to put these assets to use because they were difficult to catalog and locate as needed. A video clip of a master class with a visiting faculty member can be a powerful teaching tool for years to come.

Another challenge I kept hearing from our marketing team was their need to put together multimedia presentations for fundraisers or to share with patrons and media outlets, showing how the New World Symphony has helped our Fellows get to their next steps in life.