Contents
- What is Artificial Intelligence?
- A Definition of Artificial Intelligence
- A Brief History of Artificial Intelligence
- Artificial Intelligence in Broadcast & Media
- State of Adoption
- Artificial Intelligence Deployments in Broadcast & Media
- Content Production & Post-Production (only available to IABM members as part of the full report)
- Content Management & Monetization
- Content Distribution & Delivery (only available to IABM members as part of the full report)
- Lessons from Other Verticals (only available to IABM members as part of the full report)
- How Artificial Intelligence is being used in E-Commerce (only available to IABM members as part of the full report)
Introduction
IABM Media Tech Trends reports annually track the adoption of specific emerging technologies within the broadcast and media sector.
The purpose of these reports is to enable member companies to better understand the drivers of emerging technologies’ adoption within customer organizations.
This should provide member companies more tools to better address the challenges lying ahead, from new product development to marketing strategy.
In an ever-changing industry such as media technology it is increasingly important for suppliers to keep track of emerging technologies’ development and their use cases, both in media and in other verticals.
Report Content
This report contains a discussion on the state of adoption of the emerging technology in broadcast and media as well as an analysis of significant customer deployments.
We also include a discussion on the adoption of the technology in question in another vertical.
This edition of the report focuses on Artificial Intelligence (AI).
What is Artificial Intelligence?
A Definition of Artificial Intelligence
Artificial Intelligence (AI) is a branch of computer science that aims to create intelligent technology capable of replicating human learning and problem-solving skills.
This report considers Machine Learning (ML) and Deep Learning as sub-sets of the wider AI field.
ML is an early application of AI that gives computer systems the capability to learn from data without being explicitly programmed. Machine learning algorithms find patterns in data and make predictions (and therefore decisions) on the basis of those patterns, without human intervention in the form of static program instructions. More specifically, ML algorithms estimate an inferred function that predicts an output from the data fed to them. In this sense, ML algorithms are dynamic: their performance depends on the quantity and quality of the information they are fed, and they learn from the errors they make in predicting the output – i.e. the difference between the predicted output and the correct output.

ML is generally divided into two sub-branches: supervised and unsupervised learning. The former uses labelled data to make inferences about the output, while the latter is not trained with any labelled information but instead finds hidden patterns in unclassified data.
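To make the distinction concrete, the short Python sketch below (our own illustration using scikit-learn; the synthetic data and model choices are assumptions, not part of the report's research) trains a supervised classifier on labelled examples and then lets an unsupervised algorithm find clusters in unlabelled data.

```python
# Minimal illustration of supervised vs unsupervised learning (scikit-learn).
# The data and model choices here are illustrative assumptions only.
from sklearn.datasets import make_classification, make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.cluster import KMeans

# Supervised learning: labelled data -> learn a function that predicts the label.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)          # estimate the inferred function
print("held-out accuracy:", clf.score(X_test, y_test))    # quality of its predictions

# Unsupervised learning: no labels -> find hidden structure (here, clusters).
X_unlabelled, _ = make_blobs(n_samples=500, centers=3, random_state=0)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_unlabelled)
print("cluster sizes:", [int((clusters == k).sum()) for k in range(3)])
```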
Deep Learning is a further development of ML that enables computer systems to imitate the workings of the human brain in problem-solving. The advent of Deep Learning can be traced back to the emergence of Artificial Neural Networks in AI research. Artificial Neural Networks are systems of hardware and/or software modeled on the interconnections between neurons in the human brain. They are organized in layers that process the raw information – each layer performs a task on the information received from the preceding layer. The output layer generates the answer after the preceding layers have sequentially processed the available information. The network can then “learn from its mistakes” by applying different weightings to different input streams on the basis of their contribution to getting the right answer. In this way, neural networks can adapt to the reality surrounding them.
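The minimal sketch below (again our own illustration, in Python/NumPy; the XOR task, network size and learning rate are arbitrary assumptions) shows a two-layer network whose weights are adjusted on the basis of the error between its prediction and the correct output.

```python
# A minimal two-layer neural network in NumPy, illustrating how weights are
# adjusted on the basis of the error between predicted and correct output.
# Architecture, data (XOR) and learning rate are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # correct outputs (XOR)

W1 = rng.normal(size=(2, 8))   # input layer  -> hidden layer weights
W2 = rng.normal(size=(8, 1))   # hidden layer -> output layer weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: each layer processes the output of the preceding layer.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # "Learning from mistakes": the error drives the weight adjustments
    # (backpropagation with gradient descent).
    error = output - y
    grad_out = error * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    W1 -= 0.5 * X.T @ grad_hid

print(np.round(output, 2))   # after training, should be close to [0, 1, 1, 0]
```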
Once confined to science fiction, AI technology has slowly developed to become one of the most important emerging technologies of our time. Today, AI powers successful consumer products such as the Amazon Echo as well as Google and Facebook’s internet services for search, visual recognition and translation. Its importance though lies in its potential going forward. Before delving deeper into that, the next chapter provides a brief history of AI.
A Brief History of Artificial Intelligence
The history of AI has been characterized by ‘boom’ and ‘bust’ cycles similar to the workings of a modern economy. The ‘booms’ have generally entailed a rise in expectations and optimism with regards to AI research while the ‘busts’ have abruptly exposed the mismatch between these expectations and actual achievements.
The birth of AI as a branch of computer science can be traced back to the Dartmouth Conference in 1956, when a group of scientists introduced the concept of machines capable of replicating human learning and intelligence. While AI received a lot of attention and funding during the 60s, the 70s marked the start of what has been dubbed ‘The First AI Winter’, with funding for AI projects cut as a result of several research setbacks.
Among the issues that prevented early AI experiments from making progress were the limited storage and processing power available at the time – the UK’s Lighthill report explicitly criticized the failure of applied AI research to meet the expectations set in the 50s.
Interest (and investment) in AI re-emerged during the 80s, when both the Japanese and UK governments started pouring money into AI research. This period saw the first applications of AI to specific business problems, showing that the technology could be successfully applied to real-world issues to achieve efficiency gains. Interest (and investment) fell significantly again at the end of the 80s with ‘The Second AI Winter’.
In the 90s, AI started being deployed in the technology industry as the increase in computing power enabled more effective applications. A landmark event in AI history came in 1997, when IBM’s Deep Blue supercomputer beat world chess champion Garry Kasparov.
In 2011, IBM’s new supercomputer, Watson, beat human participants on the Jeopardy! quiz show, showing that the application of the technology was not confined to logical problem-solving: AI could also understand the specifics of human language.
In the 21st century, AI technology continued to benefit from the exponential increase in processing power and storage capabilities as well as the rising amount of information made available by the internet revolution – this made a vast amount of ‘big data’ available to ML algorithms.
AI started being deployed into industries other than technology with research in Deep Learning strengthening the earlier applications of ML.
Funding for AI startups reached a peak (US$5bn) in 2016 according to CB Insights. AI also featured as the most talked about technology at the recent CES tradeshow.
Today, AI looks capable of fulfilling the promises set in 1956. Its influence on our everyday lives is already significant, and its potential for the future is even greater.
The next section looks at the influence that AI is having in broadcast and media.
Artificial Intelligence in Broadcast & Media
This section contains an analysis of the adoption of artificial intelligence in broadcast and media. We first comment on the current state of adoption of artificial intelligence in the sector and then discuss the use cases of this technology in the broadcast and media industry.
As mentioned earlier in this report, artificial intelligence funding reached a peak in 2016 when the first major applications of the technology to the media industry also started being adopted. In 2016, established media technology suppliers launched new AI functionalities in their product offerings while a group of AI startups with a focus on the media industry gained momentum.
The rise of cloud technology within media technology customers’ infrastructures also made it possible for them to harvest an increasing amount of information on their operations. The next frontier for most of them is getting more data on audiences, by going direct-to-consumer.
At the latest IBC 2017 tradeshow, artificial intelligence was for the first time one of the main themes, both at the exhibition and in the conference sessions. Data is now considered one of the foremost elements (if not the most important element) in media companies’ future strategy.
State of Adoption
According to IABM data, AI adoption in broadcast and media is still at an early stage, as shown by the chart below:
Only 8% of media technology buyers said they had adopted it before the latest IBC 2017 tradeshow – however, adoption has increased significantly in the last 6 months. 36% said that they were unlikely to adopt it while 56% said that they were likely to do so in the next 2-3 years.
These results show that artificial intelligence is just at the start of its adoption curve. Across all buyer categories, adoption is still quite low, although customers such as Pay-TV operators, broadcasters and system integrators are generally more likely to adopt it – and more likely to already use it – than production/post-production companies.
While Pay-TV operators can take advantage of their customers’ data to reduce churn and increase subscriptions, broadcasters can rely on AI-driven audience insights when they bypass traditional distribution networks by going direct-to-consumer.
The low percentage of companies that have adopted AI masks how quickly adoption of the technology has grown – in less than a year, as shown by the historical chart.
In fact, according to IABM data, between April and September 2017 the percentage of media technology buyers saying that they are unlikely to adopt artificial intelligence dropped from 57% to 36%, indicating increasing awareness of the technology’s benefits – the percentage of respondents saying that they already deploy it also increased from 5% to 8%. In the same time span, ‘Big data analytics & AI’ jumped from #13 to #10 in the IABM technology priority index.
The primary driver of adoption of AI technology is the opportunity to automate routine workflows that are currently executed manually. Netflix claims it saves about US$1bn a year thanks to AI technology’s ability to automate its content workflows and reduce customer churn. However, AI also delivers deeper insights into audiences. These can be used for content monetization – e.g. advertising and content licensing – and customer retention. In fact, audience data can be transformed into effective customer retention campaigns or fed to recommendation/personalization algorithms to establish a more personal relationship with viewers – this is key in a direct-to-consumer model.
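As a simplified illustration of the recommendation logic that audience data can feed – a toy sketch only, not any operator’s actual system – the Python snippet below scores unwatched titles for a viewer by similarity to the titles they have already watched.

```python
# Toy item-based recommendation from a viewer/title matrix (1 = watched).
# Data and scoring are illustrative assumptions, not any broadcaster's system.
import numpy as np

titles = ["drama_a", "drama_b", "sport_a", "sport_b", "news_a"]
views = np.array([
    [1, 1, 0, 0, 1],   # viewer 0
    [1, 1, 0, 0, 0],   # viewer 1
    [0, 0, 1, 1, 0],   # viewer 2
    [0, 1, 1, 1, 0],   # viewer 3
], dtype=float)

# Cosine similarity between titles, based on who watched them.
norms = np.linalg.norm(views, axis=0, keepdims=True)
sim = (views.T @ views) / (norms.T @ norms + 1e-9)

viewer = views[1]                          # recommend for viewer 1
scores = viewer @ sim                      # similarity-weighted scores
scores[viewer > 0] = -np.inf               # exclude titles already watched
print("recommended:", titles[int(np.argmax(scores))])
```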
The next chapter looks at AI deployments in broadcast and media.
Artificial Intelligence Deployments in Broadcast & Media
According to IABM data, media technology buyers plan to apply AI mainly to content management and distribution workflows (see chart below) although a large share of respondents said that they plan to use AI throughout the content supply-chain.
Content management was the most popular category for likely deployment of AI (40%) with content distribution being a close second (37%). Post-production came in at the bottom with only 27% of end-users likely to deploy AI technology in this category.
It is no surprise that AI can be effectively applied to content management systems to automate routine, repetitive tasks such as metadata tagging, image recognition and speech-to-text – indeed, an increasing number of suppliers incorporated these functionalities into their product offerings at IBC 2017.
With regard to metadata tagging in particular, end-users see this as strategically vital: it allows them to build an increasingly granular database of their content to compete with new players such as Netflix and Amazon, which have data at the heart of their strategies.
It is surprising to see monetization so low on the list. However, AI insights derived from content management systems can also drive monetization workflows – as shown by some of the use cases below – as well as streamline the production process.
Content distribution is another hot area of application, with end-users planning to leverage the potential of AI to eliminate the ‘heavy-lifting’ needed to distribute content to an increasing number of platforms and devices while personalizing the customer experience. In this regard, automatic transcoding/re-packaging of content to suit the needs of different devices and platforms is an important application.
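As a simplified illustration of that ‘heavy-lifting’ being automated – a sketch that assumes ffmpeg is installed, with a made-up rendition ladder rather than any supplier’s actual workflow – a script could generate device-appropriate renditions from a single source file:

```python
# Sketch: generate multiple device-appropriate renditions of one source file
# with ffmpeg (assumed to be on the PATH). The rendition ladder is illustrative.
import subprocess

RENDITIONS = [
    {"name": "1080p", "height": 1080, "v_bitrate": "5000k"},
    {"name": "720p",  "height": 720,  "v_bitrate": "3000k"},
    {"name": "480p",  "height": 480,  "v_bitrate": "1200k"},
]

def transcode(source: str) -> None:
    for r in RENDITIONS:
        out = f"{source.rsplit('.', 1)[0]}_{r['name']}.mp4"
        cmd = [
            "ffmpeg", "-y", "-i", source,
            "-vf", f"scale=-2:{r['height']}",      # keep aspect ratio, even width
            "-c:v", "libx264", "-b:v", r["v_bitrate"],
            "-c:a", "aac", "-b:a", "128k",
            out,
        ]
        subprocess.run(cmd, check=True)

if __name__ == "__main__":
    transcode("mezzanine.mp4")   # hypothetical source file name
```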
While production and post-production were lower on the list, we will see how AI can do a lot to automate parts of these workflows as well.
Content Management & Monetization
This is the natural area of application for AI technology. Although the unstructured nature of video and audio data makes it more difficult to classify, advances in techniques such as image, emotion and speech recognition have enabled media technology buyers to increasingly rely on AI tools to organize and search their content archives.
The process of tagging content is highly manual and expensive, so it is a natural candidate for automation with AI technology. By increasing the level of detail of the metadata, search on content management systems becomes more precise, thus boosting monetization opportunities – e.g. content performance indicators can be leveraged in advertising sales. According to SVG, NASCAR Productions owns 500,000 hours of content and 3 million assets, with only 9.5 million tags. Chris Witmayer, director of broadcast, production and new media technology at NASCAR Productions, said of this issue:
“Although we have an entire archive that goes back to the 1930s, we can’t actually find anything efficiently. If you can’t find anything, you can’t sell it, and you can’t make money. So this [AI] is big for us.”
The richness of the metadata is thus also correlated with the number of opportunities to monetize content, in both commercial and licensing settings – again, content performance indicators can be used to justify the sale of a certain type of content to broadcasters. This view of data creation as a money-making activity is key to understanding the importance that buyers attach to AI in content management.
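To illustrate how richer metadata translates into searchability – and therefore sales – the sketch below assembles a simple asset record from image-recognition and speech-to-text results and searches it. The two recognition functions are hypothetical placeholders, not any vendor’s API; a real deployment would call an AI engine at those points.

```python
# Sketch: automated metadata tagging for an archive asset. The two recognition
# functions are hypothetical stand-ins for real image-recognition and
# speech-to-text services; in production they would call an AI engine or API.
from dataclasses import dataclass, field

def recognize_objects(video_path: str) -> list[str]:
    # Placeholder: a real system would sample frames and label them.
    return ["race car", "pit lane", "crowd"]

def transcribe_speech(video_path: str) -> str:
    # Placeholder: a real system would run speech-to-text on the audio track.
    return "and the leader comes into the pit on lap forty two"

@dataclass
class AssetRecord:
    asset_id: str
    path: str
    tags: list[str] = field(default_factory=list)
    transcript: str = ""

def enrich(asset: AssetRecord) -> AssetRecord:
    asset.tags = recognize_objects(asset.path)
    asset.transcript = transcribe_speech(asset.path)
    return asset

def search(archive: list[AssetRecord], term: str) -> list[str]:
    term = term.lower()
    return [a.asset_id for a in archive
            if term in a.transcript.lower() or any(term in t.lower() for t in a.tags)]

archive = [enrich(AssetRecord("race_1998_04", "race_1998_04.mxf"))]
print(search(archive, "pit"))   # richer metadata -> assets become findable
```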
The number of companies providing broadcasters and content owners with AI tools to tag their content efficiently is already quite high. A leading provider in this area is GrayMeta, which was named in a list of the world’s 50 most promising startups this year. GrayMeta provides AI tools to create metadata from media companies’ existing archives. More specifically, it provides an end-to-end platform for metadata extraction and creation that can connect to various data sources and automatically create metadata.
As with content production and post-production, AI technology providers specializing in metadata extraction often find that they can serve other verticals, such as law enforcement. Techniques used in metadata extraction – image recognition, speech-to-text and object recognition – can also be applied to law enforcement’s digital archives of evidence. More specifically, in GrayMeta’s case, law enforcement agencies can correlate security footage with information contained in their databases, police reports or any other evidence they have available.
Another leading provider in the content management area is Veritone, which optimizes video discovery and analysis through AI tools in a cloud-based environment. Veritone signed a deal with Cumulus Radio, a US radio station group, at the end of 2016 for near real-time audio capture, indexing and analytics across a selected group of Cumulus’ terrestrial stations. More specifically, Veritone’s cognitive algorithms are used by the radio broadcaster to index and package audio content within a cloud-based system, making it more easily searchable. Analytics on the audio content are then generated to measure advertising performance – a clear example of AI organizing programming to create monetization opportunities.
This use case also highlights the marriage between AI and cloud technology, with the latter – in the case of public cloud – giving users the flexibility to scale up and down with the quantity of information processed by the AI engines, without the need to build a dedicated data center. The increased accessibility of high volumes of computing power in turn makes AI tools more attractive – in contrast to the constraints highlighted earlier in the chapter on the history of AI.
The transition to cloud-based infrastructures is also important for AI development, as it gives broadcasters and content owners the ability to track their operations – again, data harvested in the cloud can be mined by AI engines. In October 2017, Veritone also signed a deal with iHeart Media – another US radio station group – to provide a similar solution to over 200 of its radio stations. Veritone has other broadcast customers such as Fox Sports Brasil, ESPN and CNBC. In the case of Fox Sports Brasil, Veritone deployed its platform on-premises rather than in the cloud.
Like GrayMeta, Veritone also serves other industries such as government, legal and compliance – a natural extension for metadata extraction and creation technology.
Some media technology suppliers have partnered with these companies to provide their customers with AI tools. For example, media technology suppliers such as Dalet and Wazee Digital partnered with Veritone while SDI Media partnered with GrayMeta. This may represent a way forward for suppliers that want to complement their existing offerings with best-of-breed AI technology.
At NAB Show 2017, Valossa, another AI company, announced it had teamed up with Finnish broadcaster Yle to extract metadata insights from the broadcaster’s archive. More specifically, Valossa’s technology was used to segment magazine-like long-form content into shorter separate stories. Valossa’s technology was also used to visualize content in a more graphical fashion – e.g. content heatmaps.
AI-focused media technology suppliers also give broadcasters and content owners tools to monitor and review metadata for adherence to regulatory standards. For example, Sky partnered with RAVN Systems to automate the review of its EPG (Electronic Program Guide) and flag irregularities against Ofcom (the UK’s communications regulator) rules.
Another commonality between these companies is the provision of APIs to enable broadcasters and content owners to integrate search and indexing functionalities into their existing media asset management systems – which are often highly bespoke products. This is a challenge for media technology buyers relying on customized databases.
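As a sketch of what such an integration might look like – every endpoint, field name and credential below is a hypothetical placeholder, not GrayMeta’s, Veritone’s or any MAM vendor’s actual API – an asset could be submitted to an analysis service and the returned tags written back into the MAM:

```python
# Sketch: pushing an asset to a (hypothetical) metadata-extraction API and
# writing the returned tags back into an existing MAM via its own REST API.
# All endpoint URLs, field names and the API key are illustrative assumptions.
import time
import requests

AI_API = "https://ai-vendor.example.com/v1"      # hypothetical vendor endpoint
MAM_API = "https://mam.example.com/api"          # hypothetical in-house MAM
HEADERS = {"Authorization": "Bearer <API_KEY>"}  # placeholder credential

def index_asset(asset_id: str, media_url: str) -> None:
    # 1. Ask the AI service to analyse the asset.
    job = requests.post(f"{AI_API}/analyze", json={"media_url": media_url},
                        headers=HEADERS, timeout=30).json()

    # 2. Poll until the analysis job completes (simplified: no error handling).
    while True:
        status = requests.get(f"{AI_API}/jobs/{job['id']}",
                              headers=HEADERS, timeout=30).json()
        if status["state"] == "done":
            break
        time.sleep(10)

    # 3. Write the extracted tags back into the MAM's metadata record.
    requests.patch(f"{MAM_API}/assets/{asset_id}",
                   json={"tags": status["tags"]},
                   headers=HEADERS, timeout=30)

# Example call (placeholder identifiers and URLs):
# index_asset("asset-0001", "https://storage.example.com/asset-0001.mxf")
```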
As mentioned earlier, the data created and classified by AI algorithms can then be repurposed for monetization workflows. In this regard, it is worth mentioning the use some broadcasters make of AI to optimize advertising sales.
Turner was the first broadcast and media company to apply IBM’s Watson technology to ad sales in February 2016. Turner decided to adopt Watson to develop a cognitive technology for ad sales and power its recommendation engine for advertisers. In the press release outlining the deal, Turner said that the main benefits of Watson would be the following:
- “Obtain actionable insights about advertisers and trends in their respective industries from news feeds, analyst reports and social media”
- “Analyze advertisers’ historical advertising spend to uncover alternative ad spending strategies that project forward looking business requirements and competition”
- “Learn from user feedback to provide relevant information for brand profiling and offering solutions to advertisers”
According to IBM, Watson is used by broadcast and media customers to examine social posts, online feedback and images – this is important in building audience profiles. This is a growing area of application of AI technology and one which could enable broadcasters to better compete with data-driven new media operators such as Netflix and Amazon.
Channel 4 has also been investing in AI technology. The publicly owned broadcaster created an in-house Data Planning & Analytics team in 2011 and has since been developing machine learning techniques for customer segmentation and targeted advertising.
Channel 4’s experience is particularly interesting in the context of big data technologies. With the launch of its VOD offerings, Channel 4 was challenged by the volume of data flowing from these platforms, which outstripped its processing capabilities. The broadcaster therefore decided to invest in open-source technologies such as Hive, a data warehouse tool built on Hadoop, leveraging AWS for on-demand big data processing. It has been using these big data tools in combination with visualization software such as Tableau to analyze audience data.
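As a simplified illustration of the kind of segmentation such a pipeline can feed – a hypothetical sketch, not Channel 4’s actual model; the features and number of segments are assumptions – viewers can be clustered on behavioural features exported from the data warehouse:

```python
# Sketch: clustering viewers into segments from behavioural features.
# The feature set, scaling and number of segments are illustrative assumptions.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# In practice this frame would be exported from the data warehouse (e.g. via Hive).
viewers = pd.DataFrame({
    "viewer_id":        [1, 2, 3, 4, 5, 6],
    "hours_per_week":   [2.0, 15.5, 1.0, 12.0, 0.5, 18.0],
    "vod_share":        [0.9, 0.2, 0.8, 0.3, 0.95, 0.1],   # share watched on demand
    "distinct_genres":  [2, 7, 1, 6, 2, 8],
})

features = viewers[["hours_per_week", "vod_share", "distinct_genres"]]
scaled = StandardScaler().fit_transform(features)

viewers["segment"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)
print(viewers[["viewer_id", "segment"]])
# Segments like these can then drive targeted advertising or retention campaigns.
```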
It is important to note that AI systems can in principle create much more detailed metadata than media companies are used to – particularly with regard to the performance indicators used in advertising and content sales, as highlighted by Turner and Channel 4’s experiences. Information on how content performs on different devices and platforms, as well as its popularity on social media, can enrich the portfolio of evidence used by media sales executives. This point is also extremely important with regard to content distribution and its capacity to deliver a personalized experience to viewers thanks to the richness of the data available on their preferences – i.e. the possibility of building audience profiles. We will explore this and other trends in content distribution in the next part of our analysis.