farmerswife Release v6.7 is here!

In the middle of a hot summer, we bring you a few hot features along with some other improvements and fixes in version 6.7.

Read about our most important new features and updates below.

Customize your mouse over

You can now customize the information that the Booking Mouse Over shows in the timelines. Great for those who wish to simplify their interface and see the important information at a glance. Read more.

Playful

We’ve added some new icons to our Library to make your interface more colorful and enjoyable during your day-to-day work.

Let’s play! Read more on how to add your own icons.

Let it count!

For the advanced Budget experts among you: you can now show a column called “Counter” in the Budget table, and display a Counter Element inside the Category/Account groupings of the Financial Report. It contains an auto-incremented counter for each Category/Account that actually contains any Budget Details or Actuals. The counter can be used if you need a consecutive number series for the Budget Categories and Accounts in use. Read more.

Back to the future

In case you missed this fantastic feature in our 6.6 SP2 Release… it is now possible to invoice future bookings! We are thrilled to deliver this to all of you who really needed this. Sound amazing? It is!

A server setting allows you to decide WHAT can be changed on a booking AFTER it has been invoiced: Change Status, Object, Time Report, Booking Time, Dates, Number of Days of the Booking. Read more.

Push your budget task to Cirkus!

We’ve lifted our farmerswife <> Cirkus integration to a new level! You can now use the sync between farmerswife and Cirkus to push Budget lines to Cirkus tasks. This is ideal for services you provide to your clients that you would prefer to manage in Cirkus rather than farmerswife.
You must have the farmerswife <> Cirkus integration enabled. Read more.

Make sure to insure

For rental companies, or anyone who needs to calculate a markup on Bookings, we have expanded the “Markup” option in Services so it can be used to calculate an “Insurance markup” at Booking level.

This Service can be configured as a “Pre-defined Extra” on a specific “Booking Names” template, e.g. “External Equipment”. Read more.

Are you QR ready?

We are. The Inventory Number of each Object in the Object Manager converts into a barcode/QR code and can be printed out via the Object Manager Report. Simply select one or more Objects and right-click > Reports.

You have (G)Mail!

You can now configure the Google Invites Connector to use your own Google business domain, which offers two primary benefits:

1) Invites will now show as being from your own organization instead of a generic domain.
2) No limitation on the number of invites sent out. Read more.

Consumption or Capacity Report

With the new report type in Long Form > Classes Availability Tree, you can display the percentage of usage, calculated as the number of Object Bookings relative to the total number of Class Members.
Read more.

Utilization Reports

You can now exclude weekends from the Date Selector when running a Financial Report on Objects, and from the Financial Lines In Range Date Selector. This is useful if you want to run Financial Reports on utilization/profit over a period of time (from selected Start/End dates). Simply tick the new “Exclude Weekends” box so the utilization/profit figures are more accurate. Read more.
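The effect of an option like this can be sketched in a few lines of Python (a hypothetical illustration of the counting logic, not farmerswife’s implementation):

```python
# Sketch: counting billable days in an inclusive date range while
# skipping weekends, as an "Exclude Weekends" report option would.
from datetime import date, timedelta

def weekdays_between(start, end):
    # Inclusive count of Monday-Friday days between start and end.
    days = 0
    current = start
    while current <= end:
        if current.weekday() < 5:  # 0=Monday ... 4=Friday
            days += 1
        current += timedelta(days=1)
    return days

print(weekdays_between(date(2021, 8, 2), date(2021, 8, 8)))  # full Mon-Sun week -> 5
```

A report that divides revenue or booked hours by this weekday count, rather than by the raw calendar span, yields the more accurate utilization figure described above.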

The end…

You know you can always read all the nice stuff we added (including all bug fixes) in our Release Notes… but here are a few other interesting changes:

  • Web Users can be included in the “Object Permissions” list, making it easier to work with that feature. For more information click here. 
  • We improved the Projects tree feature “Auto from View Port” to now respect the “Maximum Hits When Searching” setting, limiting the number of Projects being auto-loaded. Previously it would only display 51 projects. Read more.
  • We applied the “Financially Blocked Client” feature at company level and added it to searches and the Invoice Manager, so you can better control your client limits. Read more.

For the full list of changes in this version take a look at the Release Notes.

Don’t hesitate to contact us! 

We at the farmerswife support team are happy to help you with whatever questions you might have. We are just an email or a phone call away.

Digital Nirvana’s Transcription Solutions for Financial Markets

Overview

Transcription is a critical tool in today’s financial industry, and it’s not just for earnings calls. There are all sorts of reasons to get conversations transcribed — shareholder meetings, TV and radio interviews, financial podcasts, meetings between analysts, and internal discussions among board members. Having a transcript gives stakeholders a searchable record to refer to, whether tomorrow or years into the future.

Client: One of the largest financial data providers in the industry was looking to outsource to a new transcription provider. The company needed fast, accurate and reliable transcripts that analysts and others could use for performing competitive analysis and offering investment advice.

Industry Challenges

Notably, financial analysts rely on these transcripts to build their economic models and create research reports that get distributed to their clients, who in turn use the information to make investment decisions. If one number goes wrong or one decimal is out of place in the transcript, it can mean hundreds of millions of dollars get moved to the wrong spot, and legal liabilities come into play.

With so much riding on them, the transcripts must be precise and detailed, so not just any transcription solution will do. Not only that, but the sheer volume of transcription — earnings calls alone can total at least 500 quick-turn transcripts per day at the height of the quarter — requires scalable infrastructure and specialized transcriptionists who understand the terminology and can handle the volume without sacrificing accuracy or detail, or blowing the budget. Few transcription providers are up to the task.

The new transcription solution had to address the following criteria:

  • Seasonal volume adjustment: the ability to scale up and down as seasonal variations dictate.
  • Infrastructure: the technology and processes to record calls reliably and the scalability to record hundreds of calls at the same time.
  • Quality: accurate transcripts across all seasonal volumes.
  • TAT: maintain a streaming turnaround time (TAT) of less than two minutes and final TAT of three hours, even during peak periods.
  • Cost: keep the project cost within the allocated budget.
  • Flexibility: the ability for the client to change, cancel, or add calls during the day as circumstances change.
  • Communication: seamless communication with the transcription service provider, who would literally work as an extension of the financial organization’s team.

Solution

Digital Nirvana began providing earnings call transcripts in 2001. In the years since, we have developed — and continually refine — the technology and processes to handle all the variables and nuances of financial transcription.

Digital Nirvana offers transcription through two AI-based avenues:

  • An automated SaaS solution called Trance that lets you manage the process yourself.
  • Full-service, turnkey transcription managed by our specialized in-house team.

You choose what's best for your business.

Trance: Build efficient, accurate transcription into your existing workflow

Leading companies in the financial space use Trance to stream real-time transcripts of earnings and corporate conference calls conducted by Fortune 500 companies. Key attributes:

Trance unites cutting-edge speech-to-text (STT) technology and other AI-driven processes with cloud-based architecture to reduce the time and cost of delivering accurate transcripts.

In an industry first, Trance automatically identifies music, silence, and other nontext audio. It provides intelligent line breaks, preventing the awkward splitting of words or dangling punctuation at the end of a line. Trance also keeps multiword proper names together. Robust presets and grammar rules ensure that lines do not start with misplaced punctuation, such as the apostrophe in a contraction. 

Trance automatically detects speaker changes, such as when the conversation goes from the CEO to the operator to the CFO. The algorithm automatically inserts line breaks to account for the change in voices.

An advanced, AI-driven STT engine trained in the language of finance makes it possible to generate highly accurate, industry-specific content in real time. Through machine learning, this automated speech recognition engine will only improve over time.

The service scales up to 400 hours of live transcription per day during the earnings-release season and scales down to fewer hours on other days in a calendar quarter.

Outsource your transcription to Digital Nirvana

Many financial organizations choose Digital Nirvana to handle the real-time transcription function from beginning to end, which allows them to eliminate dedicated teams working on transcription.

Digital Nirvana’s finance operations team builds and maintains a calendar of earnings calls for as many as 4,500 of the public companies most followed by investors and competitors. This effort alone is of critical importance in helping users track which companies are reporting at which times and how to access each call (phone number, webcast login, etc.).

Once contracted to cover a call, the team uses an internal version of Trance to generate real-time transcripts. Digital Nirvana relies on a stable pool of specialized transcriptionists who understand the numbers and terminology to review the transcripts and ensure their accuracy.

At the height of the earnings-reporting period each quarter, Digital Nirvana’s transcription infrastructure scales up to handle at least 500 earnings calls per day, recording those calls, using speech-to-text to transcribe them, and then having a human edit each one — and turning it all around in three to six hours to ensure timely reporting that keeps up with the markets and investment cycles.

Because this financial organization preferred not to live-stream direct speech-to-text output to analysts, Digital Nirvana worked with the company to develop what we call a near-live transcript. It works like this:

For the first version of the transcript, a speech-to-text engine generates a transcript in real time. Digital Nirvana’s highly trained financial transcriptionists quickly review the engine’s output to correct blatant errors, insert paragraph breaks and speaker names, and generally make the transcript more readable.

Then we stream that version to the client’s chosen destination for viewing by its customers, usually within 30 seconds but up to three minutes.

The next version of the product, akin to a raw transcript that has not yet been fully edited, goes up within half an hour of the call.

From there, the Digital Nirvana team combs through the transcript in detail to correct errors, thoroughly review the numbers, add speaker tags, and apply the company’s style guide, branding, and formatting; with that done, the final edited transcript is shipped to the client’s database within six hours.

Differentiators:

This financial data provider summarized the benefits of working with Digital Nirvana this way:

1. Trust — The same earnings call can be transcribed once and sold to multiple financial organizations, but for this company, that practice presents an IP issue. They had been burned by other transcription providers that promised an exclusive transcript but clearly sold them a copy. Digital Nirvana proved that when we cover the same call for multiple companies, we have separate teams in place, each producing its own transcript of the same call. That exclusivity garnered trust.

2. Quality — Maintaining consistent quality is a challenge for many transcription providers. It is easy when volumes are low, but when volumes are high on a given day and the turnaround time doesn’t change, quality and accuracy can deteriorate. Digital Nirvana delivers financial transcripts with the highest possible accuracy, as evidenced by its Q Score methodology. Typical quality checks look for errors in grammar or spelling, but in financial transcription, an errant number has a much more significant impact than a misspelled word. So Digital Nirvana came up with a weighted average quality score that makes it possible to quantify the quality of a financial transcript. The Q Score has since become an industry standard for measuring transcript quality.
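The weighting idea can be illustrated with a small sketch. Note that the error categories, weights, and scale below are invented for this illustration; the actual Q Score methodology is Digital Nirvana’s own and is not reproduced here.

```python
# Hypothetical illustration of a weighted transcript-quality score.
# The categories and weights are invented: a numeric error is penalized
# far more heavily than a spelling slip, which is the key idea behind
# weighting financial transcripts differently from ordinary text.
WEIGHTS = {"number": 10.0, "speaker_tag": 5.0, "spelling": 1.0, "punctuation": 0.5}

def q_score(error_counts, total_words):
    # Weighted error mass, normalized by transcript length and
    # mapped onto a 0-100 scale (100 = no detected errors).
    penalty = sum(WEIGHTS[kind] * n for kind, n in error_counts.items())
    return max(0.0, 100.0 - 100.0 * penalty / total_words)

# One wrong number hurts the score far more than one misspelled word:
print(q_score({"number": 1}, 1000))    # -> 99.0
print(q_score({"spelling": 1}, 1000))  # -> 99.9
```

The design point is simply that a flat error count would rate both transcripts equally, while a weighted score surfaces the error that actually moves money.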

3. Quick and Efficient — Because of the technology and operational efficiencies, Digital Nirvana can turn complete transcripts around very quickly — usually within three to four hours compared to six hours or more with other providers.

Today, Digital Nirvana is by far this financial data company’s largest transcription provider.

Because this significant financial data provider is also a broadcast media outlet with a TV, radio, online, and podcast presence, there is potential to use Digital Nirvana’s transcription solutions to handle so much more than earnings calls.

Benefits

Speech-to-text technology and other AI-driven processes radically reduce the time and cost of delivering accurate transcripts.

  • Generates fast, automatic transcripts.
  • Provides AI-driven STT with expertise in the language and terminology of finance.
  • Makes transcripts more readable with AI-driven capabilities.
  • Delivers up to 400 hours of live transcription per day at peak season.
  • A finance operations team tracks a seasonal calendar of earnings calls.
  • With the managed service, specialized transcriptionists review transcripts and correct errors.

Closed Captions vs. Subtitles

Are you someone who loves to binge-watch foreign movies and shows? If the answer is yes, then you must be aware of the terms closed captions and subtitles.

However, have you ever thought if closed captions and subtitles are the same? Or is there a difference between closed captions and subtitles? For most people, captions and subtitles are similar; however, captions are not interchangeable with subtitles.

Subtitles translate the dialogue in a video into languages other than the spoken language, enabling audiences worldwide to consume movies and other video content without needing to know the language spoken. Subtitles show only the dialogue, not non-speech elements like sound effects.

Video subtitling has proved to be a winning move for studios and production houses looking to tap into global markets, as it makes their content accessible across geographic locations.

Captions, on the other hand, were mandated by law to reduce discrimination against the deaf and hard of hearing and allow them to experience video content just like everyone else. Captions include background audio and non-speech elements, enabling viewers to follow the story. In recent decades, captions have gained tremendous prominence: as a Forbes report notes, 80% of consumers are more likely to watch an entire video when captions are available, 69% view video with the sound off in public places, and 25% watch with the sound off in private places.

Captions and Subtitles: A quick breakdown

Captions

Captions are mandated by U.S. law for video content.

Captions include non-speech elements apart from the dialogue. They communicate all the relevant sound effects, speaker identities, and other audio essential to understanding the story.

Captions can be either open or closed. The viewer can turn closed captions (CC) off with the click of a button; the same can’t be done for open captions, as they are embedded into the video.

Subtitles

Subtitles are for users who do not understand the video language.

They translate spoken language into another language.

Subtitles do not include non-speech elements.

Subtitles are developed before a movie or show is released, as they are timed transcriptions of the audio.

Are you looking to add captions to your videos? Explore our captioning solution, Trance, and schedule a demo with experts.

Closed Captions & Subtitles: Different Purposes

After the Americans with Disabilities Act (ADA) was enacted in 1990, closed captioning for public television was made mandatory, as the National Association of the Deaf published in a report. This law requires that all public multimedia, whether in the classroom or on late-night television, must be captioned to prevent discrimination against the deaf and hard of hearing. One of the best captioning features is that it can be generated live, for example, captioning for Saturday Night Live. Another useful feature is that captioning allows viewers to change the visual display of captions. Captions are also programmed to move their placement on the screen to avoid obstructing the video being played.

Video subtitles, on the other hand, are often also called translations. In simple terms, subtitles translate the dialogue, not the other audio elements of the video. Subtitles target viewers who can hear but don’t know the language, and they have been instrumental in content localization. Even though subtitles and closed captions are different, both are represented by the same CC symbol in the media settings, allowing users to toggle them on or off.

It is also important to note that the terms video subtitling and closed captioning are used interchangeably outside the USA and Canada. In the UK, Ireland, and many other countries, the term subtitling does not distinguish between subtitles used for foreign-language translation and captioning used as an aid for the deaf and hard of hearing; the two terms are interchangeable in most parts of the world.

Want to learn more about closed captioning?

Everything You Need to Know About Closed Captioning

Studies indicate that by 2023, video will make up more than 82% of all consumer internet traffic. Keeping up with this trend, an analysis by Verizon Media found that 80% of the people who use captions aren’t deaf or hard of hearing. It further states that 80% of people are more likely to watch an entire video when captions are available; 50% said captions are essential because they watch videos with the sound off, and 1 in 3 watch videos with captions on in public settings. With the rising dependency on captions, the global captioning and subtitling solution market is projected to grow from US$263.4 million in 2019 to US$350.1 million by 2025, at a CAGR of 7.4%.

Closed captioning is the key to reaching more people, and it is a critical player in video accessibility. Having closed captions in videos boosts audience engagement, improves user satisfaction, and gives the user the power to choose how they would like to watch the video.

What is closed captioning?

Closed captioning is the textual version of the spoken part of a television program, movie, or computer presentation. It makes videos accessible to the deaf and hard of hearing, and it is encoded within the video signal.

Closed captions are not restricted to just speech; captions also include non-speech elements such as sound effects that are important to understand the video. Closed captions are usually identified as a “CC” icon on a video player.

Types of captions

Now that we know what closed captions are, let’s see how they differ from subtitles. There have been many discussions about closed captions vs. subtitles, and the two terms are often used interchangeably.

Closed captions are primarily developed and added for people who are deaf or hard of hearing, and they are identified with a “CC” icon on video players and remotes. Captions come in two forms, closed captions and open captions.

  • Closed captions are published as a sidecar file; open captions are burned into the video.
  • Closed captions can be turned on or off by the user; open captions cannot.
  • Closed captions are used for online videos; open captions are used for offline or social media videos.
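As a rough illustration of what a sidecar file contains, here is a small Python sketch that builds a caption file in the widely used SubRip (.srt) format. The timings and text are invented for this example; note how a caption cue can carry non-speech information such as “[applause]”, which a pure subtitle track would omit.

```python
# Hypothetical sketch: building a minimal sidecar caption file in the
# SubRip (.srt) format. Each cue is an index, a timing line, and the
# caption text, with cues separated by blank lines.
cues = [
    (1, "00:00:01,000", "00:00:03,500", "Welcome to the quarterly earnings call."),
    (2, "00:00:03,500", "00:00:05,000", "[applause]"),
]

def to_srt(cues):
    # Format: index line, "start --> end" timing line, caption text.
    blocks = [f"{i}\n{start} --> {end}\n{text}\n" for i, start, end, text in cues]
    return "\n".join(blocks)

print(to_srt(cues))
```

Because the cues live in a separate file rather than being burned into the picture, a player can load or ignore them freely, which is exactly what makes closed captions toggleable.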

On the other hand, subtitles assume that users can hear but do not understand the audio language. Subtitles translate the spoken audio into a language understandable by the viewer. Unlike closed captions, subtitles do not include non-speech elements of the audio, such as gestures and other sounds. Thus, subtitles aren’t the most suitable option for deaf or hard-of-hearing viewers.

History of closed captions

Before we go ahead and lay out the detailed handbook for you, let us take a step back and look at the origin of closed captions. The history of captions dates back to the 1970s, when open captions were applied before the use of closed captions. However, it wasn’t until 1972 that open captions were regularly used; the PBS show The French Chef was the first to incorporate standard open captions. Open captioning eventually led to the development of closed captioning, which was first demonstrated at a conference for the hearing impaired in Nashville, Tennessee, in 1971. This was followed by a second demonstration in 1972 at Gallaudet University, where the National Bureau of Standards and ABC showcased closed captioning embedded in a broadcast of The Mod Squad.

In 1979, the National Captioning Institute was founded, and in 1982 the institute developed a process for real-time captioning to enable captions in live broadcasting. The National Captioning Institute helped American television begin full-scale use of closed captions. Masterpiece Theater on PBS and Disney’s Wonderful World: Son of Flubber on NBC were among the first programs to be seen with closed captioning.

In the early ’90s, the Television Decoder Circuitry Act of 1990 was passed, allowing the Federal Communications Commission (FCC) to set the rules for implementing closed captioning. The Television Decoder Circuitry Act was a big step in enabling equal opportunity for those with hearing impairments. It was passed the same year as the Americans with Disabilities Act (ADA).

Why is captioning important?

Accessibility:

Statistics published by the National Institute on Deafness and Other Communication Disorders (NIDCD) state that approximately 15% of American adults (37.5 million) aged 18 and over report some trouble hearing, and about 28.8 million U.S. adults could benefit from using hearing aids. The World Health Organization’s report, Deafness and Hearing Loss, states that more than 5% of the world’s population experience hearing loss; by 2050, nearly 2.5 billion people are projected to have some degree of hearing loss, and at least 700 million will require hearing rehabilitation. With a growing population of deaf and hard-of-hearing individuals, adding closed captions makes videos accessible to them. Without closed captions, you could be losing a whole section of the audience. Not to mention, captioning is mandated by law: the ADA, passed in 1990, requires private and public businesses to ensure people with disabilities are not denied services.

Thinking about how to make your content ADA caption compliant? Schedule a demo with our experts.

Growing video use:

More video content is created and consumed in 30 days than the major U.S. television networks broadcast in 30 years. The 2021 Video Consumption Statistics show that video is the number one source of information for 66% of people. On average, people spend 6 hours and 48 minutes per week watching online videos, and 500 million people watch Facebook videos every day. Closed captioning plays a vital role in these statistics: according to the Verizon report, 50% of people prefer captions because they consume videos with the sound off, irrespective of the device, and 80% admitted that they are more likely to watch an entire video when captions are available.

Captioning makes these videos more accessible, so that even deaf and hard-of-hearing viewers can enjoy them. Captions also help people retain information better and keep their focus. As video consumption increases every day, content creators must include captioning in their videos.

Improve SEO:

Did you know that captions can improve the search engine ranking of your video? The same way search engines scan a webpage for keywords and phrases to match what the user is looking for, they scan video captions. Hence, a video with closed captions will rank better than one without. In the absence of closed captions or video transcriptions, search engines rely on video descriptions and metadata; while that helps, it is no match for videos with closed captions.

Improve user experience and average watch time:

Imagine watching a video while commuting on the subway or in an Uber; you’d want to stay aware of the surrounding sounds while watching. That would be hard with the sound on, but not with closed captions. Closed captions allow people to watch videos in sound-sensitive environments. This results in the following direct gains for broadcasters:

  • Closed captions increase the average watch time and ensure users stay engaged with the content as captions provide context to the viewer.
  • Captions are proven to be one of the most prominent factors when users are deciding to buy.

How to add subtitles to your video?

Digital Nirvana leverages two decades of speech-to-text and knowledge management expertise to deliver greater productivity, shorter turnaround times, and improved captioning speed and accuracy, all in an easy-to-use interface: Trance. Digital Nirvana’s Trance brings the AI advantage to your transcription, captioning, and translation workflows. Trance is a cloud-based, enterprise-level SaaS platform, accessible from any web-enabled computer, that auto-generates transcripts, creates closed captions, and translates text into more than 100 languages. To learn more about Trance or talk to our subject matter experts, contact us.

In Conversation with AMC Networks

In this IABM TV interview, we are joined by AMC Networks’ David Hunter (EVP, Chief Technology Officer) and Josh Berger (Senior Vice President, Media Management) to discuss the recent transformation of their supply chain. We hear what circumstances drove the requirement for this business transformation and what benefits it has generated.

We also hear how AMC Networks managed the transition to remote working and managed to deliver against all of their committed initiatives.

With rapid growth in subscriptions to the on-demand services of AMC Networks’ direct-to-consumer brands (often launching new services ahead of other networks), we hear what factors have enabled AMC Networks to achieve this. Finally, David and Josh outline AMC Networks’ roadmap as they continue their digital transformation.

In Conversation with Videon

In this IABM TV interview, we are joined by Todd Erdley, President at Videon, to hear how they are transforming and impacting the streaming market, and about the opportunities that are available.

Todd tells us about Videon’s recent funding and how they are planning to use this investment along with what’s coming up for them on the innovation side.

Finally, Todd talks to us about the Videon culture and what makes them unique.

In Conversation with Interra Systems

In this IABM TV interview, we are joined by Penny Westlake, Senior Director, Europe, at Interra Systems, to hear how COVID-19 has accelerated all aspects of media technology solutions and how Interra Systems is responding to these changes.

We hear about new products and updates from Interra Systems including the SDIoIP elements of their ORION product for real-time monitoring.

Finally, Penny talks about the latest updates on AI/ML solutions in QC and monitoring.

In Conversation with Prime Focus Technologies

In this IABM TV interview, we caught up with Nick Kaimakami, EVP and Head – EMEA and USA East at Prime Focus Technologies, to hear more about their new hybrid cloud media center in Leeds and why they chose that location.

Nick also outlines and elaborates on their Content Supply Chain offerings.

Automating the production pipeline – 5 major decision drivers

Content production is one of the most labor-intensive exercises in the media and entertainment (M&E) industry. As a result, it takes up the lion’s share of the budget too. And that is not just because of the number of people involved: inefficiencies run across production planning, time and resource management, use of technology, and content logistics (storage, transfer, review). Since the outcome is monetizable IP, security also becomes paramount, as the threat of piracy never subsides.

Over the years, various technologies have been able to successfully enable the production process while ensuring they do not in any way interfere with or stymie its core – the creative expression involving human talent. While the core has been kept intact, the overall production pipeline has undergone transformational changes for the good over the years.

When workflows moved from tape to tapeless, digitization ushered in an unprecedented era of efficiency, security, and sustainability. The manual effort involved in content logistics has been substantially reduced, files are easier to track and guard than tapes, and the elimination of hundreds of thousands of plastic tapes has reduced greenhouse gas emissions.

Technology, especially the cloud, has modernized the processing of dailies. From the age of couriers running to film laboratories with dailies footage, to direct upload of content to the lab via the cloud, today we have a DIT (Digital Imaging Technician) on set working with the cinematographer on workflow, systemization, camera settings, signal integrity, on-set color correction, and other image manipulation to ensure that the content meets the director’s creative goals and maximizes digital image quality.

With piracy eating into billions of dollars of revenue every year, television and film studios are focused on content security more than ever. Increasingly, the focus is on identifying and solving vulnerabilities so leaks do not happen in the first place, rather than merely embedding watermarks to track and monitor where a breach occurred.

If there is one area of the production pipeline that technology has significantly enabled, it is content review. Content review used to be the phase when content was most vulnerable to leaks, as many hands were involved and the content moved on tape or DVD to shared spaces like preview theaters, edit suites, and even living rooms.

Now with digital content, review can happen on the fly and in the moment, on any personal device like a smartphone or tablet, including sharing feedback as annotations, comparing before and after clips and signing off on edits, all remotely. This has led to a virtual explosion in creative collaboration globally too. With easy file transfer links even with an expiry date and watermarking of all kinds, content collaboration and review has become secure like never before.

With digital distribution supporting global syndication of content easily, content localization aspects like incorporating cultural nuances of the audiences, adhering to compliance parameters in each market, diversity inclusions, all have become critical needs of the business. This calls for greater global collaboration of talent as specialized skill sets are required.

One of the biggest possibilities today is to evolve from nurturing “islands of automation” facilitated by point solutions in the production pipeline to seamless end-to-end technology enabled secure production workflows that drive efficiencies, be they cost, time, and/or effort.

Which brings us to the question: what are the five key aspects to look for in modern production pipeline automation software? The starting point (on-set) and the finishing line (mastering) are practically unchanged, but the routes between them are diverse, and have been easy or difficult depending on how productions adapt to standardization and technology. Let me list the five major decision drivers when choosing software for automating the production pipeline. Needless to add, CLEAR Production Cloud exceeds the expectations of content producers worldwide.

1. Ease of Use

CLEAR Production Cloud is designed to be modern, responsive, and intuitive. The application empowers the user with features that allow quick initiation and management of software-aided workflows. The sleek design approach simplifies the user experience even for the not-so-tech-savvy, while still offering a high level of customization and integration.

2. Budget-friendly

CLEAR Production Cloud has the lowest Total Cost of Operations (TCOP) and offers true value for money with extensive features – Digital Dailies®, asset management, review & approval, collaborative production workflows, centralized management, powerful admin module, next generation HTML5 player, advanced playlists management, one click share, Secure Screeners, Just In Time (JIT) watermarking and many more. It is supported on all leading browsers, and Android, Apple TV, and iOS devices.

3. Time to Market and Industry Compatibility

CLEAR Production Cloud is offered on a SaaS model, so there is no upfront CAPEX, allowing the fastest time to market because you pay as you go. It can be easily integrated with all leading software products, including major point solutions used for a variety of functions.

4. Constant Innovation

CLEAR Production Cloud is continuously evolving and releasing new features and enhancements to support emerging needs as we listen to hundreds of customers, enable thousands of shows and manage millions of hours of content.

5. Availability, Security & Reliability

CLEAR Production Cloud is 100% available and offers the gold standard in M&E industry security. It is reliable and backed by industry-leading support, processes and personnel.

The good news is that’s not all. CLEAR Production Cloud has an exciting slate of features and enhancements in the pipeline that will make it even more compelling for content producers.

Here’s a sneak peek: raw camera support; camera-to-cloud instantaneous reviews; policy-based archival; tiered storage workflows – movement of files across storage tiers (UI & rules-based); next-gen Share features & functionalities; AI-driven toolsets; forensic watermark support for downloads; “Locate” feature – retrieve specific footage based on EDL or in/out points; and clip-based workflows.

Stay tuned!