Case study: The BBC Virtual Audience and CEDAR

An interview with Matthew Page and Mark MacDonald of UK Operations for BBC News

The COVID-19 pandemic has had an impact on almost everyone, and those working in the fields of radio and television production are no exception. Not only have they had to adopt new working practices, but the nature of the job has often changed because of the need for social distancing. Nowhere is this more obvious than in TV studios, from which audiences have been banned for more than a year.

One team that has overcome this is UK Operations for BBC News, which has created a system for providing an audio Virtual Audience to shows including The News Quiz, Mock The Week, Children in Need and the BAFTAs. We talked to Matthew Page, Outside Broadcast Engineering Manager at the BBC, and to sound engineer Mark MacDonald, who explained how they are bringing remote audiences into the studio and how they overcome some of the problems they encounter in doing so.

Matthew explained, “We have a team of ten outside broadcast engineers who are generally on the road, but in March 2020 all of our work disappeared. But we have various skills and we redeployed them, and we’re now recording up to five programmes a week with virtual audiences. For each show, BBC Audience Services sends out links to an audience who can watch the programme being recorded, and we hear them responding in real time as do the contributors to the programme. The members of the audience also hear themselves, which makes it a very immersive experience for them. So we’ve got hundreds of incoming audio streams from people’s homes, each adding some sort of noise – washing machines in the background, the hum of people’s laptop computers, you know the kind of thing. If they’re all summed together without any noise reduction it sounds pretty dreadful.”

“We tried cleaning up the first virtual audiences in post, but the number of hours it took to produce a show was ridiculous. We could do just one or two programmes a week, and by the end of each we all had square eyes. We’ve even increased the sizes of the audiences since then, so we had to refine the process because we’ve now got up to 1,000 incoming streams that, even when we whittle them down, generally leave an audience of about 300 to 350 sources. It’s a huge sound and, to cope with it, we created a system that we call the BBC Virtual Audience, which can handle hundreds of incoming audio streams at the same time.”

Mark picked up the story, “We handle 40 to 45 streams on each of eight PCs. Two of the team will submix on these, which together are essentially just a giant virtual audio mixer. They can mute channels, adjust levels and flag things that we need to get rid of – for example, if someone’s having a conversation in the living room and makes too much noise, we can remove that person and hook in someone else. But there’s always someone breathing really loudly or clinking a teacup, and the bigger the audience, the more stuff gets buried in the sound. The submixes then go through a CEDAR.”

“When we started with the Virtual Audience system, we didn’t really know if it would work. All ten of us would spend two days cleaning up to 150 tracks in post and turning them into a sort of cohesive sound, which was too time consuming. We borrowed a two-channel CEDAR DNS 2 from another team, and we used that for the live element, so at least the live sound was quite clean. It was amazing what the DNS 2 could do, so we then got permission to get the 8-channel DNS 8D, which was really a game changer because, at that point, it was so good that we no longer needed to do any post production work on the sound.”

“Now, the Virtual Audience system directs the first set of feeds to one computer, those are mixed and that mix goes to one channel of the DNS 8D. The next set of feeds goes to a second computer, that’s mixed and the output goes to the second channel of the DNS 8D, and so on. We then do a final mix on the eight channels that we get back from the DNS 8D.”
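The routing Mark describes can be sketched as a simple partition of the incoming feeds into eight submix groups, one per DNS 8D channel. This is only an illustration with assumed names, not the BBC's actual software:

```python
# Illustrative sketch (assumed function and variable names, not the BBC's
# system): split the incoming feeds into eight equal submix groups, one
# group per submix PC and hence per DNS 8D channel.
def route_feeds(feeds, channels=8):
    """Partition the feed list into `channels` consecutive blocks."""
    feeds = list(feeds)
    size = len(feeds) // channels
    return [feeds[i * size:(i + 1) * size] for i in range(channels)]

groups = route_feeds(range(320))          # ~320 active audience sources
assert len(groups) == 8                   # one submix per DNS 8D channel
assert all(len(g) == 40 for g in groups)  # ~40 feeds per submix PC
```

Each group is mixed down to a single signal before it reaches the noise suppressor, which is why the DNS 8D's eight channels are enough for hundreds of sources.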

“We do some simple processing before the sound goes to the CEDAR – pretty standard stuff, some gating and taking off some lows and highs. When the sound comes back from the CEDAR, we add another gate and a dynamic EQ to try and reduce the sound of breathing. The mix is then passed through a couple of different reverbs, because in a real audience the microphones are quite far away. Mock The Week actually tried playing some of the virtual audience into the room through a PA and capturing it on mics, but it didn’t work for us. So we try to push away the virtual audience subtly, but without making it sound like there has been a giant load of reverb dumped on it.”
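As a rough illustration of the gating step (with made-up threshold values, not the broadcast chain's actual settings), a simple gate mutes any sample whose level falls below a threshold, so low-level breathing and hum never reach the mix:

```python
# Minimal noise-gate sketch (illustrative values only, not the BBC chain's
# settings): samples below the threshold are muted outright.
def gate(samples, threshold=0.05):
    """Pass samples at or above the threshold; silence everything below it."""
    return [s if abs(s) >= threshold else 0.0 for s in samples]

assert gate([0.5, 0.01, -0.3, -0.02]) == [0.5, 0.0, -0.3, 0.0]
```

A production gate would add attack and release ramps rather than switching instantly, but the threshold decision is the core of it.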

“The system is so effective now that, when we’re lining up all the PCs with a tone coming through, we sometimes forget to switch the CEDAR out. Then someone complains that there’s no tone, and we insist that we definitely switched it on. So we look around, and eventually someone says, ‘ah, right, OK’ and switches the CEDAR out. After the first couple of audience events I wrote some sound notes and, at the end of them, it says, ‘bow down and worship the CEDAR’. It’s fantastic. It’s really, really amazing.”

“When the DNS 8D arrived, I read the instructions and they said that, for most applications, I could just leave it in Learn mode. I thought, who am I to argue with the box – it was doing way more than I could. And you know, I did some tests and tried to improve on it, but I couldn’t. It’s amazing in Learn mode, even if I’m not tweaking it in any way. Maybe I’ll adjust the weighting sometimes, just because I can, but mostly I just decide on the attenuation – usually about 12dB. The whole sub-mixing and mixing system’s working well for us now.”

Matthew continued, “That’s definitely where it shines. We’re sometimes doing five programmes a week, and they all take a live audience mix. Some of the programmes are live to transmitter while most of the rest are ‘next day turnaround’. So there’s no time for any post production on the sound; what Mark and the team mix on the night goes out. It works really well, and our audiences and the producers are happy. But we always want to improve what we’re doing. I had a conversation with our R&D department about six months ago, and they’re looking at machine learning too. They’re looking at detecting the actual speech, but we would want to classify things like applause or laughter and remove everything else, or even classify tea cups clinking and then remove those. But I don’t know if that’s going to be possible.”

“Moving away from the equipment, the psychology of remote audiences is also quite interesting. Our producers have found that the performers often find it easier without the studio audience. On some programmes they give the panel some virtual audience webcams to look at if they want to, and they’re still getting the audio feedback – applause and laughter and that kind of stuff – but I think they feel less inhibited.”

Mark concluded, “That’s exactly what I found too. It’s really helped the panellists and the comedians because it was flat without an audience. Now, some of them quite like being able to hear the audience clearly without having to be a few metres away from them.”

Links:

CEDAR DNS 8D dialogue noise suppressor
CEDAR DNS 2 dialogue noise suppressor

pixitmedia Introduction – Who are we?

Paul Cameron, our Global Managing Director, gives an overview of the company, our products, and the challenges we solve for our customers in the media and entertainment industry.

To find out more about our company, and how our solutions are solving challenges and changing the way businesses handle their data, visit our website.

Borderless Film Picks EVO for International Video Production

Borderless Film isn’t just a clever name; it’s the epitome of founder and producer Sejin Park’s approach to media production. His South Korean production company regularly works with international brands, television networks, and web platforms to create truly incredible, border-defying work.

In just the last few years, Borderless produced video content for brands like Under Armour, Hyundai, Korean Air, and Timberland. K-Pop fans might recognize Borderless’ work with artists like BTS, Blackpink, and ITZY—part of their promotional series with Spotify. Borderless also provides international video production services for YouTube Originals like K-Pop Evolution, feature films like Fiction and Other Realities (2020), and TV programs like Dramaworld.

This colossal body of work has to be ingested, edited, and sent from Seoul to clients and distributors at various destinations around the world. It’s a lot to manage, but Park’s team makes it all happen with their 8-Bay EVO media production server. Park emphasized, “Whatever we shoot—television shows, movies, commercials, and more—everything comes together through EVO.”

An EVO-lution for “K-Pop Evolution”

In 2018, Borderless Film was working on one of their most ambitious projects yet: K-Pop Evolution. The YouTube Original documentary series explores the history and meteoric rise of the K-Pop genre from the perspective of its stars. The seven-episode production combines interviews with over a dozen K-Pop artists, footage from worldwide concerts, and behind-the-scenes clips from tours and training sessions.

K-Pop Evolution | Trailer

The trailer for K-Pop Evolution, co-produced by Borderless Film.

Borderless Film began the project using external hard drives, manually transcoding footage in the cloud, and sending terabytes of data internationally via courier service. Park knew that so much footage required a better solution, and a better workflow overall.

Park discussed his video editing workflow with DV Nest, a top Korean distributor of media production equipment. After seeing EVO in action, he decided it was the right shared storage solution for his team.

“SNS EVO is a professional storage server with high performance, proven stability, and opportunities for scalability,” explained Lee Kwang Hee, CEO of DV Nest. “EVO was a great choice for Borderless Film.”

With 10Gb Ethernet ports added on, Park knew his EVO shared storage system would improve workflow and facilitate future growth. Until then, his old infrastructure had been keeping the team from accepting more post-production projects. “EVO really opens up a lot of business opportunities for me,” said Park. “I’m getting a lot more inquiries about post-production work now.”

Editing Efficiency with the EVO Suite

Borderless Film’s EVO provides more benefits to their video editors than hardware performance alone. Storage capacity and network speeds are undoubtedly important, but organization and efficiency are just as paramount for busy editing teams with quick turnaround times. And that’s where the EVO suite comes in.

The EVO suite includes the ShareBrowser media asset manager, Slingshot automations engine, and Nomad remote editing utility. Each of these powerful software tools comes included with EVO to facilitate a collaborative, efficient, and flexible workflow. “The whole EVO suite makes our workflow easier,” said Park. “I use everything: ShareBrowser, Slingshot, and Nomad.”

To organize their library of footage and project files, Park’s team uses EVO’s powerful and easy-to-use ShareBrowser media asset manager. “When I saw ShareBrowser, I really liked the way it handles video management,” said Park. “It’s great being able to tag and add custom video metadata to our footage.”

Automated International Data Transfers

Borderless Film’s international video production workflow often requires collaboration with production teams all over the globe. Thankfully, Park uses Slingshot, EVO’s included automations engine and API, to automate processes like transcoding media, scheduling backups to cloud storage, and just about anything else.

Park explained how he used Slingshot to automate massive file uploads to Amazon S3 cloud storage for K-Pop Evolution. “I had to transfer 700 GB to 1 TB of media to a partner in Toronto every day. EVO, with Slingshot, does my job automatically. I started uploading everything to the cloud on a schedule with Slingshot, and my client would download the files from there.”
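In outline, a daily batch job like the one Slingshot automates might look like the following. This is a hedged sketch using boto3; the bucket name, key layout, and helper function are hypothetical, and this is not the Slingshot API itself:

```python
# Hypothetical sketch of a scheduled daily media upload to Amazon S3.
# The key layout (one dated prefix per day) and names are assumptions
# for illustration, not how Slingshot is actually configured.
from datetime import date
from pathlib import PurePosixPath

def daily_s3_keys(filenames, day=None):
    """Map local media filenames to dated S3 keys, e.g. '2021-07-01/shoot.mov'."""
    day = day or date.today()
    return [str(PurePosixPath(day.isoformat()) / name) for name in filenames]

# The actual upload step (requires AWS credentials; shown for illustration):
# import boto3
# s3 = boto3.client("s3")
# for name, key in zip(files, daily_s3_keys(files)):
#     s3.upload_file(name, "borderless-dailies", key)
```

Run on a nightly schedule, a job like this gives the remote partner one predictable prefix per day to download from.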

“We’re a small company without any IT or tech guy,” Park added. “With EVO, you just plug it in, power it up, and it just works. Even the automations, I can set it all up myself with no IT guy. It’s really easy; I found all the information I needed on the SNS website and knowledge base.”

Better Borderless Capabilities

Borderless is a great way to describe Park’s video editing workflow with EVO, too, as Nomad, EVO’s remote editing utility, enables an efficient remote workflow anywhere in the world.

“For those days you work from home, Nomad is really important,” said Park. “It’s great for me because I don’t like carrying around those slow 2.5-inch drives anymore.”

Slingshot automatically transcodes Borderless’ media, generating proxy files for local offline editorial. Using Nomad, users can download these lightweight proxies—or their source media, if preferred—to their local workstation for remote editing. The result is a post-production workflow that is truly flexible as plans and locations change.

“It’s easy enough for me to configure everything,” Park continued. “Overnight, with Slingshot, all the footage is transcoded into proxy files. The next day in the morning, I am able to collect all the proxy files with Nomad to my laptop and do offline editing. It is very easy.”

To make their international video production workflow even more borderless, Park is excited about enhancing his team’s remote workflow with SNS Cloud VPN, the virtual private network built exclusively for EVO media servers.

While working from home or traveling abroad, Park and his team can connect to EVO remotely over SNS Cloud VPN. Once connected, they can access their entire EVO suite of workflow tools, download proxy media with Nomad, and edit online or offline from anywhere in the world.

“I trust SNS Cloud VPN more than the service my internet browser is providing,” said Park. “I’m working on trailers and edits for unreleased series on major platforms, so security is extremely important. I trust SNS.”

No NLE Limitations

Borderless Film clearly doesn’t like to be constrained, so they needed a media storage solution that worked with all of the editing tools at their fingertips. “I’m using Premiere Pro for most of my jobs. For DI [digital intermediate/color grading], conforming, mastering, and some other work, I use DaVinci Resolve,” said Park. “It was important that my storage server worked with all of these tools.”

EVO and the EVO suite are compatible with all creative applications and NLEs. Going beyond compatibility, EVO seamlessly integrates with major NLEs like Adobe Premiere Pro, Final Cut Pro, DaVinci Resolve, and Avid Media Composer. For example, ShareBrowser media asset manager is available as an in-app panel in Adobe Premiere Pro and as a workflow integration plugin for DaVinci Resolve Studio.

Users can even host the shared Resolve database on EVO to maximize collaboration in their DaVinci Resolve video editing workflow. “Hosting the Resolve shared database on EVO is great,” said Park. “For example, I have two iMac Pros accessing the same exact database for editing in DaVinci Resolve. It’s really helpful.”

International Video Production Pros

With customers and collaboration partners around the world, Park’s international video production services rely heavily on EVO’s stability and reliability. “EVO has never failed on me. It just works,” said Park. “For feature films, TV shows, remote editing projects, or anything else, it does exactly what I need it to.”

Borderless Film has used their EVO media production server for years, storing and editing countless projects. Their busy workflow will only continue to get busier, but Park has no worries about his server’s reliability and ease-of-use. “We work on big projects but we’re still a small team, and we don’t have IT staff to set everything up,” explained Park. “EVO and the SNS support team make everything really easy for us.”

With this confidence, Park thinks he’ll go forward with plans to increase their post-production capabilities as a team. Already a top international video production company, it looks like Borderless Film has a bright future ahead.

EVO media production servers are trusted by international video production teams in over 70 countries worldwide. Contact SNS to upgrade your video storage infrastructure today.

Behind The Scenes: Gold Medal Olympics’ Content Live & On Demand

With the Tokyo Olympics finally a reality, though without spectators in the stands, more people than ever are viewing the games across a variety of platforms, both fixed (TVs) and mobile (smartphones, PCs). Feeding content to a multitude of device types is never easy, even in the best of circumstances. Now factor in hundreds of hours of footage, both live and canned, with complex overlays and captioning, distributed to a global audience—some in locations still significantly affected by the COVID-19 pandemic! All told, tens of thousands of individuals and hundreds of companies are working tirelessly to pull off this broadcast feat.

At MainConcept, we are doing our part to assist with the successful delivery of video across platforms. With three decades of relentless focus on quality, performance and reliability, our codecs are now deployed by dozens of companies directly involved in the distribution of Olympics’ content. While we would love to highlight all of these amazing partners and how MainConcept codecs are involved, we’d be writing about them until well after the Olympic flame has been extinguished in Tokyo. Instead, here is what you will find from a few of them.

Avid

NBC Olympics selected Avid to provide the content production and media management platform, tools and solutions for its production of the Tokyo Olympics. NBC Olympics deployed Avid’s MediaCentral solutions to drive Tokyo-based remote and on-site workflows to generate content for linear, OTT and social media platforms for audiences in the U.S. NBC Olympics is also using Avid NEXIS shared storage, Media Composer Ultimate and the Media Composer Cloud VM option to empower its team in multiple international locations.

Avid MediaCentral, NEXIS, Media Composer and other products use various MainConcept codecs.

Dalet

Dalet has been working with customers to bring sports events back live this summer. This includes France TV and SBS in Australia using Dalet Galaxy five (including Dalet Brio servers) to manage part, or all, of their content supply chain, covering major events such as UEFA Euro 2020 and the Tour de France. The International Olympic Committee uses Dalet AmberFin transcode and standards-conversion solutions, employed during the Tokyo Olympic Games, and also relies on Dalet AmberFin for media processing within its historical archive, la Fondation Olympique pour la Culture et le Patrimoine.

Dalet Brio and Dalet AmberFin utilize multiple MainConcept SDKs to help bring live sports entertainment to life.

MOG

For over 10 years, MOG has stood out as a key player in centralized ingest solutions for the broadcast and entertainment industry and, more recently, as a global provider of cloud media production and distribution services and appliances. This includes providing installations throughout many Olympics events, and working closely with major broadcasters such as NBC, OBS and RTVE to ensure that video and audio signals are correctly controlled, processed, archived and distributed without failures or delays.

MainConcept codecs are directly integrated into MOG’s MAM4PRO and mxfSPEEDRAIL software as a useful and flexible option for a wide variety of formats.


Telestream

NBC Olympics selected Telestream as their live capture and HDR processing provider, utilizing Telestream Lightspeed Live Capture and Telestream Vantage media processing to perform a unique, mixed HDR/SDR workflow.

Telestream uses MainConcept SDKs for both Lightspeed and Vantage, as well as other MainConcept codecs throughout their portfolio.

Video production tools

MainConcept technologies are used to process most of the world’s professional video. The most popular video production tools have integrated our codecs. And these tools are used in virtually all broadcast control rooms sending Olympics’ content to our screens.

Adobe uses MainConcept codecs in many of their products including Premiere Pro CC, After Effects CC and Media Encoder. Learn about our companies’ 20-year partnership.

Blackmagic Design invited MainConcept to develop the first-ever codec plugin for their ubiquitous DaVinci Resolve Studio professional editing suite. DaVinci Resolve Studio users can now gain access to the industry-leading HEVC encoder, 8K 10-bit 4:2:2 and AS-11 UK DPP format without ever leaving the application. Learn more.

MAGIX uses various MainConcept codecs in Video Deluxe, Video Pro X, Fastcut, VEGAS Pro Suite and other Magix products.

MainConcept Goes GStreamer: Extended Codec Footprint

For almost 30 years, MainConcept® codecs have been renowned in the broadcast industry for excellent quality, impressive performance, and extensive feature sets. Since the 1990s, our components have been based on the proprietary MainConcept API (Application Programming Interface), which enables companies all over the world to integrate our codecs into their solutions. Now, we’ve also made it even simpler to use MainConcept’s best-in-class codec packages with direct integration into the GStreamer Media Framework to give you a complete encoding and transcoding pipeline.

With the MainConcept OTT Content Creation SDK for GStreamer, MainConcept extends the availability of its video and audio encoders, including HEVC and AVC, multiplexers and packaging tools. This means that GStreamer users no longer need to deploy and manage both APIs separately to work with various production formats, high-bitrate video files, or any other use case requirements.

About the GStreamer Media Framework

GStreamer is a multi-platform framework—an open industry standard that is supported by a worldwide community of engineers. The cross-platform availability of GStreamer makes it a useful framework for developers on desktop and mobile platforms to create complex multimedia workflows. This media framework consists of an API and rules for connecting codec and streaming components.

The GStreamer API was designed for developing applications, services and systems intended for encoding, decoding and streaming environments. Many media-handling libraries already implement this API. Components that use the GStreamer API are called plugins and can be connected to create complex media pipelines for encoding, transcoding, and streaming. And anybody is free to implement components with the GStreamer API to add new codecs, formats or features to GStreamer-compatible pipelines.
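As a sketch of how such a pipeline is composed, the example below builds a pipeline description from standard open-source GStreamer elements; a MainConcept deployment would substitute the SDK's own encoder and packaging plugins, whose element names are not documented here:

```python
# Illustrative sketch: composing a GStreamer transcoding pipeline
# description. All element names below are standard open-source GStreamer
# plugins; MainConcept's SDK elements would slot in the same way.
def make_pipeline(src, dst):
    """Return a gst-launch-style pipeline description string."""
    elements = [
        f"filesrc location={src}",
        "decodebin",              # demux and decode any supported container
        "videoconvert",           # negotiate a raw format the encoder accepts
        "x264enc bitrate=4000",   # AVC encode at ~4 Mbit/s
        "mp4mux",                 # wrap the stream in an MP4 container
        f"filesink location={dst}",
    ]
    return " ! ".join(elements)

# With PyGObject installed, this string feeds straight into Gst.parse_launch():
# from gi.repository import Gst
# Gst.init(None)
# pipeline = Gst.parse_launch(make_pipeline("in.mov", "out.mp4"))
# pipeline.set_state(Gst.State.PLAYING)
```

The `!` separator is GStreamer's link operator; swapping one element name for another is all it takes to change codec or container, which is exactly what makes drop-in plugin SDKs attractive.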

A standard GStreamer installation provides a C-based SDK. It comes with numerous codec and streaming components from the open-source community as well as many ready-to-use command-line tools that enable users to quickly set up encoding and decoding pipelines. GStreamer offers optional tools, test suites and codecs for download, including commercial versions.

MainConcept OTT Content Creation SDK for GStreamer Extends Your Codec Footprint

The OTT Content Creation SDK for GStreamer allows users to generate MPEG-DASH and Apple HLS streaming formats for VOD (Video on Demand) as well as live production workflows. It features both on-demand and live low-latency CMAF-DASH content creation on Windows and Linux.

The OTT Content Creation SDK for GStreamer includes MainConcept’s industry-leading AVC/H.264 and HEVC/H.265 video encoders as well as all audio encoders, multiplexers, and packaging tools to create MPEG-DASH, CMAF-DASH and HLS-compliant content. It even gives you the complete set of MPD and playlist files! The SDK’s plugins and components seamlessly integrate into GStreamer and can be fully controlled using the GStreamer API. This enables you to develop more sophisticated solutions by combining MainConcept’s state-of-the-art codec libraries with built-in or third-party GStreamer components.

Try the MainConcept OTT Content Creation SDK for GStreamer

Request your free evaluation copy of the MainConcept OTT Content Creation SDK for GStreamer which is compatible with the industry standard GStreamer API. You can also contact us to set up an initial consultation with one of our experienced Solutions Architects. And no matter your API or use case, our Professional Services team can be engaged to make sure you select and correctly implement the best solution for your organization.

xHE-AAC: A New Standard For The Audio & Video Streaming Experience

When people talk about Adaptive Bitrate (ABR) streaming, the focus is usually on video with reference to quality and bandwidth, while audio typically plays only a minor role or is entirely neglected. It’s true that video takes up considerably more bandwidth than audio, but for most productions, high-quality audio is anything but an afterthought. The common audio standard currently used is AAC—developed by the Fraunhofer Institute for Integrated Circuits (IIS) and others—which is natively supported by almost all mobile devices, tablets, Smart TVs, desktop PCs and browsers. Fraunhofer recently released AAC’s successor, xHE-AAC™, which has already been deployed on countless Android and iOS devices. Even Netflix has identified xHE-AAC as the key audio format to deliver an unrivaled audio experience to their audience.

The benefits of xHE-AAC

The team at Fraunhofer IIS addressed the pain points of today’s streaming users, content creators and service providers when they developed xHE-AAC. This new, flexible format helps to significantly reduce the bandwidth audio requires where network conditions are limited, which benefits streaming markets ranging from OTT and broadcast to education and virtual meetings and conferences. Seamless switching between various quality levels and audio bitrates makes xHE-AAC more flexible than the legacy AAC formats that preceded it. The integrated Loudness and DRC processing creates an optimized playback experience even in noisy environments or others where it is difficult to hear, preserving excellent audio quality while keeping bitrates low.

Although audio bitrates are often neglected when it comes to bandwidth savings, xHE-AAC enables a unique approach: the audio bandwidth you save with low-bitrate xHE-AAC can be used to improve the overall video quality. This saved bandwidth can often make certain areas or textures within a picture clearly visible and distinguishable again. After all, every bit counts!
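To make the trade-off concrete, here is a back-of-the-envelope calculation with illustrative figures (the rates are assumptions for the example, not measured values):

```python
# Back-of-the-envelope sketch with assumed numbers: a fixed 1,500 kbit/s
# stream budget, comparing a typical legacy AAC stereo rate with a low
# xHE-AAC stereo rate. All figures are illustrative.
def video_budget(total_kbps, audio_kbps):
    """Bandwidth left for video once audio has taken its share."""
    return total_kbps - audio_kbps

legacy = video_budget(1500, 128)  # AAC-LC stereo at a common 128 kbit/s
xhe = video_budget(1500, 32)      # xHE-AAC stereo at a low 32 kbit/s
assert xhe - legacy == 96         # 96 kbit/s reclaimed for the video encoder
```

On a constrained mobile connection, that reclaimed headroom is what lets the video encoder recover detail in textured areas of the picture.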

What sets the xHE-AAC codec apart?

The new codec supports not only xHE-AAC but also the legacy AAC formats: AAC-LC (AAC Low Complexity), HE-AAC v1 and HE-AAC v2 (High Efficiency AAC) bitstreams. The encoder creates the full AAC portfolio, and the decoder plays it back, enabling full backward compatibility with existing software and hardware products. The xHE-AAC format delivers impressive audio at bitrates as low as 12 kbit/s for stereo all the way up to 500 kbit/s for crystal-clear audio quality. A mandatory feature of xHE-AAC is Loudness Metadata processing and Dynamic Range Control (DRC), which helps to create an unrivaled listening experience by adapting the audio content’s characteristics to the actual user environment during playback. The xHE-AAC format lifts the user’s audio experience to the next level—even under low-bandwidth conditions!

Regardless of whether you are targeting speech, music or even mixed content, Fraunhofer xHE-AAC provides exceptional audio quality at extremely low bitrates.

Webinar and beta program for xHE-AAC

By collaborating with the Fraunhofer IIS team, we have extended our highly acclaimed FFmpeg product line, adding the beta MainConcept xHE-AAC Encoder Plugin for FFmpeg. Please join us live for our August 18 webinar to learn about the new xHE-AAC FFmpeg plugin. Experts from Fraunhofer and MainConcept will discuss how:

  • Mandatory support for loudness and dynamic range metadata in xHE-AAC can help you optimize the dynamic range and loudness level of a program to provide the best possible listening experience on any device and in any environment
  • The new solution enables service providers and industry professionals worldwide with varying use cases to rely on FFmpeg-based ingest and encoding frameworks
  • Audio bandwidth saved as a result of xHE-AAC’s efficiency can be used to improve the video quality
  • The solution offers maximum coding efficiency with a usable bitrate range that spans from 12 kbit/s to 500 kbit/s and above for stereo services

Register now to watch live or on demand after the event.

If you want to be an early adopter of this exciting audio format and get your hands on the MainConcept xHE-AAC Encoder Plugin for FFmpeg, apply now for our Beta Program and be a part of testing this emerging audio experience!

xHE-AAC™ is a registered trademark of Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. in Germany and other countries.

High Efficiency Image File Format (HEIF) Beta Program

It’s part of the MainConcept DNA to listen to our customers and look ahead. Our Beta program is one of the many ways we get input on products that are in development, so we can decide how best to serve the media & entertainment and broadcast industries along with others that use digital video. Our newest Beta program offering is now open for the HEIF/MIAF Ingest SDK and Viewer App.

The demand for a versatile image compression algorithm

For the last 30 years, JPEG has been one of the most popular lossy image formats. However, our modern world demands higher image resolutions, and the number of photos taken, along with their file sizes, keeps growing. This has led to demand for a more efficient and versatile image compression algorithm.

What is HEIF/MIAF?

Seeing the need for a versatile image compression algorithm, the Moving Picture Experts Group (MPEG) developed the High Efficiency Image File Format (HEIF) container which was introduced in 2013 and finalized in 2015. The Multi-Image Application Format (MIAF) is a subset of HEIF that defines additional constraints that can be used to simplify format options, alpha plane formats, profiles and levels, metadata formats and brands, and rules for how to extend the format.

At the core of HEIF is the popular HEVC/H.265 video format, which can efficiently compress a single image or several consecutive images. The result is a file up to 50% smaller than JPEG (enabled by the more efficient compression algorithm mentioned earlier), with the same or better image quality.

The primary goal of the HEIF format is to meet the current challenges around image compression and storage. Benefits of using HEIF include:

  • Better compression than JPEG (higher quality with less storage space used)
  • Options for lossless and lossy compression (so it can be used both for consumer and professional devices)
  • Built-in abilities for non-destructive editing (cropping, zooming, transparency, etc.)
  • The HEVC coding format, which provides built-in thumbnails and storage of image sequences

There is much more to HEIF than compression and storage. HEIF encoded files can include metadata on HDR color, computer vision data, depth map generation, and RAW data, all in a single file. For devices with multiple cameras, almost a given these days, the individual cameras can capture images simultaneously, then combine them into a single HEIF. Rather than sharing multiple images, one file can carry them all.

Is HEIF a common format?

When Apple announced at their 2017 Worldwide Developers Conference that their new macOS and iOS would support HEIF natively, the format received a tremendous boost. Now, it is a common format for many devices (smartphones and digital cameras) from a variety of manufacturers including Apple, Samsung and Nokia.

How does the MainConcept HEIF Ingest SDK benefit your workflow?

You can enable import of HEIF images by adopting the MainConcept HEIF/MIAF Ingest SDK, which can open a container holding an image, extract its metadata, and decode the image into an uncompressed bitmap. This uncompressed buffer can be displayed to the user or converted to any export format your application supports.
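The SDK's own API is proprietary, but the container layout it reads is the standard ISOBMFF box structure that all HEIF files share. As a minimal, hedged sketch of the first step an ingest workflow performs (the helper names here are illustrative, not part of any real SDK), the snippet below walks the top-level boxes and reports the major brand declared in the 'ftyp' box, which identifies the file as HEIF/HEIC:

```python
import struct

def parse_boxes(data: bytes):
    """Yield (box_type, payload) for the top-level ISOBMFF boxes in `data`."""
    offset = 0
    while offset + 8 <= len(data):
        # Each box starts with a 4-byte big-endian size and a 4-byte type code.
        size, box_type = struct.unpack(">I4s", data[offset:offset + 8])
        if size < 8:
            break  # size==0 (box runs to EOF) and size==1 (64-bit size) not handled in this sketch
        yield box_type.decode("ascii"), data[offset + 8:offset + size]
        offset += size

def major_brand(data: bytes):
    """Return the major brand declared in the file's 'ftyp' box, if present."""
    for box_type, payload in parse_boxes(data):
        if box_type == "ftyp":
            return payload[:4].decode("ascii")  # e.g. 'heic' or 'mif1'
    return None

# Synthetic 'ftyp' box: total size 20 bytes, major brand 'heic',
# minor version 0, one compatible brand 'mif1' (the MIAF brand).
sample = struct.pack(">I4s4sI4s", 20, b"ftyp", b"heic", 0, b"mif1")
print(major_brand(sample))  # heic
```

A real ingest library goes much further, of course: it locates the 'meta' box, resolves item references, and hands the HEVC payloads to a decoder to produce the uncompressed bitmap described above.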

How to view HEIF images

Since macOS High Sierra, you can view HEIF pictures in the Preview application on Apple laptops and desktops. You can also view them on Windows 10 by installing an add-on from the Microsoft Store or by using the beta version of the MainConcept HEIF/MIAF Viewer App.

With the MainConcept HEIF/MIAF Viewer App, you can open HEIF/HEIC files and view them either with their built-in image effects applied or as raw, unedited pictures, inspect the metadata provided with the image, and zoom in and out.

Join our Beta Community

Get early access to MainConcept technology and provide direct input to our product management and engineering teams. Plus, when the product goes mainstream, you’ll be a step ahead of your competition. Sign up to become a beta tester today. 

pixitmedia Case Study: Jellyfish Pictures

Jellyfish Pictures is the world's largest VFX and animation studio operating in a completely virtual environment. The studio's commitment to pushing the boundaries of technology allows it to scale and access global talent in a way that is unrivalled in the industry. CTO Jeremy Smith and CEO Phil Dobree explain how working with the right partners has enabled them to evolve as a studio from being locally virtual to fully global.

“At Jellyfish we’re constantly looking at how we improve our workflows to be more flexible and commercially competitive. We work on high-profile projects with some of the world’s leading studios, like Disney, HBO, DreamWorks Animation, and Netflix.

This starts with the way we build our network infrastructure and work from our storage: how to make each project run smoothly without overinvesting and without bottlenecks. There’s a tendency in post and VFX to invest based on the immediate need of the next project, and this doesn’t always match the longer-term business requirement,” comments Dobree.

Realising the vision

Jellyfish Pictures has experienced extremely rapid growth over the past few years, with projects demanding more compute and the very best talent. Phil Dobree explains, “If we had remained with the traditional on-prem model, we simply wouldn’t have been able to meet our clients’ requirements, both from a capacity and financial standpoint. Having the ability to scale up and down when needed, teamed with having access to the very best global talent, has changed the game, for not only us but for the whole industry.”

Taking the step from virtual studio to fully global virtual operation is no easy feat. Jellyfish Pictures worked with pixitmedia, Teradici, and [RE]DESIGN to realise this vision.

Teradici’s PC-over-IP (PCoIP) technology and Cloud Access Software provide the secure backbone for remote workstation sessions between artist and workstation across all configurations. Cloud Access Software generates an encrypted PCoIP pixel stream terminated by a compliant endpoint device, such as a PCoIP Zero Client, at the desk. This ensures data never leaves the Jellyfish Pictures infrastructure, and security works exactly as it would in the studio: there is no way data can be passed on, downloaded, or accessed.

[RE]DESIGN developed a packaging system that integrates the Autodesk Shotgun Production Management, Tracking and Review platform with DCC applications, enabling the studio to extend its production pipeline, workstations, and infrastructure resources across any private cloud data centre or public cloud region at the push of a button. Jellyfish production teams can manage their projects within the Shotgun interface, assign tasks to any global artist, and deliver files and software packages for execution directly within the Shotgun template. Users don’t need to manage files, as data management is automated through workflow triggers within Shotgun. Files are kept in sync from delivery to a location, to submission for reviews, to data protection and archival at the primary hub.

All content resides on pixitmedia’s award-winning storage solution pixstor, alongside the dynamic data manager ngenea, which allows files to be distributed across multiple creative hubs quickly and securely anywhere in the world, enabling global collaboration with guaranteed data efficiency.

“Jellyfish have been working with pixitmedia for nearly 10 years. They have been on this journey with us, enabling us to grow and become the studio we are today.”
Jeremy Smith, CTO, Jellyfish Pictures

Demanding workflows

Jeremy Smith, Jellyfish Pictures CTO, explains the crucial role storage performance and security play in their daily operations: “We have varying performance needs depending on the project. Most of our work is now 16-bit OpenEXR and DPX for the sequences and shots we’re working on. In all cases, we need to be able to guarantee that performance from our network.”

In the media industry, sustained performance is mission-critical. With new storage you normally start really fast, and then as the system ages fragmentation sets in, which slows the workflow. As a result, vendors would recommend, and companies would over-buy, storage on the assumption that performance would deteriorate over time.

“For the archive, it doesn’t need to be fast, so we use ‘cheap and deep’ storage. But in our live workflow we use more expensive spinning discs. Our main network is based on a Mellanox core to our clients. As long as the physical disc can spin that fast, there’s no reason why we can’t saturate that connection. There aren’t that many applications that require that level of connection.

“The key differential in working with the pixstor and ngenea solutions is that data is written to the underlying disc in a way that guarantees read and write performance. This has a massive knock-on effect for real-time tasks like editorial or grading. If you set the performance at X, you get X, and if you scale it up to Y, you get Y. The pixitmedia team makes sure it stays within our defined parameters and doesn’t deviate over time. Even when we’re at 99% capacity we’re still getting perfect performance levels.”

“None of this would have been achievable without the innovative minds at pixitmedia, who understand what the future of our industry looks like.”
Phil Dobree, CEO, Jellyfish Pictures

Secure virtualisation

“With pixitmedia’s support, we’ve been working with the relevant security auditors on delivering a secure international virtual workflow in line with TPN (Trusted Partner Network) requirements. The pixitmedia team has done some really great work to help us protect our data, and this greatly helps when getting a TPN assessment. pixstor offers multi-tenancy on its single storage fabric, creating secure containers that allow separate editorial, VFX, and other workflows to sit within one network rather than several. Each container is fully isolated, with no crossover, to prevent any data leaks.

“We can set our own rules for each container with no additional costs and no impact on performance. The data is isolated at a dedicated network level, and you allocate per client, per job, on whichever base storage you already run. Before this, you’d have one storage system for customer A, one for customer B, and so on. Now we only send the data we want worked on remotely to our artists, wherever they’re based, with all keyboard, mouse, and pen-tablet signals recorded, attributed, and returned within the same infrastructure.

“pixitmedia continues to evolve its unique pixstor environment and ngenea platform with us to make them work even harder for the most demanding 4K and CG workflows, whether on-premises or now in the cloud. pixstor Cloud’s virtual performance is the same as the native file system on our on-premises pixstor. The cost savings are very high and the potential for collaborative working is exciting!”

Dobree concludes, “None of this would have been achievable without the innovative minds at pixitmedia, who understand what the future of our industry looks like.”

pixitmedia Case Study: Cinelab London

pixitmedia Streamlines Media Workflows for High-Performance Film Scanning and Data Management

pixitmedia’s software-defined storage and data management solutions provide purpose-built infrastructures for media and entertainment organisations, and empower the creative teams of broadcasters, major film studios and creative organisations to access their data how and when they want.

pixitmedia’s storage solution, pixstor, goes beyond storage, offering the consistent, high-performance, scalable access that teams in data-intensive environments need to stay competitive. pixstor is a data-aware, scale-out NAS platform purpose-built for demanding media and entertainment requirements, offering robust management, storage and protection of media workflows across on-prem and cloud environments. pixstor also ensures data is ‘always on’ and available precisely when and where it is needed, enabling in-house and remote creative teams to be more efficient and productive.