KitPlus: Coming out from behind the camera

Simon Tillyer
KitPlus

Let’s face it, most of us are ‘behind the camera’ sorts of people, and if Covid hadn’t reared its ugly head then that’s most likely how it would have remained.

Rewind just over 12 months: the KitPlus Show in Shoreditch, London had just taken place, with a packed show floor and copious amounts of hand sanitizer on offer. If anyone had suggested then that our much-loved print magazine would be replaced in favour of a daily TV show, even temporarily, we’d probably have taken a sharp intake of breath!

As with many small companies, when Covid hit we had to make quick decisions. Do we hide under a rock until it’s all over, or create something that might actually help spread the news and messages that so many manufacturers, suppliers, experts and leaders in our industry were keen to tell? And so KitPlus TV was born.

For the last 10 years KitPlus has had a reputation at trade shows such as IBC or NAB as the company that turns up on your booth (invited, of course), does a short video and dashes off to the next booth, all filmed, edited and shared online within 12 hours. We were fortunate to have already organically grown an online video subscriber audience, and we did at least have some production and post-production skills to tap into. But wanting to be a little different, we teamed up with our good friend and streaming guru Jon Pratchett and created a virtual studio within our vMix software suite. Into this we could be teleported from the comfort of our green-screened homes and sheds, attempting to look as though we were all in the same studio, from where we would interview guests and read the news.

So, leaving our TV project there for a second, let’s digress, as we often did in 2020, to dreaming about trade shows. With NAB, CABSAT and IBC on hold, our next digression came from the IABM, whose own 2020 event was now going virtual. During the summer and autumn of last year a few event organisers had tried virtual events with varying success; webinar-style video chats were seemingly starting to wear everyone down, and the IABM came up with the #nodigitalfatigue tag.

With Jon’s help we’d learnt a lot during 2020 about virtual sets and production; in fact we immersed ourselves in it for six months, and our platform offered guests a chance to do something slightly different. Although we were still all a long way apart, we did feel close, which was nice for the conversation. But with Jon now back to doing his ‘proper’ job, the question was how to apply our lockdown learning to work with the IABM and deliver the best virtual event of the year, something we were delighted to be asked to help with.

The brief: create a virtual set to accommodate 8 hosts rotating around 21 shows with 35 guests over a three-day event. Then add an awards ceremony with a glitzy virtual stage, where all 48 nominees across 10 award categories would connect and be brought into a virtual set in which the host would introduce and speak with the winners. Neither of these must look like Zoom and, here’s the thing, it must all be streamed live as it happens with everyone (everyone!) in their own home or office - that’s hosts, guests, nominees and tech. Remote production here we come!

The brief created a few initial challenges. Firstly, anyone appearing in the virtual set must look like they are there. So, with hosts at home, we created six host presenter kits that we sent out (including green screen, broadcast mini camera, mic, lights etc.), and during the ‘rehearsal’ phase we tested each host in situ to get the best look possible, as well as connecting all 48 guests in advance to check audio and camera connections. As with KitPlus TV, we used vMix to control the whole event, with hosts and guests connecting to a central production system where the virtual set and graphics were mixed and then streamed out to the IABM BaMLive!™ platform, with recording both locally and remotely for on-demand viewing.

The preparation and planning were meticulous, along with disaster recovery and failover; after all, we were at home using domestic power and internet, so we needed to cater for the unexpected. To set the scene, one moment involved a green-screened host in Brazil interviewing three guests, one in Houston, one in New York and one in India... all in their homes, connected to a virtual studio and streamed live from a house in an English Thames Valley village on top of a hill… and that was just one show out of 21 produced over the three days.

The applause the IABM received from the first BaMLive!™ in December was very well deserved on all fronts and we’re delighted that we’re once again working on the March virtual event, creating a new set to be streamed live to the IABM’s new platform.

And whilst all this goes on, the KitPlus TV daily show, now in Season 3 with over 150 shows under our belts, continues with our news roundup show on Mondays followed by guests and kit reviews during the week. One of our missions for 2021 is to promote VPRs (video press releases) to insert into the news show. These are 30-60 second news items filmed ‘no frills’ by the manufacturer and then embedded into the news, creating a welcome break from Matt or Simon reading the news!

Editorial participation in KitPlus TV is free of charge thanks to support from those advertising around the frame, and of course to Mediaproxy, who spotted the show way back in April 2020 and wanted to assist in its development. It’s important to acknowledge that without this help and support KitPlus TV might not be here today. So a big thank you to Mediaproxy and all those companies who believed the concept to be a refreshing change from traditional marketing and got behind it. Finally, thanks as well to the voice on the end of a tech call, fortunately for him not so often now, JP.

Vimond and TV 2: Improving the Newsroom

Paul Macklin
Product Manager, Vimond IO

The lifecycle of a story in the media world today is increasingly short. Sparking the interest of your audience, gaining their attention with exciting and important stories, and being first to publish are key goals for all media organisations. As a broadcaster in a modern mass media consumption space, one has to keep up with end users’ constant demand for fresh content around the clock and hold a reputation as the first platform to report breaking news stories.

The Norwegian broadcaster TV 2 has a history of using technological innovation to empower creative talent to provide engaging viewing experiences. In early 2020, in collaboration with Vimond, TV 2 created the TV 2 Nyhetene (“TV 2 News”) mobile app, available on Android and iOS, to deliver breaking news and global events in their viewers’ local language. In contrast to other digital services from TV 2, this app features only video content.

Within the news app journalists drive the whole content creation and delivery process, operating the tools for production and the content management system behind the online video platform.

- Allowing us journalists to be part of the whole process enables a faster workflow from production to distribution without any unnecessary bottlenecks. This creates a more efficient workflow than journalists are used to from other platforms. In addition, we value that the system is cloud-based. This gives us more flexibility and possibilities.

Camilla Island, reporter in TV 2 News.

TV 2 Nyhetene app

The TV 2 news app provides access to Breaking News, Sports, Politics, Crime, International, Business, Lifestyle, Entertainment and Regional News. The app summarizes the most important recent news in its top section before proceeding to categorized news. Its navigation is inspired by the “stories” format of social media applications, enabling users to easily select the stories relevant to them.

For Breaking News the app supports live video feeds and push notifications to instantly inform the viewer of the latest events. Vimond IO allows TV 2 to create different versions of stories from their live content or content in their Media Asset Management system. All videos are produced with embedded subtitles to support video viewing without sound enabled on the device. Most news stories also have descriptive voice overs.

The TV 2 News app is designed to be a platform for getting daily updates on the latest current affairs. Journalists not only create content but also operate the CMS to curate the applications, so the tools need to be easy to use and respond to the time demands of a modern online news service. In this highly competitive space, if you’re not first, you’re last: delivering breaking news stories to viewers first is vital, and it must be done with a lean technical and operational footprint.

- In order to keep up with the ever-evolving user preferences of how they want to consume news, we need to be agile and experiment with our distribution. It may require a significant effort to change or add to an existing workflow in an established news organisation, but by utilising cloud services like Vimond IO and VIA we get the flexibility to quickly test out our theories. If successful, we can scale up and optimise the workflow.

Arild Rugsveen, Project Manager in TV 2 Digital.

Cloud native workflow

To support the TV 2 News app, Vimond provided a browser-based end-to-end cloud solution. Starting with Vimond IO, a video editor and clipping tool, TV 2 journalists can work from any location at any time to create breaking news stories.

Through IO the teams can source content both from live feeds and from files uploaded directly in the UI, and mix it with content retrieved from the TV 2 News archive. Content can be saved in multiple aspect ratios, ensuring the right fit for the right distribution point. TV 2 has taken advantage of this by creating 1:1 aspect for social media in parallel with 9:16 for the news app. Videos are also rendered at various bitrates to support different internet connections, from 3G to high-speed broadband. When editing, one can add custom graphics, images, music, video and audio transitions, and voice-overs to create visually striking content suitable for each platform.

Working in a browser-based application allows for remote work, which gives flexibility and the ability to scale teams up beyond a physical location. By using one that is also cloud native, TV 2 can replace heavy on-premises systems, hardware expenses and the need for a fixed physical location, saving cost and speeding up the process.

Once the stories are finished and edited, they are published to Vimond VIA, Vimond’s powerful CMS, where journalists curate the content for the mobile applications, keeping it current and updated. The solution also distributes content to the airport shuttle train in Oslo, with daily headlines playing in carriages for all passengers to view. Using Vimond’s tools, journalists can create and distribute video content in different formats to online platforms before their competitors, creating an experience that is part of their viewers’ daily routine.

Operational Workflow

The reporter starts by identifying source video content in TV 2’s MAM. The selected content is pushed to an AWS storage area and picked up by Vimond IO. The content is made available in IO’s library, where it becomes an asset in the video editing process. Clips are trimmed and adjusted to the output aspect ratio, and graphics and voice are added before the video is rendered in a preconfigured selection of bitrates and resolutions.
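To make the hand-off concrete, here is a minimal sketch of the MAM-to-cloud step, assuming an S3 bucket that the edit tool watches for new material. The bucket name, key layout and helper function are hypothetical; the real ingest interface is Vimond’s own.

```python
# Minimal sketch of the MAM-to-cloud hand-off: upload a source file to an
# S3 bucket for the edit tool to pick up. Bucket and key layout are
# hypothetical; the real ingest interface is Vimond's own.
import boto3

s3 = boto3.client("s3")

def push_to_ingest(local_path: str, story_id: str) -> str:
    """Upload a MAM export so it appears as a library asset downstream."""
    filename = local_path.rsplit("/", 1)[-1]
    key = f"ingest/{story_id}/{filename}"               # hypothetical key layout
    s3.upload_file(local_path, "tv2-news-ingest", key)  # hypothetical bucket
    return key

# e.g. push_to_ingest("/mam/exports/story-4711.mxf", "story-4711")
```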

When the video is rendered, the reporter adds some simple metadata and the content is pushed to the Vimond VIA CMS application. The various versions of a clip are linked, so when the reporter assigns the video to a section, all versions are available to the mobile app, which selects the appropriate version for the specific display category.
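On the app side, that selection logic can be imagined as a small function: choose by aspect ratio for the display slot, then by bitrate for the connection. This is an illustrative sketch only - the field names and policy are assumptions, not Vimond’s actual client code.

```python
# Illustrative sketch of client-side rendition selection. Field names and
# selection policy are assumptions, not Vimond's actual client code.
from dataclasses import dataclass

@dataclass
class Rendition:
    aspect: str        # e.g. "9:16", "1:1", "16:9"
    bitrate_kbps: int
    url: str

def pick_rendition(renditions, slot_aspect, bandwidth_kbps):
    """Choose by aspect ratio for the slot, then by available bandwidth."""
    candidates = [r for r in renditions if r.aspect == slot_aspect] or list(renditions)
    affordable = [r for r in candidates if r.bitrate_kbps <= bandwidth_kbps]
    # Best quality that fits; lowest-bitrate match if the connection is slow.
    pool = affordable or [min(candidates, key=lambda r: r.bitrate_kbps)]
    return max(pool, key=lambda r: r.bitrate_kbps)
```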

When breaking news occurs, the journalists on duty have predefined modules in the CMS that can be easily adjusted to the current situation. When published, the module appears on top of the front page with a running livestream, bullet points summarizing the event, and related video stories.

IP Technology for Broadcast Audio Routing Systems

As the AoIP debate continues to confuse and delight in equal measure, what is clear is that different scenarios require specific solutions. So is there an approach that encompasses open standards and existing, proven AoIP technologies to the benefit of all?

Introduction

By definition, network infrastructure (switches, routers and cables) is protocol and technology agnostic: it carries data. This is one of the primary reasons to use IP technology in a broadcast facility, as the same infrastructure can carry different formats of video and audio data. Key to developments are open standards, ensuring the widest potential future interoperability. Key to real-world installations are system requirements and technology choices driven by the application, or specific usage case. The market share of AoIP technology stacks is also an important factor to consider for interoperability. At this point on the standards adoption curve for audio, the use of licensed AoIP technology stacks provides the widest guaranteed interoperability and the greatest functionality when considering audio-specific routing requirements.

AoIP Technology and Transport Standards

SSL’s System T utilises Audinate’s Dante technology stack, including the Dante API, which manages audio routing of SSL Network I/O and over 2000 third-party AoIP products directly from the console GUI, including automatic discovery. The exact same hardware interfaces on Tempest Engines and Network I/O devices simultaneously support Dante and the transport standards – AES67 or ST 2110-30 – providing the widest possible interoperability.

Stacks of Standards

Media over IP systems can be broken down into layers, and the layers make up what is referred to as a stack. Within each layer, different standards perform different functions. At a network level these standards are managed by the IEEE and IETF. Within the broadcast industry, SMPTE and AES standards, plus the more recent AMWA specifications, have been developed for media-specific network requirements. It is commonplace for standards to use other standards (ST 2110 uses RTP), for specifications to use standards (NMOS IS-04 uses mDNS and/or DNS-SD) and for technology packages to use standards (Dante uses mDNS and DNS-SD).

AoIP Stacks of Standards

The benefit of an IP network is that technology packages and the broadcast standards-plus-specifications approach both use the same underlying network standards. To the network infrastructure, all media and control traffic is simply data. Another network benefit is that the evolution of technology and standards can be accommodated, because the underlying network standards are respected.
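As a small illustration of that shared foundation, the discovery layer used by both NMOS IS-04 and Dante is plain mDNS/DNS-SD, which can be browsed with generic tools. The sketch below uses the python-zeroconf package; the "_netaudio-arc._udp.local." service type is one that Dante devices are commonly reported to advertise, so treat it as an assumption rather than a documented guarantee.

```python
# Sketch: browsing the network for AoIP devices via mDNS/DNS-SD, the same
# discovery standards the Dante stack uses. Requires python-zeroconf.
import time
from zeroconf import ServiceBrowser, Zeroconf

class AoIPListener:
    def add_service(self, zc, type_, name):
        info = zc.get_service_info(type_, name)
        if info:
            print(f"found {name} at {info.parsed_addresses()[0]}:{info.port}")

    def remove_service(self, zc, type_, name):
        print(f"lost {name}")

    def update_service(self, zc, type_, name):
        pass  # callback required by recent zeroconf versions

zc = Zeroconf()
# "_netaudio-arc._udp.local." is assumed here; substitute any DNS-SD type.
ServiceBrowser(zc, "_netaudio-arc._udp.local.", AoIPListener())
time.sleep(5)   # give devices a moment to respond
zc.close()
```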

The choice is not between using a specific technology package or standards; the choice is where it is appropriate to use a specific technology or standard. Looking at the user requirements will inform this choice.

Other factors to consider include the total cost of infrastructure. For the audio-only section of a system, lower-cost 1GbE switches (perhaps with a 10GbE uplink) may well be suitable. Using the 10GbE or higher bandwidth ports required for video streams on every audio device would be extremely wasteful and costly. SSL have installed System T projects where over 6000×6000 Dante audio signals have been deployed on Cisco’s small business range of SG350 and SG500 series switches.
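Some back-of-envelope arithmetic shows why this works - raw payload rates only, ignoring IP and RTP packet overhead:

```python
# Back-of-envelope payload rates: why audio fits on 1GbE where video does not.
# Raw sample-rate arithmetic only, ignoring IP/RTP packet overhead.

def audio_mbps(sample_rate=48_000, bit_depth=24, channels=1):
    return sample_rate * bit_depth * channels / 1e6

one_channel = audio_mbps()                    # ~1.15 Mbps per mono channel
per_1gbe_link = int(1000 / one_channel)       # ~868 raw channels per 1GbE link

# Uncompressed 1080p60 10-bit 4:2:2 video, by comparison
# (2 components per pixel on average for 4:2:2 sampling):
video_gbps = 1920 * 1080 * 60 * 10 * 2 / 1e9  # ~2.5 Gbps per stream

print(f"{one_channel:.2f} Mbps per audio channel")
print(f"~{per_1gbe_link} raw audio channels per 1GbE link (before overhead)")
print(f"{video_gbps:.1f} Gbps for one uncompressed HD video stream")
```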

SSL’s System T

Security

The best thing about media-over-network technology is that everything can be seen by everything and the options are limitless; the worst thing is that everything is also available everywhere.

Audinate’s Dante Domain Manager (DDM) provides a security layer for the Dante technology stack. DDM acts as an authorisation server that grants routing clients access to devices to make changes. Dante Controller is a routing client that uses the Dante API; the System T control software is another.

When thinking about any IT system, security, functionality and ease of use can be considered to have a triangular relationship. A change in any one of the three factors also changes the others; for example, adding a login PIN to a mobile phone makes it slower to place a call. Any security considerations should always take into account the intended usage of the system.

There are a number of ways of using DDM with Dante devices, and thinking about IP systems in layers helps when planning how DDM may be used. Is the intention to restrict transport, to stop someone “listening to audio”? Is it to restrict control and access to make changes? Should a user be able to discover and see how a system is currently being used, but not have access to make changes? It should be noted that SSL have installed a significant number of Dante-enabled System T facilities without DDM; depending on requirements, security can be managed at a physical and/or network level. That type of security prevents access entirely unless you are the network admin or pre-authorised, whereas DDM provides a more granular approach with different user capabilities.

DDM provides the toolkit to centrally manage PTP settings and stream announcements for ST 2110 on Dante devices. The manual configuration aspects of ST 2110 require a significant level of understanding, particularly PTP parameters and the manual assignment of multicast addresses, where duplication would cause system issues. Using DDM provides a secure way to make changes. There is the added advantage that, when configuring a large system, a single interface can change the PTP settings on many devices at the same time.
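To illustrate why central management matters, the sketch below pushes one consistent PTP profile to a list of devices. The HTTP endpoint and JSON fields are invented purely for illustration - DDM’s real interface is Audinate’s own - but the principle stands: one profile applied identically, rather than per-device manual entry where a single mistyped domain number breaks synchronisation.

```python
# Hypothetical sketch: apply one PTP profile to every device in a list.
# The endpoint and JSON fields are invented for illustration; the real
# DDM interface is Audinate's own.
import requests

PTP_PROFILE = {
    "domainNumber": 0,       # every device must agree on the PTP domain
    "announceInterval": 1,   # log2 seconds, per the chosen PTP profile
    "syncInterval": -3,      # 8 sync messages per second
}

def apply_profile(manager_url: str, device_ids: list[str]) -> None:
    """Push the same PTP settings to many devices from one place."""
    for dev in device_ids:
        r = requests.put(f"{manager_url}/devices/{dev}/ptp", json=PTP_PROFILE)
        r.raise_for_status()  # surface any device that rejected the change
```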

System Engineering

Coming back to the usage requirements, consider who is performing audio routing within a broadcast facility. Obviously there are variations between facilities, but typically this splits into audio routing performed by the console operator and audio routing driven by engineering staff, usually via a routing control system.

Console routing – including connecting microphones or stagebox I/O to processing channels – across a network of audio consoles would be stored within the consoles’ recallable settings (a “showfile” in SSL System T language). To allow our clients the most flexible and future-proof installations, SSL’s approach is to deploy the console router on the network and use the same technology stack to provide and receive AES67 and ST 2110-30 streams. The console’s routing software is a routing controller of the AoIP Dante network.

AoIP Routing Storage and Recall

The Dante API includes many key features enabling the mono routing that would traditionally have been a function of TDM routing inside a processing engine. It provides automatic stream creation when routes are made and includes unicast options; both are advantageous when dealing with the relatively bandwidth-light but high-channel-count requirements of audio compared with video.

Ensuring that both the console-centric audio routing and the wider infrastructure hand-off are all performed on switches, using both an AoIP technology stack and transport standards, removes the reliance on a proprietary TDM console router. It also negates the need for hardware shuffler and combiner nodes, which are essentially TDM routers. As the network infrastructure is agnostic, when open standards mature and are adopted they can be utilised in the software platform that is the console GUI for console-driven audio routing, alongside the existing Dante routing.

Conclusion

IP routing systems have significant advantages, a key one being the flexibility of the underlying infrastructure. Networks deal with data of any format, protocol or standard, as long as it respects the IEEE and IETF standards.

As with any system design, user requirements and intended applications are the primary concern. SSL’s System T broadcast audio production environment supports Dante, ST 2110 and AES67 transport standards.

Dante provides mono audio network routing capabilities directly from the console GUI, with auto-discovery and connection management of thousands of available devices. SSL AoIP devices can transmit and subscribe to ST 2110 or AES67 streams on the same Dante interface, opening up interoperability to many more devices, including IP video systems.

With System T you can have the best of both worlds as the audio routing is performed directly on COTS network hardware, not proprietary TDM audio routing hardware.

Embracing Immersive Audio

Aki Mäkivirta
R&D Director, Genelec

The popularity of OTT broadcasting is really helping to drive the growth of immersive content, and this presents both opportunities and challenges for the broadcast audio world. Mixing in immersive allows the audio engineer to create a sense of envelopment and realism like never before, but as channel count and mix complexity increase, neutral, uncoloured studio monitoring with precise imaging becomes even more important.

At Genelec we’ve long been involved with designing monitoring systems that are scalable from stereo to surround to immersive, so here we’ll examine some of the principles of immersive audio, and some of the considerations that audio professionals need to be aware of.

The principles

Immersive audio formats not only surround the listener, they also encircle them in the height dimension. One way to understand the capability of an immersive audio system is to describe how many height layers the playback system offers. Two-channel stereo and conventional surround formats offer only one height layer, located at the height of the listener’s ears, with all loudspeakers positioned at an equal distance from the listener (in terms of acoustic delay) and playing back at the same level.

The different channel layouts for immersive formats serve several purposes. One target is to create envelopment, and a realistic sense of being inside an audio field. One height layer alone cannot create this sensation with sufficient realism, because a significant part of the listening experience is created by the sound arriving at the listener from above. So the extra height layers of a true immersive system provide this envelopment, and therefore add a significant dimension to the experience.

The second aim for immersive systems used with video is to be able to localise the apparent source of audio at any location across the picture. This is the reason why the 22.2 immersive format (pioneered by NHK in Japan) has three height layers, including the layer below the listener's ears. Since the UHDTV picture can be very large, extending from floor to ceiling, the audio system has to be able to localise audio across the whole area of the picture.

The growth of immersive

With the demand for immersive content gaining momentum at increasing speed, several systems are competing for dominance in the world of 3D immersive audio recordings. The front-runners are now the cinema audio formats, which are trying to increase their presence in the audio-only area and to enter the television broadcast market too.

Whereas the cinema industry is always searching for the next ‘wow-effect’ to lure the audience from the comfort of their homes into theatres, the growth of immersive audio has been slightly slower in the world of television. But the pace is now really picking up, with several companies studying 3D immersive sound as a companion to ultra-high definition television formats, and the International Telecommunication Union (ITU) issuing recommendations about the sound formats to accompany UHDTV pictures. In preparation for the delayed Tokyo Olympic Games, NHK has already started to deliver 8K programming, with 22.2 audio.

How many layers?

We touched on this earlier, but modern immersive formats offer two or three height layers, while current cinema formats offer two - and the emerging broadcasting formats have three or more.

One of the height layers is always at the height of the listener’s ears, and this typically creates a layout with backwards compatibility to both surround formats and basic stereo. Typically, other layers are above the listener and, as previously mentioned, layers can also be located below the listener, to enhance the sense of envelopment.

Certain encoding methods for broadcast applications can compress 3D immersive audio into a very compact data package for storage or transmission to the customer. These formats offer a very interesting advantage over fixed-channel immersive formats, since the channel count and the orientation of the presentation channels can be selected according to the playback venue or room. Essentially any number of height layers and any density of loudspeaker locations can be used - and furthermore, this density does not need to be constant.

Creating the feeds for loudspeakers dynamically from the transport format is called rendering. The compact audio transport package is decoded, and the feeds to all the loudspeakers are calculated in real time while the immersive audio is played back at the user’s location. This compact delivery format, plus the freedom to adjust and optimise the number and location of the playback loudspeakers, makes these flexible formats very exciting.
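A deliberately simplified sketch of the core idea: pair-wise amplitude panning (the principle behind VBAP) solves for the loudspeaker gains that reproduce one source direction. Real renderers handle full 3D layouts, delays and diffuse content, but the step of solving gains against known loudspeaker positions is representative.

```python
# Simplified 2D pair-wise amplitude panning (the idea behind VBAP):
# solve for the two loudspeaker gains that reproduce a source direction.
import numpy as np

def pan_2d(source_deg: float, spk_deg: tuple[float, float]) -> np.ndarray:
    """Gains for one loudspeaker pair reproducing a source at source_deg."""
    def unit(a):
        r = np.radians(a)
        return np.array([np.cos(r), np.sin(r)])
    L = np.column_stack([unit(spk_deg[0]), unit(spk_deg[1])])  # speaker base
    g = np.linalg.solve(L, unit(source_deg))                   # L @ g = source
    return g / np.linalg.norm(g)   # power-normalise for constant loudness

# Source 10 degrees left of centre, loudspeakers at +/-30 degrees:
print(pan_2d(10.0, (30.0, -30.0)))
```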

Common assumptions

Popular immersive audio playback systems typically share two assumptions about the loudspeaker layout and one assumption about the loudspeaker characteristics. Concerning layout, it is assumed that the same level of sound will be delivered to the listening location from all loudspeakers, and the time taken for the audio to travel from each loudspeaker to the listener will also be the same. If each loudspeaker in the system has similar internal audio delay, then this can be achieved by positioning each loudspeaker at an equal distance from the listening position. Otherwise, electronic adjustments of the level and delay are required to align the system.
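The required trims follow directly from geometry: delay compensates the difference in time of flight, and level the inverse-distance loss. A minimal sketch under free-field assumptions (speed of sound 343 m/s, 6 dB level drop per doubling of distance):

```python
# Level/delay alignment trims for loudspeakers at unequal distances.
# Free-field assumptions: sound speed 343 m/s, inverse-distance level law.
import math

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C

def alignment_trims(distances_m: dict[str, float]):
    """Delay (ms) and gain (dB) trims so every speaker arrives at the
    listening position at the same time and level."""
    farthest = max(distances_m.values())
    trims = {}
    for name, d in distances_m.items():
        delay_ms = (farthest - d) / SPEED_OF_SOUND * 1000.0  # hold back nearer speakers
        gain_db = 20.0 * math.log10(d / farthest)            # attenuate nearer speakers
        trims[name] = (round(delay_ms, 2), round(gain_db, 2))
    return trims

# Example: a height speaker 0.5 m closer than the front pair
# gives it roughly +1.46 ms of delay and -2.5 dB of gain.
print(alignment_trims({"L": 2.0, "R": 2.0, "TopL": 1.5}))
```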

Concerning loudspeaker characteristics, a fundamental assumption is the similarity of the frequency response for all the loudspeakers in the playback system. Sometimes this is taken to mean that all the loudspeakers in the system should be of the same make and model. In reality, loudspeaker sound is affected by the acoustics of the room in many ways. This can significantly change the character of the audio signal, so that even when the same make and model of loudspeaker is used throughout the system, the individual locations of the loudspeakers will change the audio in a way that renders each loudspeaker performance slightly different.

Getting aligned

To turn these assumptions into reality, Genelec have created a comprehensive range of Smart Active Monitors that integrate tightly with our own GLM (Genelec Loudspeaker Manager) software. This allows the creation of immersive systems of more than 80 monitors and subwoofers, making it compatible with all existing audio playback formats.

GLM 4, the newest version of GLM, takes care of the essentials of calibrating an immersive audio playback system, providing systematic and controlled monitoring. This includes the alignment of levels and time of flight at the listening location, subwoofer integration, and compensation for the acoustical effects of loudspeaker placement. This ensures that all the loudspeakers in the system deliver a consistent and neutral sound character.

For the audio engineer, this will improve both the quality of the production and the speed of the working process, allowing them to produce reliable mixes that will translate consistently to any playback medium. Additionally, one of the key requirements for immersive monitoring is to accurately maintain a standard playback level to the listener, in line with new recommendations about maintaining loudness in broadcast signals - including a definition of the SPL at the listening location. Happily, GLM’s powerful monitor control features make this a simple and repeatable process.

Using headphones

We’d always recommend in-room loudspeaker monitoring as the best method for evaluating an immersive mix, since our head, outer ear shapes and head movements provide us with a wonderful ability to localise sound sources. However, good headphones are also a useful complementary tool – particularly for mobile audio professionals working remotely in ad-hoc environments. Headphones, though, break the link to these natural mechanisms that we have acquired over our lifetime for localising sound. This causes sound presented over headphones to appear ‘inside’ our head, rather than all around us.

Fortunately, Genelec has a solution for this challenge too – in the form of our Aural ID technology. Aural ID contains the information about the user’s personal sound localisation. When we create the Aural ID for a user, we compute how their head, external ear and upper body affect and colour audio arriving from any given direction. This effect is called the Head-Related Transfer Function (HRTF), and it is unique to every user. Aural ID computationally models the acoustics of the head and upper torso, based on data extracted from a simple 360-degree smartphone video showing the user from all directions. The user’s individual HRTF is then delivered as a SOFA-format file, which can be integrated into the audio workstation’s signal processing for the headphone output. This makes the immersive headphone listening experience much more truthful and reliable, with a far more natural sense of space and direction.
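Applying the delivered file is conceptually straightforward: select the HRIR pair measured nearest to the required direction and convolve. The sketch below reads the SOFA file with the generic netCDF4 library (SOFA is netCDF/HDF5 underneath) and assumes the standard SimpleFreeFieldHRIR variable names; the file name is hypothetical and this is not Genelec’s own tooling.

```python
# Sketch: binaural rendering with a personal HRTF from a SOFA file.
# SOFA is netCDF/HDF5 underneath; the variable names below follow the
# SimpleFreeFieldHRIR convention. File name is hypothetical.
import numpy as np
from netCDF4 import Dataset
from scipy.signal import fftconvolve

sofa = Dataset("aural_id_user.sofa")             # hypothetical personal file
hrirs = sofa.variables["Data.IR"][:]             # (directions, 2 ears, taps)
positions = sofa.variables["SourcePosition"][:]  # (directions, 3): az, el, dist

def render_direction(mono: np.ndarray, azimuth: float, elevation: float):
    """Convolve a mono signal with the nearest measured HRIR pair."""
    err = (positions[:, 0] - azimuth) ** 2 + (positions[:, 1] - elevation) ** 2
    i = int(np.argmin(err))                      # nearest-neighbour direction
    left = fftconvolve(mono, hrirs[i, 0])
    right = fftconvolve(mono, hrirs[i, 1])
    return np.stack([left, right])               # 2 x N stereo for headphones
```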

This is a subject that Genelec is researching intensively, so stay tuned for more developments from us in this area.

Need help?

So, whether you’re a Genelec user or not, if you need help and advice on any aspect of immersive audio, then our free helpdesk is ready to guide you through the principles, technologies and practicalities involved in handling immersive audio content. Staffed by our team of global experts, we can advise on room layout, acoustics, loudspeaker choice and placement, dimensioning, room calibration, playback standards and the other equipment choices you may find useful in the immersive recording and mixing process.

So, for personal advice, feel free to contact us at immersive.helpdesk@genelec.com – and for a wealth of useful general information on immersive audio, please download our Immersive Solutions Guide.

Cedar Audio: committed to noise suppression, speech enhancement and audio restoration

Gordon Reid
Managing Director, CEDAR Audio Ltd

CEDAR Audio is a UK-based company committed to noise suppression, speech enhancement and audio restoration. It has focussed exclusively on these areas for more than three decades and is the recipient of numerous accolades, including an IABM Design & Innovation Award, two Cinema Audio Society Awards and an Academy Award for services to the movie industry.

When CEDAR was established in 1989, several universities were researching what soon became known as digital audio restoration – the science of removing unwanted sounds such as clicks, crackle and hiss from existing recordings. These ‘single-ended’ processes were quite different from existing noise reduction methods that encoded and then decoded the audio to limit the noise added by the medium; they attempted to identify and remove unwanted sounds that already existed in the signal without adversely affecting the wanted sound.

Early processes were limited by the state-of-the-art of digital signal processing (which had only recently been applied to audio) and the processing power of the available hardware. At the time, there were just two companies active in the field. One chose to implement all of its processes outside of real-time, thus allowing more computing power to be applied to each sample of the audio. In contrast, CEDAR chose to adopt newer, more powerful processors and to optimise its algorithms so that they could be applied in real-time. This immediately became CEDAR’s trademark; whatever we did, we did it in real-time so that the user could hear the effect of the processing as it was occurring. This is of much greater benefit than it might seem. If you can tweak a process while it’s running, you can soon identify the parameters needed to obtain optimum results. If you have to come back the following morning to listen to what you’ve done, you cannot. Real-time processing also removed the need for extensive hard disk storage, which was hideously expensive at that time.

Within a few years, it was apparent that the philosophy of real-time audio restoration was leading the company far beyond the bounds of libraries, archives and remastering for CDs and DVDs, and into areas such as broadcast, post-production and audio forensics. At the same time, solutions to other problems were being developed, and it was soon possible to remove complex buzzes, clipping distortion, timing errors between tracks, speed changes during a recording, and more.

However, neither the techniques nor the hardware of the 1990s were suitable for live broadcast because of the constraints on latency. Humans are very sensitive to any loss of synchronisation between lip movement and heard speech, and many (especially older) people are unaware of the degree to which they rely upon the former to aid comprehension. Consequently, the latency of any noise reduction process used for live broadcast has to be as close to zero as possible. The breakthrough in this area came in 2000, when CEDAR invented the digital ‘dialogue noise suppressor’ (DNS) and incorporated it within dedicated hardware so that the latency could be kept below 0.2ms at 48kHz. Although the earliest products were designed for post-production, units soon started to appear in areas such as newsrooms, reality TV, game shows and sports commentary.

Early versions of DNS were reasonably benign with regard to over-processing and the generation of unwanted artefacts, but they still required a degree of understanding and manual control to obtain optimum results. So the hunt began for a more autonomous version.

In 1994, CEDAR had released a product called the DH-1 dehisser, which used a very early implementation of machine learning to identify, track and remove the broadband noise contained within a signal. Common wisdom at that time suggested that this task was impossible without the aid of a noise fingerprint, but the DH-1 and its successors proved to be remarkably successful and remained in production until 2016. Then, in 2015, CEDAR refined its latest machine learning technology (often, but erroneously, called ‘AI’) to create the Learn capabilities of a new generation of noise suppressors that offered the performance and near-zero latency of DNS while eliminating the need for complex controls. This meant that products could be made smaller, lighter, simpler to use and lower in cost.
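CEDAR’s algorithms are proprietary, but the textbook baseline that such systems improve upon - spectral subtraction against a noise estimate - can be sketched in a few lines. Note that this generic version needs a stretch of noise-only audio to learn from, which is exactly the fingerprint requirement that the DH-1’s tracking approach avoided.

```python
# Generic spectral subtraction: the textbook baseline for broadband noise
# reduction (not CEDAR's proprietary method). Estimate the noise magnitude
# spectrum from a noise-only stretch, subtract it per STFT frame, resynthesise.
import numpy as np
from scipy.signal import stft, istft

def spectral_subtract(x, fs=48_000, noise_secs=0.5, floor=0.05):
    f, t, X = stft(x, fs=fs, nperseg=1024)
    mag, phase = np.abs(X), np.angle(X)
    # Estimate noise from an opening stretch assumed to hold no wanted signal.
    noise_frames = int(noise_secs * fs / 512)   # default hop = nperseg // 2
    noise_mag = mag[:, :noise_frames].mean(axis=1, keepdims=True)
    # Subtract, but keep a spectral floor to limit "musical noise" artefacts.
    clean_mag = np.maximum(mag - noise_mag, floor * mag)
    _, y = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=1024)
    return y
```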

Of course, no process addresses all problems, and there is still much work to be done to cope with situations such as rapidly varying noise, noise that is too highly tuned for a broadband noise reduction system, and noise that reaches or even exceeds the level of the wanted signal. There are existing solutions for each of these cases, but with trade-offs. In particular, the algorithms capable of removing high levels of noise from signals obtained in extreme environments introduce a degree of tonal change that makes their output unsuitable for broadcast. The CEDAR SE 1 Speech Enhancer (which was developed specifically for the surveillance community) uses these, but its ability to increase intelligibility is not the same thing as increasing listenability. Indeed, the two are often mutually exclusive.

So what of the future? New sources of noise and new requirements for noise suppression are forever being encountered. In 1989, nobody sat in a noisy office while talking to dozens of people worldwide using a laptop with a whirring fan as the communications device. Today, tens of millions of people do so every day, and sophisticated noise reduction and echo cancellation algorithms are running constantly on the servers (‘in the cloud’) that allow them to do so. Similarly, a telephone call made on the London Underground would be unlistenable without similar technologies being employed.

Elsewhere, perhaps as a consequence of improved delivery mechanisms, old problems are being readdressed with renewed energy, while new, speculative developments are being vigorously pursued in fields such as blind source separation. Isolating a single voice from the babble in a restaurant or club and simultaneously cleaning the resulting signal has long been deemed desirable and (perhaps) impossible, but current advances are bringing this ever closer.

To combat the ever-increasing noise in our lives, we are seeing more and more noise suppression products, whether for recording, mastering, podcasting, broadcasting, communications or security. Yet the Holy Grail remains what it has always been: a magic box that removes all unwanted sounds without human intervention, does so instantly without introducing artefacts, and leaves the wanted signal sounding totally clean but exactly as the listener originally perceived it. Is this possible? It would be unwise to say that it’s not. Today’s processes are vastly more effective than those of 30 years ago, and users nonchalantly expect results that would have seemed unlikely when CEDAR was established. A good example of this is the spectral editor, which we invented in 2002. The ability to remove, move or correct a single sound within a recording without damaging the surrounding audio was a huge breakthrough, yet there’s already a new generation of audio engineers for whom it has always existed. Arthur C Clarke once wrote that “any sufficiently advanced technology is indistinguishable from magic”. He forgot to add that, once it enters common usage, it soon becomes accepted, if not mundane.

Is your company ready to bounce back?

Neil Goatcher
Managing Director, Exhibition Freighting Ltd

It’s no secret that our industry cannot wait to get back to exhibitions and face-to-face meetings. But of course the big question is: when will this happen? Unfortunately, right now there are no definitive answers, but when the doors of exhibition centres open worldwide, are you as an individual and a company ready to push the ‘play’ button?

Companies need to plan ahead on elements such as stand design, travel arrangements, exhibition space and H&S plans, including COVID protection going forward. The key is not to wait until the last minute, as it may be too late. This doesn’t mean every item must be ordered and paid for right away, but in a lot of cases forward planning will pay dividends.

Timings have changed. With Brexit, consideration now also needs to be given to the timings and paperwork required for freight movements into Europe. Export invoices must be prepared for the freight, and the relevant customs procedures followed, including import or temporary import into the country/city where the exhibition is being held. You can do this yourself, but we would recommend turning to experts to avoid any last-minute surprises.

Some tips we can offer from Exhibition Freighting include:

  • Plan ahead - Knowing which shows you are going to, and contacting and organising them with a recognised forwarder, will save you time in the long run. Additionally, make sure you inform other key suppliers about your intention to attend the show, so that they have you in mind and you have an initial quote on everything - from contacting the stand designer to planning for team expenses such as hotels and flights.
  • Ensure you have the right paperwork - Check that you have a registered EORI number: your company VAT number registered for imports and exports into Europe. You are used to completing shipping invoices when shipping to shows in the US or Far East, but the same rules now apply to any shipment going to Europe, and that includes courier shipments.
  • Adapt to the new regulations - All wooden pallets/crates/cases must be heat treated and carry the appropriate marks to comply with ISPM15 regulations. Again, you may be used to this for the US and further afield, but this rule now applies to shipments into Europe.

Whenever we do get back to attending tradeshows in person, and that day will come, preparation is key. Putting practices in place now will ensure your entry back to the show floor runs as smoothly and successfully as possible, and we get back to making the most of the events we know and love.