Towards hybrid production

Daniel Lundstedt

Intinor


We have all seen the way that producers have been forced into creating new solutions over the last year or so. If you are like me, you will have cringed at some of them!

It used to be easy. You got your contributors into a studio, or onto the platform of a conference hall; they did their piece; they interacted with each other.

Covid meant that became completely impossible. We got used to seeing people on screen from their own homes. Sometimes it was on Zoom; sometimes – usually when it was going to be a long-term situation – something rather more sophisticated (and higher quality) was assembled.

If we now turn around and look forward, we can take what we have learnt over the past year and see how it will apply in future. I think it will fundamentally change production of both television programmes and live events like conferences or commercial presentations.

Contributors have got used to the idea that they are still valued for their expert opinion, but they do not need to waste a lot of time travelling to deliver it.

Imagine you are trying to get the CEO of a Fortune 500 company to deliver a keynote speech at your conference. In the past you had to persuade them to give up three or four days of their time, travel to your location (and maybe you had to pay the first class fares), look after them while they were there, just for that 30 minutes on stage.

Everyone is going to be happier if the CEO’s 30 minute presentation takes 30 minutes. The organiser is happy because it saves the travel and hotel costs; the CEO is happy because it takes much less time out from the real job; the planet is happier because of the eliminated carbon emissions in travel.

This will not work for everyone and every situation. I think we will find ourselves with some people in the studio or on the platform; others contributing remotely. We can call this hybrid production.

But to make this work, we have to have the right technology. Going back to our imagined conference, if you have a host and two subsidiary speakers on the platform or in the studio, but the keynote address from the CEO is on Zoom with its low resolution video, very poor audio quality and blocking, and unpredictable latency, then it is easy to guess who is not going to be happy.

For those prestigious guests, for whom Zoom will definitely not do, you can ship a kit to them. Even if you send a technician too, their expenses will be significantly less than the CEO’s!

A simple kit might include a tripod and camera, a clip-on microphone, and maybe some flat panel lights. All you need then is some means of getting the sound and pictures to wherever they need to go.

That is where Intinor comes in. We provide bonded network services to carry signals direct point-to-point. Where we stand out is that we have developed our own protocol, BRT for Bifrost Reliable Transport. Bifrost is the burning rainbow bridge between the earth and the gods – we love our Scandinavian heritage.

We developed Bifrost to give secure and fast contribution from anywhere, over broadband or multiple cellular links. It includes a lot of technology that our competitors’ products do not, like forward error correction, which hugely reduces the need for retransmission of data packets.
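The forward error correction idea can be illustrated with a toy example. This is not Intinor’s actual BRT scheme, which is proprietary; it is a minimal XOR-parity sketch showing how one extra packet per group lets a receiver rebuild a single lost packet without asking for a retransmission:

```python
# Illustrative only: a toy XOR-parity FEC scheme, not Intinor's BRT.
# One parity packet protects a group of data packets, so a single lost
# packet per group can be rebuilt without retransmission.
from functools import reduce

def make_parity(packets: list[bytes]) -> bytes:
    """XOR all packets together (padded to equal length) to form parity."""
    size = max(len(p) for p in packets)
    padded = [p.ljust(size, b"\x00") for p in packets]
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), padded)

def recover(received: dict[int, bytes], parity: bytes, group_size: int) -> dict[int, bytes]:
    """Rebuild at most one missing packet from the parity packet."""
    missing = [i for i in range(group_size) if i not in received]
    if len(missing) == 1:
        # XOR of every surviving packet plus parity yields the lost one
        received[missing[0]] = make_parity(list(received.values()) + [parity])
    return received

packets = [b"video-0", b"video-1", b"video-2"]
parity = make_parity(packets)
arrived = {0: packets[0], 2: packets[2]}          # packet 1 was lost in transit
restored = recover(arrived, parity, group_size=3)
print(restored[1])                                 # b'video-1'
```

Real FEC schemes protect against bursts of multiple losses, but the principle is the same: a little extra bandwidth up front buys predictable latency, because the receiver never has to wait a round trip for a resend.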

With our latest version we have end-to-end latency below 0.5 seconds, which makes for virtually seamless remote contributions.

So, at the simplest level, you pack a camera, tripod, microphone, lights and Intinor Direkt router into a flight case. All the recipient has to do is unpack it, plug in power and an ethernet cable and it is ready to go.

But you can be more sophisticated than that. Management of the bandwidth allows Intinor to partition off part of the signal for a VPN. That allows a remote PTZ camera to be operated from master control. Or multiple cameras to be switched at the location, again from the master control desk.

It also allows return video to be sent to the contributor, as a confidence monitor or to watch the rest of the proceedings. Return video is hard to do properly over an internet connection, as it normally requires ports to be opened in the firewall. That is beyond plug-and-play understanding and is not good practice anyway.

The Direkt series allows the user to pull the return video into the location, eliminating the need to fiddle with firewall settings. So, it performs very much as people are used to on Zoom, but in much higher quality.

These set-ups can be quickly established in a home or an office. But we also see a growing business in local hubs: studios in towns and cities that can be rented by the hour and connected over the public internet to whichever broadcaster or production company needs them.

This is real, and available today. One company using this set-up is Zest4TV, a UK-based production company.

Tom Herrington of Zest4TV said “What is great about the Intinor Direkt is that it is software controlled. So, we started with a basic SDI I/O, then added other functions like talkback, VPN and SRT as we needed them. No two jobs are the same.

“It has the features that we really need, like easy return video, or mix-minus audio to each remote user,” Herrington added. “It also gives us control over the VPN. The director is in charge of framing on a PTZ camera or switching between remote sources; the engineer can analyse connections and troubleshoot, relying on 4G to get the main link up.

“Hybrid events have been forced on us,” he admitted. “In the future, clients will be looking for it. Quality is the first concern; latency number two. Intinor gives us these, with very little trouble.”

The future of discussion and debate, whether on television or in conferences, will be hybrid. The key will be getting it right – stable, secure and fast.

KAIROS Powers Live IP Production at Mediapro for eLaLiga Santander

Mediapro is using Panasonic KAIROS, the next generation live production platform, to produce coverage of eLaLiga Santander, the official tournament of FIFA 21 in Spain.

The KAIROS IT/IP video processing platform was chosen because it was ideally suited to the workflow designed for the esports tournament.

During the pandemic, players compete from home, and remote camera feeds, along with inputs from two separate studio productions, flow directly into the KAIROS system. The output is then broadcast to platforms such as Twitch and YouTube.

The KAIROS IT/IP video processing platform offers an open software architecture for live video switching. It delivers complete input and output flexibility, resolution and format independence, maximum CPU/GPU processor utilisation, and virtually unlimited scalability through unlimited “layers” that users can freely create and move around via its intuitive and easy-to-use GUI.

“KAIROS is a switcher that stands out from all others in the market,” said Jordi Sacasa, Mixer Operator at Mediapro. “It’s a very intuitive and powerful tool for layering and scenes. We need to make many compositions and lots of PiPs, and KAIROS is the ideal switcher to be able to configure an infinite number of scenes. Live video compositing in this production allows us to be fast and solve any unforeseen events in moments.”

Having seen the capabilities of KAIROS, Mediapro is convinced that it’s the future of broadcast technology. Roberto Yelamos, Technical Director at the Barcelona Studio for Mediapro said: “The flexible nature of the system allows us to take advantage of past investment, based on SDI, while also meeting current needs, such as NDI, remote control and monitoring, remote management and configuration. It also means we are ready for the upcoming SMPTE 2110 environment.”

The new approach to post-production

Dominic Harland

CEO, GB Labs


In the old days, post-production may have been viewed as a licence to print money with post houses competing with each other for the biggest city centre facilities, the most inventive décor and the biggest espresso machines.

Even before 2020, this was definitely not the case. It had become a much more carefully controlled business, with managers expected to deliver productivity and cost-efficiency. If there was enough margin left for a cappuccino, that was a bonus.

Then a global pandemic came along, and everything changed. Travelling into city centres was no longer practical, let alone desirable. Smart editors set up suites at home. Facilities relied on remote freelancers to deliver excellent results.

As we move into the post-covid era, we will see fundamental, structural changes in the post industry. Having tasted the pleasures of working from home, more and more creative people will want to continue doing so.

It is not just the chance to take the dog for a walk in the middle of the day. It is a lifestyle choice. People no longer want to be crammed into commuter trains to London or New York (or spend three hours on the 110 freeway in Los Angeles). And it is not just time that is saved by eliminating the commute - there is a real reduction in the individual’s environmental footprint.

The post facility of the future will certainly still have a city-centre presence, but it will be much smaller. There will always be times when the client wants to attend the edit. Colorists tend to rely on carefully calibrated monitoring conditions across a range of formats which they are unlikely to have access to in their home offices.

But much more of the work will be done remotely, either in smaller facilities closer to the clients (and in areas with lower ground rents), or by individual editors and graphics designers – who may be on staff or freelance – working from their home set-ups.

This is possible because it is now practical to run many post-production functions on inexpensive off-the-shelf computers. Perhaps more importantly, the increasing spread of fast broadband makes practical the intelligent movement of the big, chunky files on which we rely.

There is a clear challenge here, though. That content needs to be moved and shared. Handling the media must be secure, and it must deliver media when it is needed. Most of all, it needs to be automatic and seamless: editors want to get on with cutting the programme; managers want to deliver to their clients on time and on budget.

Individual storage systems come from a range of vendors, suitable for a range of file formats, and a range of requirements for security and credentials. Facilities and individuals choose their storage systems based on their individual requirements, and reinvesting in a common platform is unlikely to be economically feasible.

With a central pool of storage, it is (relatively) simple to manage file naming and log-ins. But the future will demand multiple islands of storage: in remote facilities, in homes, and of course in the cloud.

What is needed is some way to make this geographically distributed, multi-format set of storage sub-systems appear as a single, common pool with a single sign-in. Users should have access to the material they need without being overwhelmed with too much unnecessary information. Finally, all files should be maintained in consistent synchronisation, and there should be no doubt about the master file, so no work is wasted on outdated versions.
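The single-pool idea can be sketched in a few lines. This is a hypothetical illustration, not GB Labs’ actual Unify Hub logic: the island names, paths and version numbers are invented, and conflicts are resolved simply by the highest version number, so there is never doubt about the master copy.

```python
# Hypothetical sketch of a single namespace over storage "islands".
# Not GB Labs' actual implementation: island names and versioning
# policy are invented for illustration.

def unify(islands: dict[str, dict[str, int]]) -> dict[str, tuple[str, int]]:
    """Map each file path to (island holding the master copy, version)."""
    pool: dict[str, tuple[str, int]] = {}
    for island, files in islands.items():
        for path, version in files.items():
            # Highest version wins, so nobody works on an outdated master
            if path not in pool or version > pool[path][1]:
                pool[path] = (island, version)
    return pool

islands = {
    "city-centre": {"promo/cut01.mxf": 3, "promo/gfx.mov": 1},
    "home-editor": {"promo/cut01.mxf": 4},        # newer edit made at home
    "cloud-s3":    {"promo/archive.mxf": 1},
}
pool = unify(islands)
print(pool["promo/cut01.mxf"])                    # ('home-editor', 4)
```

A production system must also handle credentials, partial syncs and conflicting edits, but the user-facing contract is exactly this: one path, one unambiguous master, wherever it physically lives.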

This is a challenge. You need to address connectivity, data management and access controls. The layer of automation and communication that sits below the user data must use intelligence and business rules to manage this, so it appears seamless.

Management and analytics need to be virtualised too, so that the IT manager can log on from a web browser anywhere in the world and see the state of the storage as a whole, gain metrics, troubleshoot and manage the content flows.

Those content flows are entirely dependent upon acceleration. That means not only using the latest data communication techniques to move media from A to B as quickly as possible, but also managing workflows by ensuring that the right content is pre-loaded into a remote store in time for the work to start. Hot caches of solid-state storage (Flash Cache is the GB Labs version) provide rapid access.

This is the background to our Unify Hub products. It applies intelligence to combine on-premises and cloud storage. That on-premises storage can be sub-systems from GB Labs, or it can come from any respected vendor of storage suitable for media. The translations between file formats and credentials are hidden in Unify Hub.

The storage can also be in the cloud: Unify can appear as an AWS S3 end point. So, if you have encoding or transcoding processes in the cloud, which typically depend upon S3 storage to manage the data, that can now be incorporated into the over-arching storage system.

It can also do the reverse: the result of anything processed in the cloud can be directly mapped into the unified storage. That means the output can appear at the central facility, and/or one or more remote facilities and home studios.

Of course, the big advantage of the cloud is its flexibility. Post house managers have a regular debate about whether to pitch for large chunks of business – finishing major series, for example – because the income (and prestige) generated needs to be offset against the cost of investing in new hardware to support it, hardware which may be unused after the project is completed, while still demanding storage space and air conditioning.

Processing and storage in the cloud allows you to increase your capabilities while you need it, slim down again when the project is over. Integrated remote working allows you to add new locations, like the home studios of additional freelancers, to work on the project or to handle the routine jobs while your star editors do the new job. Seamless access to shared storage means you can address a wider pool of talent.

The future will bring new challenges and new ways of working. There is no right or wrong about distributed workspaces and workflows: every business will set its own priorities. But the core functionality any system will need lies in making sure everyone has access to the right media; careful and secure synchronisation of files to avoid confusion; and simple and secure access so users can forget the underlying technology, and even where content resides, and just concentrate on making the best programme.

Unify Hub from GB Labs solves those issues. With it, you have the route to successful distributed, remote and home working.

Case Study: Building a production studio at SUNY Cobleskill to teach the next generation of video professionals

By Sarah Madio

Director of Marketing, Broadcast Pix


One of the most important aspects of any training set-up is that it provides relevant experience for students on industry-standard equipment. For the broadcast industry that can be a problem. Not only is technology in the industry moving at a rapid pace, but the educational institutes that handle an increasing amount of training nowadays aren’t always as well-resourced as they wish to be.

The result is sometimes students learning how to run, for example, a studio set-up on out-of-date equipment, which is exactly the scenario that confronted Douglas Flanagan when he started a new role as Assistant Professor, Communication at the State University of New York (SUNY) at Cobleskill.

“When I walked into the studio on my interview tour, I was greeted with an old analog switcher, three analog cameras, analog audio, no teleprompters… It was all obviously going to need a bit of help,” he says.

Luckily, Flanagan had already overseen a digital transition project at his previous employer and had specified a Broadcast Pix FX integrated production system for a project that was never realized. So when, soon after he joined the faculty at SUNY Cobleskill, he was told they were holding a fundraiser for video studio improvements, he knew exactly what he was looking for.

The Broadcast Pix system was supplied by ComTech in Melville, NY, and encompasses BPswitch live broadcasting and streaming software, an FX10 10-input server, and a 1000 (1 M/E) hardware control panel. Other studio improvements included a new multi-purpose studio set system from Uniset, new teleprompters and a multichannel audio mixer, and the New York State Lottery donated three Ikegami studio cameras with accompanying Vinten pedestals.

Hands on approach

Flanagan says that he has always felt that his job is to try and get a student ready for their first job, and the flexibility of the Broadcast Pix system perfectly supports that ethos.

“One of the great things about this is that students can go into the industry knowing that they've learned their trade on a professional system,” he says. “The system is also very malleable. We can do a traditional full-fledged studio production with a typical full student crew of individuals running the studio cameras, a floor director, audio, graphics and TD — it has the flexibility to let us do that. Plus, who knew that we would also need to explore the ability to do things with maybe just one or two people due to social distancing. Luckily, with solutions like the FX system you can pull off with two or three people what it took 10 people to do 10 years ago.”

The oblique reference there is to the Covid-19 pandemic. Having taken delivery of the system in 2019, SUNY Cobleskill Communications students were prepping their first fully-staffed practice drill, with the idea that it would go live the following week, on March 12, 2020. The campus closed due to the pandemic on March 15.

Thanks to the adaptability of the FX System, Flanagan has been able to turn even that into a teachable moment. He cut his teeth at WUTR, an ABC affiliate in Utica, New York, which still maintains a traditional production and news department, with camera operators, TDs, directors, producers, and so on. Across the street is the NBC affiliate, WKTV, which takes a far more stripped-down approach, with staff performing multiple tasks, and, as a result, sometimes there is only a single person building a show.

“It’s two very different approaches to production, but when you look at them side by side, you’d be hard pressed to tell the difference,” he says. “One of the major selling points of the Broadcast Pix system was that it could easily address, or be configured to address, the likelihood that you might, as a new employee, be one of 12 or 14 people on a crew, or you might be one of three people, and you have to run all this gear yourself.”

Enabling future plans

While producing programs with minimal staff has become the Covid-driven reality for many stations, Flanagan already has his eye on the post-Covid future.

The new donated Ikegami cameras are a step up from the previous equipment, but they’re still SD, so he’s looking at using the JVC HD cameras from the ENG courses in the studio too.

Happily, the FX system can cope with that resolution upgrade and many more planned enhancements, without breaking the budget, a typical challenge that educational establishments grapple with.

“I knew the FX would be able to do all the things we needed it to and more besides,” said Flanagan. “If we were trying to do this 10 years ago, it would have cost well over six figures. This is a system that not only does everything we need today; it gives us a platform that enables us to grow in the future, even moving to 4K.”

The way ahead for location production

Bevan Gibson

CTO, EMG


When we talk about location production, our thoughts and processes are primarily driven by sport. This has always been the genre which has seen the most innovation: more cameras delivering more angles; more replays; more graphics; new formats like 4K and HDR UHD. Sports fans want to be continually engaged and informed: they want to understand the plays as well as appreciate them.

This inevitably makes it a technology-intensive operation. A big football game might have more than 30 cameras, each one with a slo-mo replay channel attached, plus motion graphics, statistical overlays and more. Controlling all those inputs means a big production switcher, and sometimes a second truck for replays.

So we have reached the stage where outside broadcast trucks are at the very limit of size and weight to make it onto public roads, yet still present a cramped operating environment in which the production team must work under great pressure. At EMG, with sustainability in mind, we have long been looking at how we can take a completely fresh approach.

The result is our IP fly-pack system we call diPloy. This modular production system is designed to cope with the largest sports events and our original goal was the multi-sport event planned for Tokyo in 2020, now likely to take place in 2021. Built into modular racks of varying sizes in dedicated 40-foot containers, diPloy allows us to plug together the functionality we want for each particular job. Even more important, it allows us to physically place modules where they are most appropriate – which could mean some at the location, others at any distance.

There is a huge advantage in complete remote production: putting the cameras and microphones at the event, but bringing all the signals back to a central production area at your headquarters, or even to explore distributed production facilities. Most obviously, this allows you to build the control rooms for comfortable and efficient operation, not to fit inside the physical dimensions of a truck.

The central control suite can then be used intensively. A more traditional truck might cover one football game in three days: with remote production, the control room could output three games in one day. That level of productivity allows the service provider, like EMG, to invest in the latest technology to engage audiences.

How does diPloy fit into this? The endgame is that you simply shift the endpoint modules to the location and use them to transport all the signals back to base. That way, you only transport the hardware you really need – unlike today when a 30-camera truck might be used for a six-camera opera or snooker shoot.

EMG already has a diPloy central production suite operational in the Netherlands, and we will shortly enhance our remote operations centres in the UK and further across the Group with the same technology.

In the near term, we can build flexible outside broadcast units which use remote technology while still using a suitably-equipped truck. A great example was the 2021 FIS Nordic World Championships in Oberstdorf, where we provided host broadcast facilities for the cross-country skiing and ski jump competitions.

We produced from a centralised IP fly-pack based near the cross-country finish area. diPloy modules were placed around the cross-country ski course to bring cameras and microphones back to the truck. The ski jump hill was some five kilometres away, and again all the sources were brought back to the core system in the cross-country stadium.

Obviously diPloy depends upon IP connectivity and the SMPTE ST 2110 family of standards. This allows us to treat every source as an individual camera or microphone, exactly as we would have done in a traditional outside broadcast, but route them in multiplexes over dark fibre.

Critical to the diPloy architecture is the Selenio Network Processor (SNP) from Imagine Communications. This does two things for us.

First, it provides an interface to high-speed IP connectivity – up to 400 gigabit ethernet in the latest version. It also provides a time reference point in a PTP network, which makes system timing across a diverse network practical.
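The timing exchange behind PTP (IEEE 1588) reduces to simple arithmetic, assuming a symmetric network path. The sketch below uses invented timestamps; real PTP hardware timestamps packets at the network interface for much finer accuracy:

```python
# Simplified illustration of the PTP offset/delay calculation (IEEE 1588),
# assuming a symmetric path. t1: master sends Sync; t2: slave receives it;
# t3: slave sends Delay_Req; t4: master receives it.

def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2    # estimated one-way path delay
    return offset, delay

# Invented example: slave clock runs 2 ms ahead; true path delay is 0.5 ms.
t1 = 10.0000            # master time when Sync is sent
t2 = 10.0025            # = t1 + delay + offset, on the slave's clock
t3 = 10.0100            # slave time when Delay_Req is sent
t4 = 10.0085            # = t3 + delay - offset, on the master's clock
offset, delay = ptp_offset_and_delay(t1, t2, t3, t4)
print(offset, delay)    # ≈ 0.002 s, ≈ 0.0005 s
```

With every device agreeing on time to this precision, essences from cameras kilometres apart can be aligned frame-accurately in the switcher.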

Second, the SNP is a powerful processor. To be precise, each 1RU device has four separate processor chains, running on FPGAs which are software defined. These software personalities allow us to build precisely the functionality we need.

One of the main tasks is format conversion: between 4K and HD, and between standard dynamic range and the various flavours of high dynamic range. Add in the multiviewer capability, and the SNP is vital just for managing and monitoring signals.

SNP also has the ability to bridge between IP and SDI, so legacy equipment can be easily connected into the network without latency or timing errors. We have huge numbers of EVS HD replay servers, for example, and SNPs can provide the SDI I/O in an IP production – 32 channels in a single SNP.

Many more software personalities are available for SNP: it can be used as a video proc amp, for example. It can also be used as a stage box, concentrating multiple camera and audio feeds onto a single fiber for the run back to the truck or central processing area.

We are aware that recent SNP updates also activate the feature licensing system, which will enable companies like us to realize one of the key benefits of virtualised software: the ability to pay only for the functionality required for specific workflows. This means the hardware can be installed once, enabling us to buy only the features we need for the job at hand and add functionality later as needs evolve.

We have an excellent relationship with Imagine, which allows us to see what is coming up for SNP. In particular, we are excited by the prospect of JPEG XS compression in SNP, which will help us to get even more circuits from a remote location over constrained bandwidth.

Our use of the SNP reflects our ambition for diPloy. We are service providers, so we have developed a production architecture which our clients can use in precisely the way they want. Whatever facilities they need, and wherever they need to operate them, we can software configure diPloy to do it.

Within diPloy, we software configure our SNPs to provide the workflow, the signals and the access that our clients need. On both levels, it is a common architecture made infinitely flexible through the software.

Universal File Ingest For Digital Video

In the media industry, conversations usually center on encoding, streaming, packaging and low latency. Put another way, it is all about delivering superior visual quality and high-fidelity audio with better compression. Amidst all the discussions on digital video workflows, file ingest is often ignored. For example, when is the last time anyone compared different file ingest tools? It is safe to say that file ingest is one of the most undervalued steps in video production workflows today.

What is file ingest? 

File ingest is the process of reading content and detecting the codecs used inside a media file from hard disks or memory cards, and sometimes over IP networks. Ingest tools include commands for unpacking the audio and video payloads from file containers, decoding bitstreams and providing access to various types of metadata. Most importantly though, proper file ingest will equip video applications and workflows with seamless support for the vast variety of file types, video and audio codecs and container formats in the wild. File ingest is the first step in the video production process, and when the right tool is used, users can avoid reprocessing video manually and simplify the immensely complex task of reliably ingesting media from a wide range of recording devices, cameras, container formats and codecs across many use cases.

What applications require file ingest and what are the requirements? 

Applications that require file ingest range from video editing, post-production, archiving, and Media Asset Management (MAM), to transcoding frameworks and video playback devices. File ingest tools thus require a complex and ever-growing list of features to support a wide range of video use cases. For example, a typical transcoder not only needs to decode video and audio, but also needs to accurately recognize source file metadata such as aspect ratio, framerate, resolution and channel data to preserve the most system-relevant parameters for compliant video encoding.

A video editing application also requires frame-accurate seeking (i.e., scrubbing) while keeping all audio synchronized. Moreover, if a given application also supports lossless reuse of compressed input data in an output file, also called smart rendering, the file ingest needs to mix and match the original bitstream with re-encoded data while keeping frame references and group-of-pictures (GOP) structure intact. Smart rendering can significantly reduce re-encoding time and eliminate unnecessary quality loss for sections of the timeline where the source video is not overlaid by a filter or transition.
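The smart-rendering decision can be sketched as a simple segmentation over GOPs. This is a hypothetical illustration, not MainConcept’s actual algorithm: GOPs untouched by any effect are stream-copied from the source bitstream, and only GOPs that overlap an effect range are re-encoded:

```python
# Hypothetical smart-rendering planner (not MainConcept's algorithm):
# decide per GOP whether the compressed source can be copied verbatim
# or must be decoded, filtered and re-encoded.

def plan_smart_render(gop_starts: list[int], total_frames: int,
                      effect_ranges: list[tuple[int, int]]) -> list[tuple[str, int, int]]:
    """Return (action, first_frame, last_frame) for each GOP."""
    plan = []
    bounds = gop_starts + [total_frames]
    for start, end in zip(bounds, bounds[1:]):
        # A GOP must be re-encoded if any effect range overlaps it
        touched = any(start <= e_end and e_start <= end - 1
                      for e_start, e_end in effect_ranges)
        plan.append(("re-encode" if touched else "copy", start, end - 1))
    return plan

# Four 12-frame GOPs; a dissolve overlays frames 20..27 only.
plan = plan_smart_render([0, 12, 24, 36], 48, [(20, 27)])
for action, first, last in plan:
    print(action, first, last)
# copy 0 11 / re-encode 12 23 / re-encode 24 35 / copy 36 47
```

Here half the timeline is written out bit-for-bit identical to the source, which is where the speed and quality savings come from.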

When it comes to file ingest, a MAM workflow must be compatible with all types of user and content metadata for reliable, on-demand asset management within its database, plus allow user-friendly content searching of media library files.

For video servers, file ingest can also present significant challenges. For example, video often comes from a variety of different sources, such as a file, SDI or network, and can be written to disk in real-time or for video-on-demand or archival storage. Also, many broadcast playout servers use parallel processing to read from the file as it’s still being written (i.e., read-while-write), while allowing access to parts of the data as the file is being updated. Lastly, synchronization and highly CPU-efficient decoding is required in a 24/7 broadcast production, which further complicates the file ingest process and increases failure risk.
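The read-while-write pattern mentioned above can be illustrated with a small sketch: one thread appends “frames” to a file while a reader polls for new bytes, roughly as a playout server ingests a file that is still being recorded. The file name and frame markers here are invented:

```python
# Illustration only of read-while-write: a writer thread grows a file
# while a reader polls past its last EOF for newly appended data, as a
# playout server does with a file still being recorded.
import os
import tempfile
import threading
import time

def writer(path: str, frames: int) -> None:
    with open(path, "ab") as f:
        for i in range(frames):
            f.write(f"frame{i};".encode())
            f.flush()                           # make bytes visible to reader
            time.sleep(0.01)

def read_while_write(path: str, expect: int) -> bytes:
    data = b""
    with open(path, "rb") as f:
        while data.count(b";") < expect:        # keep polling for new frames
            chunk = f.read()
            if chunk:
                data += chunk
            else:
                time.sleep(0.005)               # writer not finished yet
    return data

path = os.path.join(tempfile.mkdtemp(), "growing.dat")
open(path, "wb").close()                        # create the empty file first
t = threading.Thread(target=writer, args=(path, 5))
t.start()
data = read_while_write(path, expect=5)
t.join()
print(data.decode())                            # frame0;frame1;...;frame4;
```

A production server additionally has to cope with index tables that are rewritten at the end of recording, which is one reason robust read-while-write is harder than it looks.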

What types of video output do devices provide? 

There are literally thousands of device types that are used for producing and recording video with a virtually infinite number of combinations of output variants already in existence. Nowadays, file ingest tools need to seamlessly support these differences and provide a straightforward flow of data to the application, regardless of the type of input file.

Keeping up-to-date support for all these nuances requires constant maintenance and specialized skills. For example, two MP4 files might look the same to a user, but depending on the capture device and its configuration they can have vastly different internal structures and require specific instructions to ingest properly.

Is there a tool for universal file ingest?

One of the best-kept secrets at MainConcept® is mfimport, our highly flexible file ingest tool that accompanies our codecs in each SDK and has been developed and maintained for nearly 30 years by the most experienced developers in professional video. mfimport is a small but mighty engine that serves as the backbone for many of our media customers and provides seamless media file reading and ingest for their editing, capture, archive, playback, and MAM products and platforms.

For decades, MainConcept has been a trusted partner to thousands of companies across all parts of the media production and broadcast industry, which affords us access to an unlimited number of real-world media files to work with. In turn, our mfimport, demultiplexers and decoders have been battle-tested for nearly every video use-case imaginable and are essential components in many production products (both MainConcept’s as well as other industry staples such as Adobe Premiere Pro).

How do you keep mfimport up to date?

With each new recording device that enters the market for capturing video with different metadata, video bitstreams, container formats, etc., our team of seasoned developers makes sure our mfimport is compatible, efficient and reliable. As mentioned above, a media file ingest module such as MainConcept’s mfimport isn’t static, so it’s constantly evolving. Our dedication to making file ingest seamless and serving the professional video industry has delivered field-hardened reliability to thousands of companies, organizations and development teams globally.

Where can I get MainConcept mfimport?   

mfimport is included in many MainConcept SDK packages and used in our applications and plugins. If you want to learn more about our mfimport, demultiplexers or decoders, please contact one of our media workflow specialists, and we’ll be happy to assist.

Why are custom foam inserts for cases so important?

Whatever you’re transporting or storing, you want your equipment or goods in perfect condition. Foam inserts for cases can offer complete protection, whether they’re holding medical, electronic, military or camera equipment, or some other delicate items.

How do they do this? Well, foam inserts for equipment cases have several vital properties. They are:

  • Precisely shaped to provide full support to equipment
  • Rigid enough to carry objects’ weight
  • Soft enough to cushion items against impact
  • Designed to avoid reacting with equipment, through discolouration or adhesion, for example

In addition, foam inserts must also look presentable for items that will be showcased in their cases.

How do foam inserts protect equipment?

Simply put, if there’s an impact on a case, the foam inserts come between that impact and the case’s contents. The energy from the impact is dissipated in the foam. Individual objects within the case are also prevented from coming into contact with one another, as each item sits within its own niche, which has been precisely cut out of the foam to hug the item on every side.

As well as guarding against impacts, the foam also protects equipment against prolonged vibration, which can loosen parts and lead to damage.

For added protection against impacts, hard cases are required. They can soak up more powerful impacts, sometimes becoming damaged themselves while leaving the goods inside untouched. When these powerful impacts occur, the foam decelerates the items before they can make contact with the case’s inside surface or, indeed, with each other. The foam then recovers, ready to protect against any further accidents.

Choosing the right foam 

If the contents of a case are to be fully protected, selecting the correct density and rigidity of foam is vital. The choice made will depend upon the mass of the contents and how delicate they are.

If there isn’t enough support coming from the foam, the objects inside could crash into the case. Thicker foam allows more room for deceleration after an impact and reduces the peak load on the equipment inside. However, thicker foam will require a larger case.
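The trade-off can be put in numbers. Assuming idealised constant deceleration (real foam behaves non-linearly), an item dropped from height h and brought to rest over crush distance d experiences an average deceleration of h/d in multiples of g:

```python
# Back-of-envelope arithmetic behind "thicker foam, lower peak load".
# Idealised constant deceleration is assumed; real foam is non-linear
# and a designer would work from the foam's cushioning curves.

def average_deceleration_g(drop_height_m: float, crush_distance_m: float) -> float:
    """Average deceleration, in multiples of g, for an idealised stop."""
    # Energy m*g*h absorbed over distance d gives force m*g*h/d,
    # i.e. h/d when expressed in g's; mass cancels out.
    return drop_height_m / crush_distance_m

# A 1 m drop into 25 mm of usable foam travel, versus 50 mm:
print(average_deceleration_g(1.0, 0.025))   # 40.0 (g)
print(average_deceleration_g(1.0, 0.050))   # 20.0 (g)
```

Doubling the usable foam travel halves the average load on the equipment, which is exactly why thicker foam demands a larger case.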

How are foam inserts made?

To create spaces in foam inserts for individual objects, CNC (computer numerical controlled) machining is often used. It smoothly and precisely cuts the foam to exact specifications, with measurements taken directly from the objects themselves, from scanned data or from 3D data supplied by the items’ manufacturers.

The objects can then travel together, but separated from each other to prevent damage. This has the added benefit of making the items easy to find and identify in their case.

A professional look

Not only do foam inserts in protective cases prevent damage to the contents; they also provide an elegant option for displaying a brand. Different colours and two-tone foam are available, as well as logos and text, helping to showcase a brand’s identity alongside its products.

CP Cases offers expertly crafted foam inserts, which have gone through a rigorous process of precise machine cutting, sculpting and hand-forming. We offer various options for colour, multiple laminations and foam materials, including those with anti-static, conductive and self-extinguishing properties.

If you have any questions about foam insert protection or wish to discuss a bespoke foam design, contact CP Cases on 0208 568 1881 or email info@cpcases.com.