Limecraft 21.2 Product Update

Earlier this year, as demand for remote post-production grew, we delivered a series of product updates to bring maximum performance to the platform. Building on these, we’re excited to share our latest product update, built for your specific needs.

Enhanced Security through Two-factor Authentication (2FA)

Security has always been a top priority. Working for top producers and brands requires us to go above and beyond the state of the art when it comes to security standards. As demand for remote collaboration increased, we introduced multi-factor authentication. When you opt in, your team members’ identities are double-checked to prevent password guessing.

While not mandatory, you can make 2FA required for all users working on productions in your account. Users who already have 2FA enabled will not notice any changes; users who don’t will be prompted to set it up. To learn more about enabling 2FA in your account, check out our support page on how to enable or disable 2FA.

Logging into Limecraft Flow using Two-Factor Authentication (2FA)

Our product team remains committed to allowing you to set up the most secure environment for hosting your content. We welcome any suggestions in this regard.

A better Search Experience

The new search engine gives you maximum control without the complexity you might expect. It also lets you efficiently retrieve the right fragments from very large collections, saving hundreds of hours of manual work.

The new search engine allows you to intuitively and iteratively refine your search

We understand that searching for media can be time-consuming, so we made it easier and faster to browse for collections, story parts, and scenes. Using the new search engine, not only will you save a lot of time when dealing with a large production library, the process also becomes more intuitive. As before, you can save your search results as a quick view, creating a shortcut for future use.

You can learn more about how to use the search engine on our knowledge base. As always, we warmly welcome your suggestions and comments.

Using AI for Automatic Shot Description (beta)

As part of the MeMAD project, we conducted extensive testing of various AI applications in media production, including subtitling, archive indexing and automatic shot description. We concluded that AI is still in its infancy when it comes to delivering a service that works reliably.

One of the key findings was that AI services produce massive amounts of mostly unusable data. To turn this data into usable information, it has to be reconciled into a single coherent description.

Making sure AI delivers a usable result may require reconciliation and proofreading (picture courtesy of the Associated Press)

Capitalising on the results of the MeMAD project, Limecraft now delivers a complete shot list, including persons, scene descriptions and soundbites, along with the rhythm of the edit. The result is presented in a user interface designed for proofreading and quality assurance, allowing you to add a human touch and achieve maximum accuracy.

This new approach to indexing video potentially saves massive amounts of manual work, while delivering a more complete and accurate result than any alternative. Moreover, it cuts the turnaround cycle of incoming newsfeeds to a minimum.

Improved export to Edit

Limecraft makes it easy to export shot lists or sync pulls to the edit suite. However, to make sure the content descriptions, whether produced automatically, manually or anything in between, are useful to the editor, we had to improve the export to edit.

Using AI generated metadata in the edit suite requires extensive processing

When you want to make AI-generated metadata available in the edit suite, one of the challenges is synchronising the different timelines. For example, if the video consists of several shots but you only have one large transcript, it doesn’t make sense to import the transcript as-is.

To make the transcript and shot descriptions usable for the editor, we cut and synchronise them along with the edit decisions. As a bonus, you’ll end up with a detailed table of contents of what’s in the edit, without any manual tagging or post-editing.

AI-generated shot lists and the export to edit are currently available in private beta. If you are interested in a demo or would like more information, feel free to get in touch.

More Sustainable Sports Competitions through Remote Production with JPEG XS

intoPIX’s new low-latency solutions reduce the need for high-bandwidth connections & eliminate the need for on-site hardware and people

Like many other areas, sport is rightly becoming increasingly environmentally conscious. Many factors contribute to carbon emissions during a competition, the transport of broadcasting equipment and crew among them. Here, sports production as a whole can make significant progress by shifting to remote production.

Although the media technology sector initially focused on the financial benefits of remote production, it quickly recognized the environmental benefits of this mode of production as well.

Using JPEG XS to enable remote production

The new JPEG XS lightweight low-latency codec reduces the need for high-bandwidth connections and eliminates the need for on-site hardware, as content can be worked on in the cloud with near-zero latency and pristine image quality.

Remote production does not eliminate all transport costs, but it does deliver a significant reduction for anyone who wants to limit their environmental impact.

Let’s be honest, from an environmental point of view, does it really make sense for a broadcaster to send its entire team of presenters to the four corners of the earth to cover a competition when there are facilities and studios that can do it from home?

We still have a long way to go!

The broadcasting sector has the opportunity to lead the way in sustainability. There are smart ways to use technology, measure and manage our footprint and ensure that we build next-generation technical models in a sustainable way.

Discover our JPEG XS Solutions.

Read more about our media IP & broadcast solutions.

High Quality Live Production in the LAN, over the WAN and into the Cloud using JPEG XS

IP networking has revolutionized media contribution and distribution along with the SMPTE ST2022 and ST2110 standards which lie at the core of new live workflows, dematerialized facilities and cloud-based operation.

intoPIX offers essential transport solutions to the broadcast world and live production. Whether for transport in the LAN, over the WAN or to the CLOUD, JPEG XS simplifies transport at low cost, without any latency and on existing infrastructures.

As a broadcaster, you are no longer forced to use heavy and costly SDI cables. You can work easily and quickly with compressed flows without any impact on quality.

Transporting HD, 4K and 8K over SMPTE 2110 (-22) with TICO-XS lets you carry any video format over IP with full flexibility and all the benefits of intoPIX solutions. JPEG XS extends network capacity to manage HD, 4K and 8K streams over IP (1G/10G/25G) while keeping the lowest latency (down to a single millisecond) and a visually lossless quality.

The advantages of JPEG XS are numerous in live production

JPEG XS to help the deployment of remote video review solutions for Video Assistant Referees (VAR)

VAR match officials can now work remotely from any location, with near-zero latency thanks to JPEG XS!

Spurred on by the COVID-19 pandemic, intoPIX, together with its customers and partners, has devoted time and energy to developing and deploying a remote video access and cloud-based solution with near-zero latency and very high image quality.

With intoPIX JPEG XS software-based or hardware-based solutions, it is possible to remotely import or export high-quality live video streams to your TV facility or the public cloud (such as AWS), with high reliability, near-zero latency and manageable bandwidth requirements. This makes it possible to reinvent VAR. This new way of working will significantly reduce the required size of the broadcast infrastructure and thus allow significant cost reductions.

When using VAR at sporting events such as premium football, an OB van is usually assigned to the stadium with several people inside. Existing remote VAR rooms are also available but expensive to connect to the stadium. With very low-complexity compression technologies such as intoPIX’s JPEG XS solutions and the ability to send many streams over the network, VAR is becoming cheaper to set up remotely and is now accessible to federations that could not afford to use VAR previously.

Remote Production with JPEG XS offers many additional benefits: staff, equipment, transport, …

Let’s take a standard weekend, during which around ten football matches are usually played! With remote production, it is no longer necessary to send a team with all its own equipment to each of the match cities. Instead, a single team working from a fixed facility pre-equipped with all the necessary material can manage several matches over the weekend.

In addition to limiting the number of people and the amount of equipment needed, travel is drastically reduced. This results in huge cost savings and environmental benefits.

People did not wait for the current pandemic to start working on remote production concepts, but COVID-19 has changed mindsets irrevocably. More and more broadcasters are actively looking for remote solutions to reduce the need for their staff to travel. Overall, COVID-19 has been a trigger for most organizations to seriously consider remote or cloud-based workflows for the future.

IABM Journal 117

Published Q2 2021


The Journal is the IABM magazine released every quarter, covering hot topics within the industry. It is distributed widely throughout the industry.

Why “not compressing” simply doesn’t make sense

… when uncompressed involves unnecessary extra cost 

Images and videos are like sponges!

Imagine a sponge or rather thousands of sponges to be transported over several kilometers. It would require several trucks to store and transport them all!

Let’s take all those sponges and squeeze them. If you look closely at the sponge, you will see that it has lots of holes in it, filled with air. When you squeeze it, you remove the air which is useless and the entire sponge takes up less space.

Now let’s put each of them in a small box to store and transport them. The sponges take up less space and require far fewer trucks. Basically, this is what compression does when transporting images and videos.

The advantages of this compression are numerous: less congestion on the roads, reduced transport costs and less pollution, without the slightest loss of material. On arrival, when the sponges are extracted from their packaging, they get their initial shape back and have lost nothing in terms of quality. We then speak of “lossless compression”, because there is no degradation of the information compared to the original.

To us, as experts in image compression, images are like sponges: they are filled with a lot of “air”, data that brings no information to the human eye and that we can remove. The drawbacks of staying uncompressed are systems with bigger memories, networks using more bandwidth, higher storage requirements and higher power consumption. Compressing these images can be so efficient that it makes no sense to transport or store them uncompressed.

Visually lossless, but not lossy!

Images hold a lot of information that can be perfectly processed to keep only useful information without having the slightest impact on human or machine vision.

Codecs remove the unnecessary air from the image or video frame and keep only the most relevant information. The compression rate then positions the cursor between “totally lossless”, with a very high bit rate, and “totally lossy”, with a very low bit rate.

There is nothing magical about compression! To return to our sponge metaphor: if you want to put your sponges in smaller boxes than what you could achieve by squeezing them naturally, you have to cut into the material. In that case, when you decompress a sponge, it will no longer have the same visual appearance: we had to introduce loss, and some information is missing. The same goes for transporting video frames with a lower bandwidth than usual: you need to increase the compression ratio, and if you push the cursor too far, you lose quality.

We then speak of “lossy compression”: quality is lost, and this can lead to errors for the human eye or for the machine. Obviously, intelligent choices are made to minimize the perceived impact of the loss. Trade-offs between bandwidth, resolution, frame rate, color accuracy and other factors determine how lossy the result is.

With JPEG XS, the cursor is positioned much better than with uncompressed video: it allows you to divide the bitrate by up to 10 without any visual impact.

The objective of JPEG XS and intoPIX is to put the cursor in the right place to get the best compromise between quality, speed and complexity.

Does uncompressed really exist?

When it comes to filming natural content, there will always be a difference between what the human eye sees and the image captured and processed by the camera, the sensor and the ISP. Indeed, the camera sensor cannot capture all of reality; it captures as much information as possible to get as close to reality as possible. The camera therefore already introduces a form of compression. We can say that camera sensors are an inevitable sacrifice of reality.

In fact, “uncompressed” video already relies on compression mechanisms; codecs just go a step further to capture higher resolutions with visually lossless quality, the lowest latency and complexity, low power consumption and high speed.

With the new JPEG XS compression standard, for example, a 16:1 compression ratio lets you capture 16x more detail from reality: it enables a camera to handle 8K instead of HD, i.e. 16x the resolution and therefore a better definition. We can very easily transport or record 8K video on a workflow dimensioned for uncompressed HD.
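The 16x figure is plain pixel arithmetic, as a quick sketch shows (assuming 7680x4320 for 8K and 1920x1080 for Full HD):

```python
# Rough sketch of why a 16:1 compression ratio lets an 8K stream
# fit in an HD-sized uncompressed pipe: pixel counts only.
hd_pixels = 1920 * 1080       # Full HD frame
uhd8k_pixels = 7680 * 4320    # 8K UHD frame

ratio = uhd8k_pixels / hd_pixels
print(ratio)  # 16.0 -- 8K carries 16x the pixels of HD
```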

Compression solutions to replace uncompressed!

IntoPIX has always been driven by the wish to find the best image compression solutions to avoid unnecessary costs and nuisances, while maintaining perfect image quality.

Our TICO-XS and TICO-RAW compression solutions reduce the size of images by selecting the useful information to preserve and the information that can be sacrificed without the slightest impact on image quality, from the point of view of either the human eye or the machine. The ultimate goal is always to position the cursor in the right place!

You now understand why it’s time to replace uncompressed everywhere and why “Not compressing” simply doesn’t make sense.


JPEG XS … What does it mean?

JPEG XS is a new ISO format replacing uncompressed video 

Over the last 20 years, the number of shared images and videos has increased considerably. In terms of resolution, we have moved from SD to HD to 4K, even 8K, and this is not about to stop. Higher frame rates, higher resolutions, more precision and higher dynamic range (HDR) imply a considerable increase in the amount of data to be transported on our networks. Bandwidth and storage are getting cheaper, but this does not compensate for the drastic increase in data to be transported or stored. Compression is therefore more than ever a fundamental step in distributing video over the internet.

To smoothly manage the development of these technologies and the growing number of manufacturers, it is essential to define interoperable solutions that work with other existing or future systems, without access restrictions. Especially in communication media, we had to find a way to speak the same language and share common formats. Standards Development Organizations (SDOs) play an important role in meeting this interoperability requirement.

The need to set up a JPEG committee

In the 1990s, the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC) and the International Telecommunication Union (ITU) jointly created the Joint Photographic Experts Group (JPEG committee) to specify image compression standards.

The first standard, published in 1991 by the committee, is also the most famous standard for the general public: the JPEG standard. It is very well known because it targets the consumer market and is universally deployed. Nowadays, the vast majority of images shared on the Internet and on social media are “JPEG”. This format is characterized by very strong interoperability, as the majority of devices and systems are equipped with software for encoding and decoding it. However, other image standards exist: JPEG 2000, JPEG XR, JPEG XT and the new JPEG XS, all published by this JPEG committee of experts. Each of these standards targets a specific market or use case, with specific requirements, leading to a global ecosystem of complementary specifications.

“XS” means “eXtra Small” and “eXtra Speed”

Among all the JPEG standards, JPEG XS, an international ISO standard published in 2019, simplifies video transport between devices. The main purpose of JPEG XS is to bring transparent compression wherever uncompressed video is still in use, so as to decrease pressure on bandwidth requirements.

The JPEG XS standard can be defined as an intra-frame, (visually) lossless image compression algorithm with very low latency and very low complexity. It enables video connectivity over lower-bandwidth connections. The compression range is typically between 2:1 and 15:1, depending on the use case and video content. JPEG XS helps reduce the power consumption of electronic devices, especially in ultra-high-definition video use cases like 4K and 8K.

JPEG XS offers scalable algorithmic latency, ranging from a small number of lines down to less than a single line for a combined encoder-decoder pair. JPEG XS is characterized by its robustness to multi-generation processing, meaning there is no significant quality degradation even after 10 encoding/decoding cycles. Multi-platform interoperability is also one of its key features: JPEG XS allows highly optimized implementations on CPU and GPU, but also on hardware platforms like FPGA and ASIC.

The JPEG XS mezzanine codec standard can be applied wherever uncompressed video is currently used.

Download the JPEG White Paper

SOME EXAMPLES OF USE CASES for JPEG XS

  • In VIRTUAL REALITY, the transmission latency must not exceed one microsecond between the headset and the video source to preserve the experience. JPEG XS enables this wireless transmission, offers significantly higher video resolutions and eliminates the need for wires.
  • In LIVE PRODUCTION & AV over IP, the LAN or WAN infrastructures (cabling, network, devices…) usually cannot exceed 1G, 2.5G or 10Gbps. With a minimal latency, JPEG XS will use the bandwidth that is available today to carry a single or multiple HD, 4K and 8K streams without delay!
  • In all types of MOBILE DEVICES, video transmission between electronic chips consumes a lot of energy, especially when the video quality is high. By reducing the amount of data to be transmitted, JPEG XS considerably limits energy consumption and thus extends the battery life.
  • AUTONOMOUS CARS rely heavily on image and video sensors to operate reliably and avoid accidents. Even a delay of only a fraction of a second could be critical. JPEG XS compresses images in such a way that this latency is reduced to an absolute minimum – around a microsecond – while ensuring lossless quality, both for human vision and obstacle detection based on artificial intelligence.
  • JPEG XS guarantees a live video signal from DRONES to pilots, which is crucial to avoid errors.
  • Given the rapid growth of 5G & WIRELESS NETWORKS, JPEG XS aims to become the standard for improving video quality, reducing latency and simplifying wireless transfer. All these requirements are significant advantages for online GAMES & LIVE STREAMING.

What are the file formats for JPEG XS? 

JXS is the file extension for storing single image files, but the JPEG XS standard can be encapsulated in the most common image/video transport and container formats, such as HEIF, MP4, MXF, MPEG2-TS, RTP, SMPTE 2110-22,…

The new JPEG XS standard can already be opened and manipulated within most of these containers, thanks to the Adobe Premiere plugin and FFmpeg add-on developed by intoPIX on top of our FastTICO-XS SDK.

Discover TICO-XS, the JPEG XS standard engineered by intoPIX

As the proponent and project leader of the “JPEG XS” international ISO standard, intoPIX has an unequaled knowledge of the standard and has developed optimized implementations. These implementations are branded under the name TICO-XS, where TICO simply means TIny COdec.

IntoPIX has built a wide range of implementations, optimizing every aspect of JPEG XS and meeting all of its customers’ requests while remaining compliant with the JPEG XS standard. intoPIX offers visually lossless compression capabilities exceeding the JPEG XS reference, with an extremely small footprint on ASIC and FPGA and blazingly fast performance on CPU and GPU. It has integrated innovative processing for screen/desktop content applications and also developed a mode that runs much faster in software.

Interested in learning more? Contact our experts now!


Understand the concept of bpp and Mbps to define your compressed data rate!

Pixels, color components, chroma subsampling and bit depth… What is my targeted video bandwidth?

The bits per pixel (bpp) concept

Every color pixel in a digital image is created through some combination of the three primary colors: red, green and blue. Each primary color is often referred to as a “color channel” or “color component”, and has a range of intensity values specified by its bit depth. The bit depth of each primary color is termed the “number of bits per channel”, typically ranging from 8 to 16 bits. “Bits per pixel” (bpp) refers to the sum of the bits per color channel, i.e. the total number of bits required to code the color information of a pixel.
An uncompressed RGB image with a bit depth of 8 bits per color will have 24 bpp, or 24 bits per pixel (8 bits for red, 8 bits for green, 8 bits for blue).

RGB, the basis of the color signal

It is possible to combine primary colors with different intensities to create all other colors. The RGB (Red, Green Blue) format proposes to divide the information of a pixel into 3 values: one to code the red intensity, another for the green and the last for the blue. Each image in an RGB video stream is in fact the sum of 3 sub-images, each pixel on a screen being composed of 3 sub-pixels.
The panel therefore simultaneously displays the red, green and blue images, and human eyes interpret it as an image full of colors. By playing with the intensity of each of the sub-pixels it is thus possible to reproduce a large palette of RGB colors.

YCbCr, the signal separating luminance and chrominance

With the advent of color TV, it became necessary to add chrominance (colors) information to the historical luminance (black and white) signal in a single signal.

YCbCr splits the image into 3 components:

  • Y = black-and-white image (luminance)
  • U / Cb = blue-difference image (chrominance, obtained from Blue – Y)
  • V / Cr = red-difference image (chrominance, obtained from Red – Y)

In the digital world, Reversible (RCT) or Irreversible (ICT) Color Transforms can be used to convert RGB images to the YCbCr color format and vice versa.
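As a rough illustration of the difference between the two, here is a minimal Python sketch. The reversible transform follows the JPEG 2000 RCT; the irreversible one uses BT.601 full-range coefficients as an illustrative assumption, since the exact coefficients depend on the standard in use:

```python
def rct_forward(r, g, b):
    """Reversible Color Transform (integer arithmetic, exactly
    invertible), as specified for JPEG 2000."""
    y = (r + 2 * g + b) // 4
    cb = b - g
    cr = r - g
    return y, cb, cr

def rct_inverse(y, cb, cr):
    """Exact inverse of rct_forward: no information is lost."""
    g = y - (cb + cr) // 4
    return cr + g, g, cb + g  # (r, g, b)

def ict_forward(r, g, b):
    """Irreversible Color Transform (floating point). The BT.601
    full-range coefficients here are an illustrative assumption."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

# The RCT round-trips exactly, which is what makes it "reversible":
assert rct_inverse(*rct_forward(200, 120, 40)) == (200, 120, 40)
```

The integer RCT costs a little coding efficiency but guarantees losslessness; the floating-point ICT decorrelates the channels better at the price of rounding error.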

Chroma sub-sampling to reduce bandwidth

Most video signals separate luminance from chrominances. It has been established that human eyes are much more sensitive to black and white (luminance) than colors (chrominance). In an effort to save bandwidth, why not reduce the color information since most of it would be lost to the viewer anyway?

Each pixel of the final image is actually reconstructed from the 3 components: Y, Cb and Cr. Chroma subsampling consists of reducing the resolution of the Cb and Cr color components without introducing actual compression. Since the luminance (Y) remains unchanged and is the main information caught by the human eye, the result can be quite impressive on natural content. It is often impossible to see the differences between a subsampled image and the original, provided you use a suitable down-sampling format.

The sampling structure is defined by 3 numbers on a matrix of 8 pixels (4×2). The first digit refers to the number of luminance samples (Y) per row, the second the number of chrominance samples (Cb / Cr) on the first row of pixels and the third the number of chrominance samples (Cb / Cr) on the second row of pixels.

The 4.4.4 format corresponds to a raw format, without compression, sub-sampling or loss of quality. Each pixel of the final image is generated from a Y luminance pixel, a Cb chrominance pixel and a Cr chrominance pixel – Red pixel, Green pixel and Blue pixel in the case of RGB. In this configuration, there is no difference between an RGB or YCbCr signal. This configuration is used in ProAV, Computer displays, but also in the professional world of cinema. The very high bandwidth represents a significant cost.

With the 4.2.2 format, the horizontal resolution of the chroma is halved, in other words, the same Cb color will be used for the final rendering of two pixels (same for Cr color). With a 33% reduction in throughput and a difference invisible to the naked eye, this format is the preferred one in the television world.

The 4.2.0 format is the sub-sampling used for the general public: TV programs, films, video games, video streaming… In this case, the color images (Cb and Cr) see their horizontal and vertical resolutions divided by two.

Here the bandwidth is reduced by 50% compared to 4.4.4. Since the human eye is more sensitive to light than to color, the visual quality remains excellent even in 4.2.0.

Chroma subsampling means switching, for example, from a 4.4.4 format to a 4.2.2 format, which reduces the number of bits per pixel (bpp) without actually compressing anything. In other words, where a 4.4.4 format means 24 bpp (3 components x 8 bits), only 16 bpp (2 x 8 bits) are needed in the equivalent 4.2.2 format.
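The bpp arithmetic generalises over the 4×2 sampling region described above; a small sketch (the helper name is ours, for illustration only):

```python
def bits_per_pixel(a, b, bit_depth=8):
    """bpp for a 4:a:b sampling scheme over a 4x2 pixel region:
    8 luma (Y) samples plus (a + b) samples each for Cb and Cr."""
    samples_per_region = 8 + 2 * (a + b)
    return bit_depth * samples_per_region / 8  # 8 pixels per region

print(bits_per_pixel(4, 4))  # 24.0 -> 4.4.4
print(bits_per_pixel(2, 2))  # 16.0 -> 4.2.2
print(bits_per_pixel(2, 0))  # 12.0 -> 4.2.0, half the 4.4.4 rate
```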

How to calculate megabits per second (Mbps)? How many bits per pixel (bpp) do I have? What is my video bitrate?

Bitrate in bps = resolution in pixels x fps x bpp (divide by 1,000,000 to obtain Mbps)

Let’s take a few examples to better understand how we compute the size of a video stream:

Example: 4K@24fps 4.4.4 8-bit uncompressed

Resolution: 4K = 3840 x 2160 = 8,294,400 pixels
Resolution with blanking: 4K = 4400 x 2250 = 9,900,000 pixels

Frames per second (fps): 24

Format: 4.4.4 8-bit = 24 bpp (3 colors x 8 bits = 8+8+8)

9,900,000 pixels x 24 fps x 24 bpp = 5,702,400,000 bps = 5,702.4 Mbps ≈ 5.7 Gbps

Example: 4K@60fps 4.4.4 8-bit uncompressed

Resolution: 4K = 3840 x 2160 = 8,294,400 pixels
Resolution with blanking: 4K = 4400 x 2250 = 9,900,000 pixels

Frames per second (fps): 60

Format: 4.4.4 8-bit = 24 bpp (3 colors x 8 bits = 8+8+8)

9,900,000 pixels x 60 fps x 24 bpp = 14,256,000,000 bps = 14,256 Mbps ≈ 14.3 Gbps

Example: Full HD@24fps 4.4.4 8-bit uncompressed

Resolution: FHD = 1920 x 1080 = 2,073,600 pixels
Resolution with blanking: FHD = 2200 x 1125 = 2,475,000 pixels

Frames per second (fps): 24

Format: 4.4.4 8-bit = 24 bpp (3 colors x 8 bits = 8+8+8)

2,475,000 pixels x 24 fps x 24 bpp = 1,425,600,000 bps = 1,425.6 Mbps ≈ 1.4 Gbps

Example: 4K@60fps 4.2.2 8-bit uncompressed

Resolution: 4K = 3840 x 2160 = 8,294,400 pixels
Resolution with blanking: 4K = 4400 x 2250 = 9,900,000 pixels

Frames per second (fps): 60

Format: 4.2.2 8-bit = 16 bpp (8 bits + 4 bits + 4 bits = 16)

9,900,000 pixels x 60 fps x 16 bpp = 9,504,000,000 bps = 9,504 Mbps ≈ 9.5 Gbps

We can see that going from a 4K60-4.4.4 format to a 4K60-4.2.2 format reduces the bitrate by around 33%, purely thanks to sub-sampling. No compression algorithm was applied.
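As a sanity check, these uncompressed figures can be reproduced in a few lines of Python (the helper is illustrative, not an intoPIX tool):

```python
def bitrate_mbps(width, height, fps, bpp):
    """Uncompressed bitrate: pixels/frame x frames/s x bits/pixel, in Mbps."""
    return width * height * fps * bpp / 1e6

# Resolutions including blanking, 8-bit per channel:
print(bitrate_mbps(4400, 2250, 24, 24))  # 5702.4   (4K24, 4.4.4)
print(bitrate_mbps(4400, 2250, 60, 24))  # 14256.0  (4K60, 4.4.4)
print(bitrate_mbps(4400, 2250, 60, 16))  # 9504.0   (4K60, 4.2.2)
print(bitrate_mbps(2200, 1125, 24, 24))  # 1425.6   (FHD24, 4.4.4)
```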

Reducing the “bpp” thanks to compression!

The amount of shared data, especially video, has increased considerably. We are moving from SD and HD to 4K and 8K, and it keeps evolving: higher frame rates, higher resolutions, more precision and higher dynamic range (HDR) imply a considerable increase in the amount of data to be transported on networks.

Compression technology helps manage more pixels and more quality over a limited bandwidth, using existing devices and infrastructure.

A quick example: standard CAT5E Ethernet cables can easily transport 1 Gbps, but uncompressed 4K video often reaches 10 to 16 Gbps. HD streams (720p) can be transported over CAT5E, but as soon as you need to transport 4K, compression is required! intoPIX codecs are the best way to transmit 4K easily below 1 Gbps without any latency or quality loss: reducing the bpp to 1.5 takes your 4K stream below 1 Gbps, at 746 Mbps.
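That 746 Mbps figure is easy to verify; a minimal sketch, assuming active 4K pixels only (no blanking) at 60 fps:

```python
# Active 4K60 pixel rate compressed down to a 1.5 bpp target:
active_pixels = 3840 * 2160            # 8,294,400 pixels per frame
mbps = active_pixels * 60 * 1.5 / 1e6  # fps x bits per pixel
print(round(mbps))  # 746 -- comfortably below a 1 Gbps CAT5E link
```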

Discover the VIDEO COMPRESSION CALCULATOR!

Our intoPIX Video Compression Calculator is built to help you compute the compression rate you need. You can use it for any application and any codec.

Select your video format parameters, and determine the compression rate required to transport your video stream. This can even help you configure your compression rate with any intoPIX IP-cores or Fast SDK.


Why would you need video compression?

How do you provide a data compression solution without making the slightest compromise on quality or latency, while using the existing infrastructure and keeping hardware complexity low?

Technology improves constantly and internet connections get better and faster, but at the same time video resolutions get higher and files get exponentially bigger! Compression is more than ever a requirement for distributing your video.

Over the last 20 years, resolution and frame rates have moved from SD (@24fps), to HD, then 4K (@60fps), now reaching 8K (@120fps) and will likely keep increasing. Thanks to all these improvements, we are entertained with better images and videos, and machine vision algorithms (AI, Analytics) can make better decisions.

Higher frame rate, higher resolution, more bit per pixel (or precision) and higher dynamic range (HDR) imply a considerable increase in the amount of data to be transported, recorded and processed. 

We have more pixels to manage, store and transport … and the pipelines are already jammed!

If we don’t compress our video content, we face multiple issues: big investments due to expensive hardware replacement and expanding storage capacities, huge transition costs, heavy infrastructures and systems requiring complete redesign or re-installation with higher complexity, accelerated obsolescence, an increased carbon footprint, …

And on top of that, a big impact on power consumption because of the larger interfaces, bandwidth and memory requirements.

Compression is no longer optional!

Compression technology helps managing more pixels over a limited bandwidth using existing devices and infrastructures

Today it is estimated that 70 billion meters of 1 GbE cable are installed and in use worldwide. Replacing them would have a huge environmental impact.

The JPEG XS compression algorithm helps you manage higher resolutions, frame rates and stream counts while preserving uncompressed visual quality, with near-zero latency and very low hardware/software complexity.

In addition to power consumption savings, video compression reduces BOM cost thanks to cheaper switches, less cabling, smaller FPGAs and a better use of CPU/GPU processing power.

Why should you choose JPEG XS compression technology? 

In the past, the question was: why NOT use compression? Latency and complexity were the main drawbacks. Thanks to JPEG XS technology, those last remaining obstacles have been removed.

The JPEG XS standard defines a visually lossless compression algorithm with very low latency and very low complexity. JPEG XS reduces the energy consumption of electronic devices, enables ultra-high-definition video such as 4K and 8K, and fosters video connectivity over lower-bandwidth links such as Gigabit Ethernet or wireless connections. This opens up possibilities across a wide range of communication technologies!

  • JPEG XS has a much lower complexity than any inter-frame codec. This leads to a much cheaper implementation and a tiny hardware footprint, with no need to store frames in additional, expensive DDR memory. It also balances complexity more evenly between encoder and decoder, making it better suited to environments with equal numbers of encoders and decoders.
  • A huge reduction in power consumption follows directly: because JPEG XS is a line-based compression technology, it does not require much memory.
  • In terms of latency, JPEG XS operates at microsecond scale and can thus be run throughout a whole live production workflow without even inducing the latency of a single encoding-decoding step of another codec; combined encoding and decoding stays well below 1 millisecond.

Given the constant evolution of image technology, such as 4K and 8K, the emergence of Virtual Reality, autonomous vehicles, 5G, AV-over-IP, … it is essential to provide an invisible video compression solution: one that makes not the slightest compromise on image quality or latency, while using the existing infrastructure at low hardware or software complexity. If compression is no longer optional, JPEG XS enables it without any compromise!

Interested in learning more about the benefits of compression? Contact our experts now!


What makes JPEG XS technology different from other codecs?

Let’s have a little talk with our expert in compression technology, Antonin Descampe!

Antonin Descampe is a co-founder of intoPIX and has been a member of the JPEG committee for 15 years.

In an interview, Antonin explained to us how the JPEG XS technology differs from other codecs and what its advantages are over existing alternatives.

Antonin, could you explain what JPEG XS is and how it differs from JPEG 2000, Motion JPEG and the various MPEG standards?

Antonin: The main difference between JPEG XS and existing codecs from the JPEG, MPEG or other standardization committees is that compression efficiency is not the main target. Whereas other codecs primarily focus on compression efficiency, disregarding latency or complexity, JPEG XS addresses the following question: “How can we ultimately replace uncompressed video?”. The goal of JPEG XS is therefore to allow increasing resolutions, frame rates and numbers of streams, while safeguarding all the advantages of an uncompressed stream, i.e. interoperability, visually lossless quality, multi-generation robustness, low power consumption, low latency in coding and decoding, ease of implementation, small size on chip (no additional DDR), and fast software running on general-purpose CPUs and GPUs.

No other codec fulfills this set of strong requirements simultaneously. It can thus “compete” with uncompressed in every aspect and reduce bandwidth / video data significantly.

What sort of compression ratio is reasonable with JPEG XS, and what are the compression choices for a 4K video?

Antonin: In a nutshell, the typical operating points for visually lossless quality with JPEG XS are around 10:1.

However, it is important to take resolution and content type into account when identifying a maximum compression ratio. For instance, natural content usually reaches higher compression ratios for a given quality level.

Moreover, “visually lossless quality” can mean different quality levels. During its development, JPEG XS was tested against the strictest quality assessment procedures (ISO/IEC 29170-2, “Evaluation procedure for visually lossless coding”), seeking the threshold at which flickering between the original and the compressed image becomes indistinguishable – a measure often referred to as “visual transparency”.

Based on our tests, covering different kinds of content (screen content, computer-generated imagery (CGI) and natural imagery), we defined the following table. The lower end of each bitrate range typically covers natural content, while the upper end covers more complex content or use cases requiring full visual transparency.

| Formats | Compressed bitrates | IP network & SDI mapping |
| --- | --- | --- |
| HD 720p60 / HD 1080i60 | 70 – 200 Mbps | 1 to X streams over 1GbE |
| HD 1080p60 | 125 – 400 Mbps | 1 to X streams over 1GbE |
| 4K 2160p60 | 500 Mbps – 1.6 Gbps | 1 stream over 1GbE; 1 to X streams over 10GbE; Single 3G-SDI / Single HD-SDI |
| 8K 4320p60 | 2 – 6.4 Gbps | 1 to 4 streams over 10GbE; Single 3G-SDI / Single 6G-SDI / Single 12G-SDI |
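
As a sanity check on the bitrate ranges above, this small sketch (an illustration, assuming a 10-bit 4:2:2 source, i.e. 20 bits per pixel of raw payload) computes the compression ratio implied by a target bitrate:

```python
def compression_ratio(width, height, fps, target_mbps, bits_per_pixel=20):
    """Ratio between the raw payload bitrate and a target compressed bitrate."""
    raw_mbps = width * height * fps * bits_per_pixel / 1e6
    return raw_mbps / target_mbps

# 4K 2160p60 at the table's bounds of 500 Mbps and 1.6 Gbps:
print(round(compression_ratio(3840, 2160, 60, 500), 1))   # 19.9 (i.e. ~20:1)
print(round(compression_ratio(3840, 2160, 60, 1600), 1))  # 6.2 (i.e. ~6:1)
```

The two bounds thus bracket the typical 10:1 operating point quoted above.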

 

JPEG XS is specifically targeted at high-end video applications, such as broadcasting, broadcast contribution, virtual reality applications, etc. Why JPEG XS and not H.264 or H.265?

Antonin: Video applications like broadcasting, broadcast contribution and virtual reality require features that MPEG-4 AVC / H.264 and HEVC / H.265 do not offer.

JPEG XS has a much lower complexity than any inter-frame codec such as the MPEG ones. This leads to a much cheaper implementation and a tiny FPGA footprint, with no need to store frames in additional DDR. It also balances complexity more evenly between encoder and decoder, making it better suited to environments with equal numbers of encoders and decoders; an MPEG-4 AVC / H.264 encoder is much more complex than its decoder.

There is also a huge difference in terms of power consumption. MPEG-4 AVC / H.264 and HEVC / H.265 require a lot of memory due to their inter-frame / GOP-based scheme. They would therefore never be used to reduce power consumption or interfaces within an electronic device, as they are highly complex and consume a lot of power themselves. JPEG XS does not require such memory since it is a line-based compression technology.

In terms of latency, using MPEG-4 AVC / H.264 or HEVC / H.265 in a live production workflow with multiple encoding and decoding steps would lead to a cumulative latency of many seconds. JPEG XS has microsecond latency and can thus be run throughout a whole live production workflow without even inducing the latency of a single MPEG-4 AVC / H.264 encoding-decoding step. Even though we need H.265 for the last mile to distribute to consumers, we try to avoid any additional latency in the production workflow before distribution. Aside from broadcast, the applications that JPEG XS targets need real-time transmission: autonomous driving systems, KVM extension, VR/AR gear, … A delay of more than 100 milliseconds would make these applications unusable (or, in the case of an autonomous car, even lead to a crash). JPEG XS stays well below this limit at under 1 millisecond for combined encoding and decoding.
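
To illustrate how latency accumulates over a production chain, here is a hypothetical sketch (the hop count and per-hop figures are illustrative assumptions, not measured values for any product):

```python
def chain_latency_ms(hops, per_hop_ms):
    """Codec latency accumulated over successive encode/decode hops."""
    return hops * per_hop_ms

hops = 5  # e.g. camera link, mixer, graphics insertion, replay, playout
jpeg_xs = chain_latency_ms(hops, 0.5)    # assume < 1 ms per encode+decode
gop_codec = chain_latency_ms(hops, 500)  # assume ~0.5 s per GOP-based hop
print(jpeg_xs, "ms vs", gop_codec, "ms")  # 2.5 ms vs 2500 ms
```

Even under these rough assumptions, the GOP-based chain blows past the 100 ms real-time budget while the JPEG XS chain stays comfortably inside it.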

In fact, JPEG XS not only targets high-end video applications; it is suitable anywhere uncompressed video is currently used and high quality levels must be maintained while gaining efficiency – and who wouldn’t want that? Hence, there is also a great focus on consumer electronics such as mobile devices, cars, TVs and other screens.

What is the status of the JPEG XS standardization process?

Antonin: Concerning the status of the standardization process itself, JPEG XS Part-1 (core coding system), Part-2 (profiles and buffer models) and Part-3 (transport and container formats) are already published and available online as International Standards. Part-4 and Part-5 (conformance testing and reference software, respectively) are in the final stage and shall be published during Q2 this year.

While Part-1 relates to the actual compression algorithm, Part-2 defines several profiles that can be seen as operating points suited to particular applications or content types. In Part-3 and in other standardization activities, various file and transport formats are specified, allowing one or several JPEG XS code streams to be stored or streamed (see the table below).

Recently, a new activity has started within the JPEG Committee: a first amendment to Part-1 and Part-2 specifying additional coding tools dedicated to the compression of Color Filter Array (CFA) data, commonly known as Bayer patterns. These new tools will make JPEG XS even better suited to use cases involving image sensor data compression, such as those found in the automotive industry or in professional cameras.

Besides this process, there are several ongoing liaisons between standard bodies and industrial organizations such as AIMS, VSF, SMPTE, TICO Alliance, IETF, etc. At the last IP Showcase at NAB there was a presentation about JPEG XS in ST2110-22. Several broadcast suppliers are already working on implementation within their upcoming products.

| Items | Descriptions | Current status | Target publication dates |
| --- | --- | --- | --- |
| ISO/IEC 21122-1 | Part 1: Core coding system | Published | Published |
| ISO/IEC 21122-2 | Part 2: Profiles and buffer models | Published | Published |
| ISO/IEC 21122-3 | Part 3: Transport and container formats | Published | Published |
| ISO/IEC 21122-4 | Part 4: Conformance testing | Final stage | Q2 2020 |
| ISO/IEC 21122-5 | Part 5: Reference software | Final stage | Q2 2020 |
| ISO/IEC 21122-1:2019/AMD1 | Part 1 Amendment 1: Extended capabilities for JPEG XS | Draft under review | Q1 2021 |
| ISO/IEC 21122-2:2019/AMD1 | Part 2 Amendment 1: Profile extensions | Draft under review | Q1 2021 |
| IETF RFC JPEG XS RTP | JPEG XS RTP payload | Draft formally adopted by IETF payload WG | TBD |
| SMPTE 2110-22 | Compressed essence in ST 2110 | Published | Published |
| ISO/IEC 13818-1:2019/AMD1 | MPEG-2 Transport Stream (TS) wrapper for JPEG XS | Published | Published |
| SMPTE ST 2124 | MXF wrapper for JPEG XS | Final draft under review | Q3 2020 |

 

Thank you, Antonin, for all these great explanations!

We hope this has given you a better understanding of the JPEG XS technology and its advantages over other codecs. Please feel free to contact us if you would like more information about JPEG XS; we would be happy to talk it through with you!