Transforming live video streaming: The power of edge computing at the video source

With the growth, variety, and criticality of video, the ways in which we capture, process, and distribute video streams are changing. Today, video producers and consumers want more control over the process. This includes technical aspects such as resolution, latency, and workflows, among a myriad of other factors. Just as important, however, are the ways video is distributed. The rise of platforms such as YouTube, Twitch, and TikTok has been staggering. Yet public sites such as these are entirely unsuitable for manufacturing plant monitoring, public safety and security applications, or even for fully controlled, subscription-based entertainment or educational services.

The goal for many is to gain more freedom to control the entire video process from camera to screen, or even from machine to machine for use cases where video is simply a means of delivering analytical insight. This overview is the starting point for discussing how the power of edge computing at the video source is set to revolutionize how video is handled.

Why is foam protection important in packaging?

‘Foam’ can describe many materials. For equipment protection, it is precisely defined and controlled.

  • It must be rigid enough to hold the equipment yet soft enough to cushion it against impact.
  • It must be capable of being shaped to support the equipment.
  • It must have good presentation quality to reflect the value of the equipment that it protects.
  • It must not have any adverse reaction with the equipment such as discolouration or adhesion.

If you do not want your equipment to be damaged during transit – whether it is military, medical, or especially delicate and valuable – then you need to invest in tough protective cases lined with foam.

What are the benefits of foam protection?

Foam separates the item to be protected from an external point of impact or from a collision with other items within the same case. If there is an impact this separation allows the item to decelerate harmlessly. The impact may last just hundredths of a second and show no evidence that it has happened because impact energy has been dissipated in the foam. During normal transport, the foam can protect against prolonged vibration which might otherwise loosen fastenings. This basic protection can be provided by soft fabric cases.

For higher protection values, foam is combined with ‘hard’ cases. In extreme impacts, the foam decelerates the equipment before it can contact the inside of the case. The foam recovers and can cope with repeated impacts. The case itself may show damage because it is dissipating energy that could have been transferred to the item under protection. This is how the design is intended to work, sacrificing the case to protect the valuable equipment within.

How does it work?

Recesses in the foam are shaped to support the equipment at its strong points, and selection of the right foam density is of crucial importance – too little support and the equipment may crash into the case. Foam density and rigidity are chosen to suit the mass and delicacy of the equipment.

The thicker the foam, the greater the travel available for deceleration and the lower the peak load on the equipment. However, foam thickness affects the overall size of the case, so there is a balanced judgement to be made and, at CP Cases, this is based on years of successful experience of protecting a great range of equipment.
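As an idealised illustration of that trade-off, assume the foam decelerates the equipment uniformly and absorbs all of its kinetic energy over the available crush distance $d$. Then

\[
\frac{1}{2}mv^2 = \bar{F}\,d
\quad\Longrightarrow\quad
\bar{F} = \frac{mv^2}{2d},
\]

so, to a first approximation, doubling the available foam travel halves the average load $\bar{F}$ on an item of mass $m$ arriving at speed $v$. Real foams compress nonlinearly, so this is a sketch of the principle rather than a design formula.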

When would you need custom-made foam protection?

The variety of opportunities for designing and producing bespoke foam protection is considerable:

Whether for impact protection or orderly transport, foam lends itself to a high-quality presentation that reflects the quality of the equipment and appropriate care. Combining different colours and laminating them provides many opportunities for sophisticated presentation of corporate logos, coded recesses, etc.

Equipment assurance

It is crucial for many professions to have all of the right equipment on hand at all times, whether you are a paramedic, soldier or builder. Protective foam cases provide a way to keep all the necessary tools, equipment and devices together, avoiding loss and damage.

Protection of delicate equipment

Protective cases with engineered foam cushioning are almost always necessary for delicate equipment. Foam engineering is essential to protect equipment during transit and storage, whether that is military equipment in a war zone, a laptop used in field research or a camera recording a sporting event.

Foam shapes are designed from direct measurement of the equipment, or from 3D or scan data supplied by its manufacturer. Additional details, such as finger cut-outs or clearances around delicate features, are added by the designer.

A professional display solution

As well as ensuring a high degree of protection for the contents, case inserts provide an attractive display solution. Various colour options or two-tone foam can be utilised to enhance the presentation or promote a corporate brand.

Logos and text can also be engraved into the foam, creating a unique, polished and original presentation. Your logo or text can be inlaid using foams in multiple colours to help enhance brand recognition and differentiate your product from your competitors.

The foam can be shaped by a variety of techniques including CNC routing, stamping, carving and sculpting. These can then be built up by lamination in various densities and colours.

Unusually shaped equipment

Custom-made foam protection is also essential to organise the contents of protective cases that hold tools of a non-standard shape. In these instances, the tools need to travel together but must not be loose, so a bespoke case design is required to keep each tool safely in place. In addition, it can hold a number of tools or components in an orderly manner so that they can be found quickly and efficiently. It will be apparent, at a glance, if any of the items are missing, and this can be enhanced by using a contrasting colour in the equipment recesses.

If you have any questions about foam protection or wish to discuss a bespoke foam design, contact CP Cases on 0208 568 1881 or by email at info@cpcases.com

Adoption Trends: Artificial Intelligence and Machine Learning

IABM Adoption Trends reports annually track the adoption of specific emerging technologies within the broadcast and media sector. The purpose of these reports is to enable member companies to better understand what is driving the adoption of emerging technologies within customer organizations, giving them more insight with which to address the challenges ahead, from new product development to marketing strategy. These reports contain a discussion of the state of adoption of a specific emerging technology in broadcast and media, as well as an analysis of significant customer deployments.

Highlights:

  • According to IABM’s Media Tech Business Tracker, about a quarter of broadcast and media companies have already deployed some sort of AI/ML technology as of 2021.
  • From the BaM Content Chain® perspective, end-users are most likely to deploy AI/ML in Manage- and Produce-related workflows.
  • The primary drivers of AI technology adoption are vitally important to the direct-to-consumer model.
  • Employing AI/ML technology in production enables broadcasters to reduce staffing requirements and thus save money.


Where should sports sit within an operator’s digital strategy?

Luke Gaydon
Business Development, Sports, Accedo


Where should sports sit within an operator’s digital strategy? An important question, not least because of the continued disruption to sports and sports viewing:

  • the impact of Covid is ongoing (matches cancelled, players unavailable);
  • the pay TV bundle – the ‘traditional’ model for sports viewing – is under attack from new market entrants (e.g. FANG, DAZN); and
  • there is a generation of younger viewers who have shown they aren’t interested in watching something that happens at a set date and time and lasts for 90 minutes, e.g. a football match.

We are starting to see light at the end of the Covid tunnel. Will the 2022 sporting calendar take place as scheduled, without the late or last-minute cancellations that have marred the 2021 calendar? Let's hope so. But what about our viewing habits? How have they changed over the last 18 months? We’ve all spent, and continue to spend, a lot more time at home. Our daily routine looks a lot different and flexibility is more important than ever. This works well with the ‘On Demand’ viewing model, so it’s no surprise that VOD streaming has flourished.

We are also more interested in different content formats. According to a recent PwC survey, the top three fastest-growing sports media content types are:

  1. Highlights
  2. Team/athlete-generated
  3. Originals/documentaries

All three of the above content types can be consumed on demand and are well suited to viewing on a mobile device. For the younger generation of sports fans, short-form content viewed on Instagram and Snapchat has become their preferred way of watching their favourite sport, team or, more likely, player.

In spite of the above, live sports remains the most popular, and therefore most valuable, content type on the TV screen. In the UK, the first race of the 2021 Formula 1 season, the Bahrain Grand Prix, set a record for Sky Sports viewing figures: it was watched by an average of 1.98m viewers and became the first race to peak at more than 2m viewers (it reached 2.23m). NFL games represented 41 of the 50 most-watched programmes on US television in 2020.

This enduring value is why live sports rights continue to be valued so highly by cable networks, big tech companies and the new breed of sports broadcaster, e.g. DAZN. As former AOL CEO Jonathan Miller put it recently, “The only thing left holding the (pay TV) bundle together today is sports...”.

So where does this leave operators when they consider whether and how to invest in sports?

Perhaps we should start by thinking about what the end customer wants. Well, clearly live sports, but also different types of sports content. Also flexibility and availability (e.g. cross-device). And what about privacy and/or security? In a recent YouGov survey, tech (9%) and social media (8%) companies came bottom of the list when consumers were asked which industries they would trust with their personal data.

What are the pain points for today’s consumer? Price, availability (i.e. where can I watch the sport/team I support) and, increasingly, fragmentation top the list. Price and fragmentation can go hand in hand. If you’re not a cost-conscious consumer then you may be happy to continue paying pay TV prices to watch your favourite sports. If you are cost-conscious, then you’ll shop around for the streaming service that provides what you’re looking for. The catch is that this a) may come in the form of more than one service and/or b) may tip your “digital wallet” into the red when added to the list of other services you already subscribe to (e.g. Netflix, Spotify, Disney+ etc.).

But fragmentation is an issue even if you’re not worried about the cost of subscribing to multiple services. Navigating between multiple services which have a different feature-set (service X doesn’t support picture in picture, service Y doesn’t support captions etc.) isn’t a great user experience.

In summary, consumers are looking for a simple, secure way to access their services. They want a great, consistent user experience. And they want sports content, whether it’s the big global brands like the NFL and the Premier League or national and local leagues and teams, live matches, catch-up clips or documentaries.

Operators are well placed to deliver on all of the above. They have a single billing relationship with their subscribers, often developed over several years. As a result, they are trusted, which delivers benefits in monetisation and product development. When it comes to the content, there are several options:

  • Act as the trusted aggregator, bringing together streaming and social media applications into one easy-to-navigate interface.
  • Form partnerships with the OTT players that best serve your subscriber base, e.g. Telecom Italia and DAZN (which acquired Serie A rights earlier this year).
  • License rights directly (or via a third party, e.g. an OTT streamer) for the sports that interest your subscriber base but are ill-served by the OTT players.

Sports content continues to deliver higher engagement, which should lead to happier customers and better ARPU. In spite of the increased competition for rights and eyeballs, operators can themselves continue to extract value from sports by focusing on their core business value and the benefits that it can bring to today’s sports fan.

ELEMENTS BOLT & BeeGFS set a new SPEC SFS performance high score

We are proud to announce that ELEMENTS has achieved the highest results in the SPEC SFS VDA storage performance benchmark. This has been done together with our new technology partner ThinkparQ and their BeeGFS file system, which is well established in High Performance Computing, AI and Deep Learning, and Science. After introducing this file system to the Media and Entertainment industry, it was vital that we proved its capabilities in video production workflows. In this article, we will explain how we have achieved the high score, examine and compare benchmark outcomes in detail and discuss what these results really mean.

Executive summary

The goal was to showcase our new partnering file system technology, BeeGFS, to underline its future potential for the Media & Entertainment industry and compare it to some well-established technologies in this segment. The Video Data Acquisition (VDA) workload of the SPEC SFS Benchmark is designed to simulate data acquisition from a volatile video source, measuring the number of video streams at roughly 36 Mbit/s each.

The VDA workload was executed on an ELEMENTS BOLT based storage environment running the BeeGFS file system. The test environment consisted of a comparable amount of hardware to those environments behind the three highest test scores. This was a conscious decision to enable a more meaningful and fair comparison of the performance. The summary of the ELEMENTS test results:

  • Highest stream count (highest throughput) – 11000 streams (50708 MB/s). This equates to a 14.58% higher result than the previous high score.
  • The highest stream count per storage device – average of 76.39 streams per NVMe device.
  • Best overall latency curve – the latency remains both low and stable throughout the entire test run.
  • Highest storage CPU efficiency – 23% more concurrent streams per CPU core than the highest scoring competitor.
  • Highest client CPU efficiency – 107% more concurrent streams per CPU core than the highest scoring competitor.
  • Highest RAM efficiency – 68% more concurrent streams per Gigabyte of RAM used than the highest scoring competitor.

The benchmark results were officially approved by the SPEC SFS committee and published on their website in September 2021.
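As a quick plausibility check, the sketch below recomputes some of these headline figures from the numbers quoted in this article (and nothing else):

```python
# Plausibility check using only figures quoted in this article.
streams = 11_000          # maximum sustained stream count
throughput_mb_s = 50_708  # total throughput in MB/s
nvme_devices = 144        # 12 x ELEMENTS BOLT, each half-populated with 12 drives

# Per-stream bitrate: MB/s -> Mbit/s (8 bits per byte)
print(f"~{throughput_mb_s * 8 / streams:.1f} Mbit/s per stream")  # ~36.9

# Streams per NVMe device
print(f"{streams / nvme_devices:.2f} streams per device")         # 76.39

# Implied previous high score, given a 14.58% improvement
print(f"previous high score ~{streams / 1.1458:.0f} streams")     # ~9600
```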

SPEC SFS performance benchmarking

The Standard Performance Evaluation Corporation (SPEC) is a non-profit corporation formed to establish, maintain and endorse standardised benchmarks and tools to evaluate performance and energy efficiency for the newest generation of computing systems. SPEC develops benchmark suites and reviews and publishes submitted results.

The SPEC SFS Video Data Acquisition (VDA) benchmark is a well-established benchmark workload, used to evaluate performance by measuring the maximum sustainable throughput that a storage solution can deliver. At the same time, the required response time is also being tracked. The benchmark runs on a group of workstations, also known as clients, and measures the performance of the storage solution that is providing files to the clients’ application layer. In the VDA benchmark, the workload simulates applications that store data acquired from a volatile video source. Therefore, the benchmark can best be described as an ingest workflow performance test. The business metric is the number of video streams, with each stream corresponding to a bitrate of roughly 36 Mbit/s.

Meet BeeGFS

BeeGFS is a parallel file system that started life as a research project within the German Fraunhofer Center for High Performance Computing, initiated to support performance-oriented use cases, including HPC, AI and Deep Learning. It has played a central role in impressive projects such as the first ever black hole visualisation in April 2019 and has formed an important part of the workflows of organisations such as NASA, Shell and the Max Planck Institute. It is also the file system of choice for a number of TOP500 supercomputers. This innovative file system achieves high performance by transparently striping user data across multiple storage nodes. Furthermore, it can distribute file system metadata across multiple metadata servers. In other words, BeeGFS is specifically designed for concurrent access and cluster applications, and delivers high robustness and performance under heavy or highly concurrent I/O loads. Its flexible architecture allows for “on the fly” scaling of capacity and performance from small clusters up to enterprise-class systems with thousands of nodes.
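To make the striping idea concrete, here is a minimal illustrative sketch (not BeeGFS code; the chunk size and placement policy are example values) of how a write can be split into fixed-size chunks placed round-robin across storage targets, so that many nodes can serve one file in parallel:

```python
# Illustrative round-robin striping in the spirit of a parallel file system.
# Not actual BeeGFS code; chunk size and placement policy are example values.
CHUNK_SIZE = 512 * 1024  # 512 KiB stripe size

def stripe(file_size: int, num_targets: int):
    """Map each chunk of a file to a storage target, round-robin."""
    placement = []
    for chunk_index, offset in enumerate(range(0, file_size, CHUNK_SIZE)):
        size = min(CHUNK_SIZE, file_size - offset)
        placement.append((chunk_index % num_targets, offset, size))
    return placement

# A 3 MiB file across 4 targets: chunks land on targets 0, 1, 2, 3, 0, 1
for target, offset, size in stripe(3 * 1024 * 1024, 4):
    print(f"target {target}: offset {offset}, {size} bytes")
```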

BeeGFS is designed for Ethernet from the ground up and offers the highest performance with native RDMA support and a native Linux client. Several other useful features, such as storage quotas and user data/metadata mirroring, are offered as well. One characteristic of BeeGFS that we find particularly useful is that it can support hybrid storage environments more easily than other solutions. It will enable us to build performant yet efficient environments by mixing different storage media, and to build futureproof hybrid cloud solutions.

BeeGFS and Cloud

Another area in which BeeGFS truly excels is cloud workflows. Besides allowing for an easy integration of cloud instances such as AWS and Microsoft Azure into a hybrid storage environment, this file system offers one awesome innovation.

BeeOND (BeeGFS On Demand) lets you spin up a file system on any number of machines with a single command. Such BeeOND instances provide a very fast and easy-to-use temporary buffer while at the same time keeping I/O load away from your primary storage cluster. In other words, it will soon be possible to flexibly expand the capacity and performance of your file system and enjoy all the benefits of cloud storage. As soon as your needs are met, the BeeOND file system instances can be removed just as quickly – a true, efficient cloud on-demand solution.
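As a rough outline of that lifecycle, a job script might do something like the following. The beeond command and flags reflect the BeeGFS documentation as we understand it, so verify them against your installed version:

```python
import subprocess

NODEFILE = "nodes.txt"   # one hostname per line: the machines to pool
STORAGE = "/local/nvme"  # fast local path contributed by each node
MOUNT = "/mnt/beeond"    # where the temporary file system appears

# Spin up a temporary BeeGFS instance across the listed nodes.
subprocess.run(
    ["beeond", "start", "-n", NODEFILE, "-d", STORAGE, "-c", MOUNT],
    check=True,
)

# ... run the I/O-heavy job against MOUNT, keeping load off primary storage ...

# Tear the instance down again once the job is done.
subprocess.run(["beeond", "stop", "-n", NODEFILE, "-L", "-d"], check=True)
```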

The performance of BeeOND was even tested by the Azure HPC team, demonstrating the first-ever one-terabyte-per-second cloud-based parallel file system: to be exact, 1.46 TB/s of read performance and 456 GB/s of write performance.

Our test setup

In a test in which performance is the only relevant metric, a typical approach to set the new high score would be to simply use more hardware. We, however, have decided to build an environment with a comparable amount of hardware to those environments behind the three highest test scores. This allows for a more meaningful and fair comparison of the test results.

Twelve units of the all-NVMe ELEMENTS BOLT are each only half populated with NVMe devices – 12 Micron 9300 devices per ELEMENTS BOLT (instead of 24) with a grand total of 144 NVMe devices used in the whole environment. Each ELEMENTS BOLT is running the BeeGFS file system.

Each storage node is connected via a 100Gbit link to a 100Gbit Mellanox switch. The same switch has 20 load-generating clients (ELEMENTS GATEWAY) connected via 50Gbit connections. Also connected via a 50Gbit connection is a prime client node (ELEMENTS WORKER), on which the benchmark application is running. For administrative and management access, all storage and client nodes are connected to a 1Gbit house network.

Who are the three best-performing competitors?

WekaIO, a US-based private company specialising in high-performance storage, uses the SPEC SFS benchmark to showcase WekaIO Matrix, its flash-native, parallel and distributed, scale-out file system. The file system runs on six Supermicro BigTwin chassis, each consisting of four nodes, populated with 138 Micron 9200 NVMe devices.

Quantum Corporation is a public data storage and management company, well known in the Media and Entertainment industry for several innovations, particularly the StorNext file system. Quantum is also a long-standing and highly valued technology partner of ELEMENTS. For the test, ten F1000 Storage Nodes with a total of 100 Micron 9300 NVMe devices were used on the Xcellis-based StorNext7 (v7.01) platform.

CeresData is a company based in Beijing that specialises in storage technology. Its test environment was built on its ten-node storage cluster, D-Fusion 5000 SOC, running the CeresData Prodigy OS.

Benchmark results

While using a comparable amount of hardware to the three best-performing competitors, ELEMENTS and BeeGFS achieved a higher number of streams, and thereby a higher throughput, than any other environment tested in the SPEC SFS VDA Benchmark. The new high score is 14.58% higher than the previous one, which translates to 6.25 Gigabytes per second. When comparing the maximum stream count to the number of storage devices used, similar results arise: the ELEMENTS environment displays the most efficient utilisation, with an average of 76.39 streams per NVMe device.

While the straightforward metric of maximum stream count is interesting and easy to digest, more valuable realisations can be made upon deeper inspection of the details.

CPU utilisation

Using a comparable amount of hardware as the other solutions allows us to examine how efficient the ELEMENTS BOLT running BeeGFS really is in delivering performance. One interesting aspect to analyse is of course the CPU utilisation. When broken down to its bare essence, the role of the CPU in a storage system is primarily one of data processing, making sure information is interpreted and the instructions executed in the shortest possible time. It is a key component, and its efficient utilisation can have a large impact on the overall performance. Modern CPUs can have a varying number of cores. For this reason, in chart number 4, we have set the maximum number of streams in relation to the overall number of CPU cores that the environment has employed. The ELEMENTS BOLT environment running BeeGFS has achieved the highest number of streams (throughput) per storage CPU core. This doesn’t come as a huge surprise when considering that BeeGFS was first developed for High-Performance Computing use cases.

A factor which is just as important for achieving the highest possible benchmark results is the performance of the clients running the test application. Chart number 5 shows that ELEMENTS has achieved the highest number of streams (throughput) per client CPU core. In other words, the impressive benchmark results are in no way inflated by the use of excessively powerful clients.

Latency measurements

Another important metric for media workflows is the latency. Latency refers to the time delay, usually measured in milliseconds, between the initial data request and its eventual delivery. In media workflows it is important for the latency to be both low and stable. In other words, upon pressing play, the video playback should start as fast as possible and always at roughly the same speed. Or in the case of the VDA workload, the ingested video is captured without missing any frames of the volatile video source.

Chart number 6 displays the measured latency during the test runs of all four environments. As the number of streams rises, it is expected that the latency will also rise; this is the result of an increased load on every component within the environment. The most desirable outcome is a steadily growing line as the stream count increases, with the overall change in the Y-axis being as low as possible.

It is reasonable to claim that the latency line of the ELEMENTS BOLT environment running BeeGFS depicts the best results on average of all four contenders, because the latency remains both low and stable throughout the whole test. The overall significantly higher latency of WekaIO and CeresData can possibly be explained by the different use cases that these companies are focused on, while the drastic latency increase during Quantum’s last test runs is most probably a consequence of the environment running at its performance limit. Other than that, the very low latency of its lower stream count runs highlights the strength of the iSER/RDMA-based block-level access of the StorNext SAN clients.

Memory (RAM) efficiency

Over the years, memory has developed into one of the most important components supporting storage performance. RAM is the fastest memory at a computer’s disposal and is used to hold data gathered from the system’s non-volatile storage devices (HDD, SSD, NVMe) for the CPU’s immediate access. Besides the caching processes of the operating system and running applications, memory is also used for caching file system metadata and database queries. By replacing a portion of metadata reads with reads from the cache, applications can remove the latency that arises from frequent accesses. This means that increasing memory can be an easy way to increase the overall performance of a storage solution.
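The caching effect described above is easy to sketch. In the toy example below (illustrative only, not file system code), repeated metadata lookups are served from an in-memory LRU cache instead of paying the storage latency each time:

```python
import time
from functools import lru_cache

def _read_metadata_from_storage(path: str) -> tuple:
    """Stand-in for a slow metadata read from a non-volatile device."""
    time.sleep(0.01)     # pretend this access costs ~10 ms
    return (path, 4096)  # (path, size) as a stand-in metadata record

@lru_cache(maxsize=65536)  # RAM-backed metadata cache
def stat(path: str) -> tuple:
    return _read_metadata_from_storage(path)

stat("/projects/clip001.mxf")  # first access pays the storage latency
stat("/projects/clip001.mxf")  # repeat access is served from RAM
print(stat.cache_info())       # CacheInfo(hits=1, misses=1, ...)
```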

An interesting fact to point out is that ELEMENTS & BeeGFS have achieved the best performance while at the same time using the lowest amount of total memory.

  • WekaIO: 11712 GB
  • CeresData: 10624 GB
  • Quantum: 3392 GB
  • ELEMENTS: 2976 GB

While using exorbitant amounts of memory is in no way against the rules of the benchmark, one might ask how efficient it is to use 11 Terabytes of RAM on a 342.73 Terabyte file system.

Chart number 7 illustrates just how memory-efficient these different environments truly are by displaying the maximum stream count per Gigabyte of RAM used. This is one more metric in which ELEMENTS BOLT and BeeGFS easily come out on top.

Conclusion

ELEMENTS & BeeGFS

BeeGFS is an innovative file system that is highly performant, very flexible and has proven itself in a number of performance-demanding use cases. Now, after extensive testing in the ELEMENTS ecosystem, we are very happy to introduce it to the Media and Entertainment landscape. This cooperation will allow our clients to enjoy extremely high-performance Ethernet, the option for easy, on the fly expansion and in the near future, revolutionary cloud integrations and on-demand workflows.

VDA Benchmark

The SPEC SFS VDA workload is a very well-designed test that allows its participants to give their all to showcase performance in a fair and comparable manner. As a particularly write-heavy workload, it simulates an ingest workflow most accurately. However, post-production workflows generally tend to be more read-centric due to real-time playback requirements. Currently, we are working on releasing a custom benchmark workload that can be used together with the new SPEC Storage Benchmark 2020 to gather metrics that match the requirements of video editing and playback more closely than a focus on ingest alone.

When it comes to benchmarking in general, using only the minimalistic metrics makes it easy to win by simply utilising more hardware. A standardisation of the hardware environment would allow for a more meaningful comparison of the solutions.

A second takeaway is that a much more detailed look at the benchmark particularities is needed to truly gain valuable insights. For instance, it is apparent that when the results are compared to the amount of hardware contained within the environment being tested, Quantum StorNext (overall third place) overtakes CeresData Prodigy (second place) in just about every metric but the maximum stream count.

For these reasons, when the next benchmark high score is announced, we would like to invite you to inspect the details rather than concentrate on the narrowly defined main metrics.

What do you think about the benchmark and its results? We would love to hear your opinion.

For more detailed information about the hardware, settings and test results visit the SPEC SFS website.

See the environment setup and full test results:
ELEMENTS BOLT with BeeGFS
Quantum StorNext7
CeresData Prodigy
WekaIO Matrix

RIST and SRT overview: what to choose and why

Vitaly Suturikhin
Head of Integration and Technical Support Department at Elecard


New times place new demands in terms of data transfer speeds and delivery reliability, while the amount of content being transferred keeps growing. When tasked with delivering high-definition video via the public internet, network and content providers inevitably encounter the following problems and requirements:

  • delivery is not guaranteed, and the video on the receiving side may have missing frames, be out of sync, or contain artifacts or frozen frames
  • many solutions have a high latency and cannot be used for live event broadcasting
  • a desire to be able to use a link of any bandwidth, or even several links
  • the content needs to be protected against theft
  • the implementation needs to be simple, while the protocol must be compatible with other hardware and software.

Many vendors, content providers, network providers, and broadcasters are at a loss: which protocols should they support and implement in their encoders, players, set-top boxes, and playout systems? Meanwhile, low-latency, guaranteed data delivery protocols such as RIST and SRT have gained a lot of popularity lately. But which of them should you choose?

What do RIST and SRT have in common?

Both protocols are designed for low-latency video delivery via public internet networks. SRT was originally developed by Haivision for use in their own encoders and decoders. It was released as an open real-time video delivery protocol in 2017. Note that Haivision is not only the developer of SRT and the founder of the SRT Alliance but also a member of the RIST Forum which is part of the Video Services Forum.

2017 was also the year when the development of RIST started. Many companies used various RIST implementations in their products, but their solutions were not mutually compatible.

RIST and SRT have the same encryption level and both support high-bitrate streaming and Forward Error Correction (SMPTE 2022-1). Both protocols support pre-shared keys up to 256 bits in length and automatic repeat requests (ARQ), can bypass firewalls, and allow tradeoffs between delivery reliability and latency.

Today, both protocols are implemented as open-source libraries, which helps accelerate and simplify the launch of broadcasting as well as avoid dependency on a specific vendor, as opposed to using proprietary solutions like Zixi.

SRT and RIST are present in many popular solutions and frameworks, such as AWS MediaConnect, Nimble Streamer, VLC, GStreamer, FFmpeg, and Wireshark (via plugins). The librist and libsrt libraries are available for all three major operating systems: Windows, Linux, and macOS.
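To give a concrete sense of how interchangeable the two protocols are at the tooling level, here is a sketch of pushing the same MPEG-TS file over either protocol with FFmpeg, driven from Python. It assumes an FFmpeg build compiled with both libsrt and librist, and the host and port are placeholders; URL options vary between builds, so check `ffmpeg -protocols` first:

```python
import subprocess

SOURCE = "input.ts"  # any MPEG-TS file

def send(url: str) -> None:
    """Stream SOURCE at its native rate without re-encoding."""
    subprocess.run(
        ["ffmpeg", "-re", "-i", SOURCE, "-c", "copy", "-f", "mpegts", url],
        check=True,
    )

# SRT in caller mode (a receiver must be listening on the port)
send("srt://203.0.113.10:9000?mode=caller")

# RIST to the same receiver host (requires a librist-enabled build)
send("rist://203.0.113.10:9000")
```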

What is the difference between the protocols?

SRT was originally developed by a single company based on UDT (UDP-based Data Transfer Protocol), a well-known and proven file transfer protocol. UDT is much faster than TCP and can be easily configured. Unlike file data, however, media data are much larger in volume and very susceptible to losses. SRT shows excellent performance at a low or medium packet loss ratio – say, no more than 10% to 12%. The primary aim of SRT was to replace the legacy RTMP protocol, which Amazon stopped supporting, while browsers dropped support for Flash plugins.

RIST was co-developed by a team of experts from different companies specializing in video content delivery (the Video Services Forum and a group of technical representatives from various media companies that would later form the RIST Forum). RIST is based on the RTP, RTCP, and SMPTE 2022 protocols (with IP transport) as well as several other Internet standards (RFCs). RIST was originally developed for transferring video content and incorporates much of the experience gained in developing earlier open and proprietary streaming protocols. RIST can recover up to 55% of sustained and up to 86% of short-term packet losses.

Even old players, transcoders, media servers, and analyzers can work with RIST at a basic level by accepting RTP; they do not, however, support SRT.

The approach to authorization is different with the two protocols. SRT uses only pre-shared keys (PSK), which provides an acceptable level of security but does not suit all broadcasters. RIST also uses PSK but can be supplemented with the SRP (Secure Remote Password) protocol for additional protection. In addition, RIST supports DTLS with certificate-based authorization, which is a fundamental requirement of most broadcasters.

For firewall bypass, SRT uses the concept of caller/listener handshaking without permanent rule configuration and also has a special rendezvous mode for that purpose. The principle is based on the connection monitoring function in firewalls. RIST, on the other hand, uses RTCP messages for bypassing firewalls.

The methods for lost packet retransmission also differ between the protocols. SRT is not always suitable for narrow-band internet links because it can congest the link with retransmitted packets in case of a high error rate, whereas RIST has the ability to reduce bandwidth consumption for such retransmission. ARQ is implemented in RIST using NACK only, whereas SRT uses both NACK and ACK to acknowledge receipt.
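The NACK mechanism at the heart of both protocols can be sketched in a few lines. In this deliberately simplified model (the real protocols batch, pace and rate-limit their requests), the receiver watches sequence numbers and asks the sender only for what it did not get:

```python
def find_gaps(received_seqs, highest_expected):
    """Return the sequence numbers to NACK: everything not yet received."""
    seen = set(received_seqs)
    return [s for s in range(highest_expected + 1) if s not in seen]

# Packets 3 and 7 went missing in transit...
arrived = [0, 1, 2, 4, 5, 6, 8]
print("NACK:", find_gaps(arrived, highest_expected=8))  # NACK: [3, 7]

# The receiver sends this list back and the sender retransmits only 3 and 7.
# (SRT additionally sends periodic ACKs to confirm what *has* arrived.)
```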

SRT only supports point-to-point mode, while RIST employs the point-to-multipoint approach, including multilink support and a multicast implementation. In contrast to SRT, which is based on an open-source library with a reference implementation from one specific company, RIST is based on open specifications developed with the participation of a group of companies. The librist project has active volunteers who contribute as testers and technical developers.

Why choose SRT?

With SRT, lost packets are retransmitted as soon as possible, meaning higher content quality and lower latency, unless the bandwidth is limited.

Today, SRT has already gained a certain level of market share and spurred an alliance of developer companies that support this protocol and use it in their solutions. SRT is an open-source project that has attracted a considerable community. Currently, the SRT Alliance has more than 450 member companies, including the recently joined AWS, OBS, and Sony.

SRT also works well for transmitting large volumes of data but suffers a sharp decline in efficiency or becomes totally inefficient at loss ratios of 15% and more, which is confirmed by various research studies.

Because SRT is still more common today than RIST, it is more effective in terms of compatibility with the environments you are likely to encounter. Unlike RIST, SRT is already present in popular solutions such as OBS Studio and Wowza.

The release of SRT v1.5 was planned for 2020 but has still not happened at the time of writing. In this release, the developers promise to implement bonding, C++11 support, and bi-directional metadata exchange as well as improve bandwidth estimation and multicast support.

I have already discussed SRT in detail in an earlier article.

Why choose RIST?

RIST supports IP multicast broadcasting, which enables considerable traffic and network resource savings. RIST makes it possible to broadcast several streams in parallel (multistream multiplexing), requiring only a single UDP port. Seamless switching without glitching is supported between stream copies transmitted over backup links based on the widely used SMPTE 2022-7 standard. On the receiving side, RIST combines several streams into one common stream (link aggregation/bonding).

Since RIST is based on RTP, the vast majority of devices that accept the RTP protocol can also work with RIST to some extent (except for the ability to handle packet retransmission and other killer features of RIST).

RIST has the ability to reduce traffic during packet retransmission to achieve stable broadcasting and eliminate traffic overhead by discarding null packets (padding/stuffing). RIST is optimized for transmitting high-bitrate video via RTP header extension, which allows the range of packet numbering to be expanded from 16 bits to 32 bits. RIST is also deemed to have better security because it supports both PSK (Pre-Shared Key) and DTLS certificate-based encryption which is considered more secure and used by the majority of banks. RIST can recover from losses of up to 25% with 100% overhead and up to 50% loss with 200% overhead. During testing at the Virtual NAB trade show in 2020, RIST was demonstrated to recover from an 86% burst loss with a successful delivery of all the packets (Fig. 1).

Fig. 1 – Successful recovery of all the packets with a burst loss of 86%
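The 16-to-32-bit sequence extension mentioned above matters at high bitrates because a 16-bit RTP sequence number wraps very quickly (at gigabit rates, in under a second). The sketch below shows the underlying idea, rollover bookkeeping that extends the 16-bit field into an effectively 32-bit sequence; it is illustrative only, as RIST carries the extra bits in an RTP header extension rather than inferring them like this:

```python
def make_unwrapper():
    """Extend 16-bit RTP sequence numbers into a 32-bit sequence."""
    state = {"last": None, "rollovers": 0}

    def unwrap(seq16: int) -> int:
        last = state["last"]
        # A large backwards jump means the 16-bit counter wrapped past 65535.
        if last is not None and seq16 < last and (last - seq16) > 0x8000:
            state["rollovers"] += 1
        state["last"] = seq16
        return (state["rollovers"] << 16) | seq16

    return unwrap

unwrap = make_unwrapper()
for s in (65534, 65535, 0, 1):  # wraps from 65535 back to 0
    print(unwrap(s))            # 65534, 65535, 65536, 65537
```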

A new Enhanced/Advanced profile is currently in development for the protocol. It can be expected to include improved bandwidth management, adaptive bitrate, lossless compression, optimized management of the created broadcast links, hybrid broadcasting support as implemented in HbbTV and ATSC 3.0, and other things (Fig. 2). The release of the Advanced Profile is already planned for 2021.

Fig. 2 – RIST Roadmap

Conclusion

Compatibility has always played an essential role in the video industry. To achieve it, various standards have been conceived and approved to bring different vendors together under a common infrastructure, whereas proprietary technology always has the potential to become a project bottleneck.

Producing content in a form that is accessible for all partners, customers, network providers, post-production houses, and viewers is a key requirement for any broadcaster. But as time goes by, broadcasters' demands increase, concerning not only compatibility but also in terms of usability, latency, bandwidth minimization while maintaining the ability to broadcast UHD content, broadcasting over lossy public networks, security, authorization, and ease of configuration and management. In response to these demands, new technologies and protocols are emerging, including the two that are being compared in this article. Both protocols are already widely used: at the time of writing, the SRT Alliance has more than 450 member companies while the RIST Forum has more than 130. However, it is anyone's guess as to who will capture the market in the medium- and long-term. Perhaps a time will come when SRT and RIST will be combined into a single protocol, because, despite the differences, they serve a similar purpose and are close to each other in their functional characteristics.

A comparison of SRT and RIST that summarizes all of the above is provided in the table below.

Functionality | SRT v1.4 | RIST (Main profile)
UDP-based | Yes (UDT) | Yes (RTP)
Created by | Single company | Group of companies
Point-to-multipoint broadcasting | No | Yes
Lost packet retransmission mechanism | Yes | Yes
Firewall bypass mechanism | Yes | Yes
Support for all codecs | Yes | Yes
Reference implementation (open-source library) | Yes | Yes
Removal of padding/stuffing (null packets) | No | Yes
Compatibility between different vendors' implementations | Yes | Yes
FEC support | Yes | Yes
Security/Encryption | PSK | DTLS or PSK
Backup | No | Yes (bonding and seamless switching per SMPTE 2022-7)
Authentication | PSK | Certificate-based or TLS-SRP
Upper loss threshold | 12% to 15% | 40% to 55%
High-bitrate broadcasting | Yes | Yes
Existing community | Yes | Yes
Compatibility with earlier standards | No | Yes
Connection multiplexing at a single port | Yes | Yes
Bandwidth saving during packet retransmission | No | Yes
Low latency | Yes | Yes
Low jitter | Yes | Yes
Wide market presence | Yes | Yes
Compatibility with legacy solutions | No | Yes
Native latency measurement function | Yes | Yes
Tunneling (GRE) | No | Yes

Here is a comment from Gijs Peskens, one of the main developers of the open-source RIST library librist, on the comparison between RIST and SRT:

“I think the biggest reason why I love the RIST protocol is because it's very simple. I would be able to sit down with someone and be able to explain the core simple profile protocol in less than half an hour, less if they have a good knowledge of video flows and networking I guess. From an operation side of things I think the prime reasons we went with RIST are that simple profile supports multicast (something SRT at the time did not, I'm not sure if it does at this moment), and it being backwards compatible with plain RTP receivers.

To dive a bit deeper into the protocol, RIST was designed from the get-go as a protocol for video transport, primarily MPEG-TS, and is based on existing technologies used in video transport/networking like RTP and GRE, and uses Automatic Repeat reQuests (ARQ) to signal packet loss to the sender.

About libRIST, which I help maintain, we just released our first stable release, and are laying the groundwork for 0.3, which will feature full-duplex communication, certificate-based access control and more”.

Author

Vitaly Suturikhin

Head of Integration and Technical Support Department at Elecard since 2015. Vitaly has over 15 years of experience in information technology. He is in charge of supporting the most important Elecard clients, such as MTS, Moscow Metro, Innet and ReadyTV. Vitaly was responsible for IPTV and DVB broadcasting at the FIFA Confederations Cup 2017 and FIFA World Cup 2018 in St. Petersburg.

How Bulb & Agama’s innovative solutions support A1 Hrvatska in its digital transformation


Over the past few years, customer habits and expectations have changed rapidly, and operators and service providers must now deliver the highest-quality content across multiple devices. To achieve customer satisfaction, video service operators and providers must collect huge amounts of user data in real time.

Then, to fully understand their users’ behaviour and the issues that frustrate them the most, they must have the tools and know-how to analyse this data and interpret it.

A1 HRVATSKA INTRODUCING A "NEW FORM OF LIFE"

When digital services and communications solutions provider A1 Hrvatska launched its “New Form of Life” campaign, everyone was puzzled. They kept asking: “What does this mean?” Behind this catchy slogan was an inspiring campaign based on the symbiosis of ‘man’ and ‘technology’.

A1 Hrvatska was quick to recognise the potential of the latest data-analytics technology. It wanted to take advantage of a solution that provides opportunities that seemed futuristic only twenty years ago but can now be rapidly and economically implemented.

It has always followed its customers' needs when it comes to developing its service to fit in with the latest lifestyle changes.

Looking to combine next-generation content with a unique user experience, its focus is on helping customers to really enjoy the benefits of their digital products and services. To achieve this, A1 Hrvatska assessed thousands of different options to find the best smart solutions that save time and are easy to navigate.

One of the services that needed to be included in this new troubleshooting plan was DTV (Digital Television). Prior to this project, A1 Hrvatska used a comprehensive Agama solution that helped it monitor network health, including the video head-ends, and assess the individual customer experience. The solution is broadly deployed on its entire network and is integrated on both IPTV and cable STBs, providing network assurance capabilities and distributed analyzers. A1 Hrvatska wanted to ensure seamless integration of the new project with this existing solution.

ABOUT A1 HRVATSKA

A1 Hrvatska, part of A1 Telekom Austria Group, employs about 2,000 people and takes care of the digital communication needs of 2 million customers on an everyday basis. It strives to improve its customers' digital experience with innovations and solutions. This led A1 Hrvatska to reinvent its customer service and empower its representatives with a state-of-the-art solution for fast diagnostics and guided troubleshooting.

THE CHALLENGE

Determined to reinvent standard customer support processes and transform them into new digital flows for both customers and customer support representatives, A1 Hrvatska had far-ranging requirements. These included:

  • Automation and simplification of standard troubleshooting
  • Troubleshooting flow enrichment with real-time metrics for all services
  • Empowering agents and end customers through guided flows, from issue diagnosis to resolution
  • Integration with existing IT systems
  • Flexible customer-care solutions
  • Improved customer satisfaction



THE SOLUTION

The goal of the project was to automate customer service, to provide agents with automated diagnostics and troubleshooting tools, and to empower end-users with an intelligent self-care tool. To accomplish this, A1 Hrvatska chose Bulb’s Cempresso Customer Care solution.

This is a new-concept software platform that includes automated background investigation and root-cause analysis, 360-degree service visibility, and a unique automatic-remedy and guided-support concept delivered via various channel interfaces.

One of its key features is that it enables agents to use artificial intelligence (AI)-driven suggestions to resolve issues fast and with a single tool.

As it wraps around existing IT systems and visualises data for customer service agents, it simplifies agents’ everyday tasks, making them easier to comprehend. This also makes it easy to deploy and harness the full potential of the Cempresso platform, which was instrumental in the selection of Bulb as a vendor for its implementation.

As already explained, to get real-time insights into the objective customer experience and where issues have occurred, A1 Hrvatska wanted a solution that could easily connect with the Agama API and enable easy access to DTV metrics and statistics. This data is crucial in the everyday troubleshooting process.
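While the article does not document the Agama API itself, this kind of integration typically reduces to an on-request metrics pull over HTTP. The sketch below is purely hypothetical: the endpoint path, field handling and bearer-token authentication are invented for illustration and are not Agama's actual API:

```python
import requests  # third-party HTTP client (pip install requests)

AGAMA_BASE = "https://agama.example.internal"  # hypothetical host
API_TOKEN = "..."                              # hypothetical credential

def fetch_stb_metrics(customer_id: str) -> dict:
    """Pull current DTV/STB QoE metrics for one customer (hypothetical endpoint)."""
    resp = requests.get(
        f"{AGAMA_BASE}/api/metrics/{customer_id}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()

# A care dashboard would call this when an agent opens the customer view,
# then render e.g. packet-loss or video-freeze counters next to the ticket.
```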

What makes this collaboration unique?

By combining these two powerful products, A1 Hrvatska’s customer care agents get clear insights into the DTV service. They can identify and solve some issues before the customer is even aware of them, and view clear guidance on solving any customer issues that are raised.

Prior to this, they were forced to interpret the data themselves and come up with possible corrective actions that might help.

THE RESULTS

  • The Cempresso Customer Care dashboard seamlessly connects to existing systems and prepares real-time gathered data for the agents’ usage. This simplifies the way agents view the current situation at a customer’s premises, as everything is only a click away on a user-friendly dashboard.
  • Providing real-time metrics and information regarding the DTV service also makes it easier for customer service agents to understand issues reported by end-users.
  • Cempresso Customer Care enables fast issue resolution through a step-by-step guided workflow that uses Agama’s on-request gathered KPIs and metrics.
  • With seamless integration of Agama and Cempresso, agents can view important data through a single screen, rather than switching between two standalone applications to solve each customer call complaint.
  • The agents are able to view statistics for different types of services, such as Live TV, VoD, timeshift, catch-up, and start-over.
  • Cempresso interacts with Agama’s Analyzers and extracts STB QoE in real time when customer service operators open the customer view dashboard.
  • This new solution allows simple ad-hoc fixing of customers’ issues and, where necessary, the operator can decide whether the customer needs an on-site technician due to installation problems.

CONCLUSION

Combining the Cempresso Customer Care platform with Agama’s client probes, A1 Hrvatska benefits from a solid integration that allows it to collect ad-hoc data sets in order to diagnose customer issues.

Furthermore, this solution is powered by two companies that have proven expertise in network analysis and monitoring, and in providing customer-care solutions.


Most importantly, A1 Hrvatska believes the integration between Bulb’s Cempresso Customer Care tool and Agama’s solution was one of the key factors to the success of its "New Form of Life" campaign.

ABOUT AGAMA

Agama Technologies specialises in empowering video operators’ business processes with awareness that can drastically lower operational costs and improve customer satisfaction. With extensive experience and an industry-leading solution for monitoring, assurance and analytics of video service quality and customer experience, Agama helps operators to implement a data-driven way of working to assure optimal service quality, improve operational efficiency and increase customer understanding.

For more information, please visit us at:

www.agama.tv

ABOUT BULB

Bulb Technologies is a software development company that has been supporting digital transformations in large companies for over a decade. Its software products automate operations departments and transform old ways of working (manual, error-prone, slow, etc.) into new, modern digital ones. Today, its clients include some of the leading service providers, including companies within Deutsche Telekom, Telekom Austria, and United Group. Bulb Technologies provides solutions for telecom service management, customer support process automation, and knowledge management.

For more information, please visit us at:

www.bulbtech.com

AMC Networks implements ‘many to many’ digital transformations

Josh Berger
Three Media

David Hunter
Three Media


The new content demands that emerged during the pandemic have underlined the validity of the US-based entertainment company’s powerful new media supply chain, which was designed by AMC Networks in partnership with leading consultancy Three Media and implemented with a number of best-of-breed vendors.

As the number of viewing options has increased, there has inevitably been a great deal of discussion about consumer habits and the need to ensure that everyone can watch their chosen content in the highest-possible quality. But to date, there hasn’t been nearly as much evaluation of the implications these changes are having for entertainment companies and their technology service providers.

All of which made the insight gleaned from a recent IABM TV interview with entertainment company AMC Networks (AMCX:NASDAQ) especially valuable. Available in full on the IABM website (https://theiabm.org/in-conversation-with-amc-networks/), the interview sees IABM Head of Membership Engagement Lisa Collins speaking to two key AMCN personnel – EVP Chief Technology Officer David Hunter and Senior Vice President, Media Operations Josh Berger – about the design and delivery of the company’s ambitious new “enterprise-wide media supply chain”.

Observing that the creation of “great content” is at the core of its business, Hunter says that in AMCN’s “global technology operations our goal is to make sure that we deliver that content to our passionate consumers wherever they are.” With new platforms and delivery mechanisms emerging all the time, the old ‘1 to many’ distribution model is no longer appropriate, hence “it has been up to us to shift our operations from ‘1 to many’ to a ‘many to many’ model.”

“We were facing rapid growth in non-standard platforms,” adds Berger, “and we realised we needed a different approach to the way we service our distribution points and viewers across our ecosystem. It used to be a TV-first business model,” but now the company is managing an ever-expanding number of platforms hosted on nearly every possible device that can play back video. “We had no choice but to create a media supply chain factory that encompasses every processing step.” This includes rights management, master media acquisition, scheduling, show preparation, and distribution onto platforms so that their subscribers and viewers are enjoying their content on the platform of their choosing.

With the awareness that increased automation would be integral to its new infrastructure, the AMCN team set about an intense period of process and systems review and design in close conjunction with Three Media. Serving as lead consultant for the new AMCN supply chain, Three Media is a long-established advocate of carefully planned and managed digital transformation, working on a wide variety of projects in the broadcast and production sectors.

The result of AMCN’s deliberations is a new enterprise-wide media supply chain, informally referred to as Platform ADAM (Advanced Digital Asset Management), developed to make every facet of the content process more efficient. The new infrastructure “fully digitalises our operations and connects across rights and standalone systems, [yielding] a seamless ‘comes in once, goes out many’ process, from which we have been able to gain tons of efficiencies,” says Hunter.

‘Data-driven’ digitalisation

The new AMC Networks supply chain foregrounds a data-driven approach, acknowledging that the effective use of data is integral to media’s digital transformation. Hunter recalls, “We worked with IBM on a service bus solution to connect all our systems with the data exchange, [meaning we no longer need] humans manually doing that in the digital silos.” The new infrastructure also includes best-of-breed solutions from Evertz (distribution playout), Avid (post-production), Xytech Systems (work order maintenance), WideOrbit Program, and Symbox, with the last-named providing “the logic and the conditions we need to orchestrate the sequencing of data workflows across our connected systems, so we do not stumble over ourselves.”
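To make the “data exchange instead of manual re-keying” idea concrete, here is a hypothetical sketch of one system announcing an asset event for downstream systems to consume. The topic name and payload fields are invented for illustration; AMCN’s actual IBM service-bus configuration is not described in the interview:

```python
import json
from datetime import datetime, timezone

def publish(topic: str, message: dict) -> None:
    """Stand-in for a service-bus client; real code would use a broker SDK."""
    print(f"[{topic}] {json.dumps(message)}")

# A hypothetical "master media arrived" event. Downstream systems
# (rights, scheduling, edit prep, playout) subscribe to it instead of
# having humans re-key the same data into each silo.
publish("media.asset.ingested", {
    "asset_id": "A-102934",  # invented identifier
    "title": "Example Episode 101",
    "received_at": datetime.now(timezone.utc).isoformat(),
    "next_steps": ["qc", "edit-prep", "captioning"],
})
```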

The transformation process has also seen AMC Networks collaborate with new and existing service providers to gain intelligence about their post-production processes and “really help reduce some of those manual workflows around edit preparation, break points, graphics, and profanity [which are precursors to] getting into edit,” says Berger. The AMCN teams embraced this new technology early on, with ADAM becoming a major contributing factor to the company’s ability to drive the business forward during the pandemic. “Remote editing has been another success, with everyone ‘remoting in’ from wherever they are located.”

AMC Networks no longer uses legacy production methods. “It’s a digital workflow only,” confirms Berger. As a result of the overhaul, “we have data feeding our systems [fulfilling our top goal] for the year of becoming a more data-driven company.” By connecting media supply chain data with its Business Intelligence tools, AMCN can begin to collaborate with internal departments such as Research, Strategic Planning, Marketing and Finance to support strategic decisions based on a broader set of information, connecting internal supply chain data with external data coming back to AMCN from its distribution landscape.

In keeping with the fast-moving nature of today’s media, the new supply chain is also enabling fresh elements of AMCN’s offerings as they come to market. For example, the company is currently making waves with its recently introduced streaming service, AMC+, which is designed as an audience-focused package of curated premium content.

“[Platform] ADAM is not finite,” confirms Hunter. “It will continue to grow and expand as the company does. We are always looking for innovative ways to get content into the hands of those who want to see it. It’s not set in stone and that is something we really like about the new architecture, where the data exchange, workflows and business processes enable content delivery as seamlessly as possible.”