IP monitoring in complex data transfers

Journal Article from Qvest

Mon 17 January 2022

Hartmut Opfermann

Senior Solutions Architect, Technology Consulting, Qvest


Digitalization has triggered a technical revolution among TV broadcasters too: with transfer rates averaging 100 gigabits per second, huge volumes of data now course through broadcasters' fibre-optic cables. But the move away from analogue technologies brings new challenges for live broadcasts: even minor disruptions in data transfer over IP networks can quickly damage a broadcaster's image - and cost money. IP monitoring puts programme providers in the picture.

Challenge with live broadcasts

When millions of people around the world were eagerly awaiting the first kick-off of the European Football Championship on 11 June 2021, probably only a few were aware of the technical developments behind today's razor-sharp live images. At the latest from the moment the ball starts rolling, the lines in broadcasting centres around the globe run hot. TV spectacles such as this major sporting event have long depended on a complex interplay of digital IT technologies. This is especially true for live transmissions in Ultra High Definition (UHD) quality, with transfer rates of ten gigabits per second or more. During a regular soccer match, several terabytes of data then course through broadcasters' lines - and the network infrastructure also has to keep pace with every "kick and rush".
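
A back-of-the-envelope calculation illustrates the scale (a hedged sketch in Python; the 10 Gbit/s figure is the per-feed rate mentioned above, and 90 minutes of regulation time is assumed):

    # Rough data volume of a single 10 Gbit/s UHD feed over a 90-minute match
    bitrate_gbps = 10              # assumed per-feed transfer rate
    match_seconds = 90 * 60        # regulation time only, no stoppage
    total_gigabits = bitrate_gbps * match_seconds
    total_terabytes = total_gigabits / 8 / 1000   # bits -> bytes -> terabytes
    print(f"{total_terabytes:.2f} TB per feed")   # about 6.75 TB

A single feed already approaches seven terabytes, and a production with multiple camera feeds multiplies that figure accordingly.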

Even minimal interference - let alone outright interruptions - in the real-time transmission of signals can have a fatal effect on a broadcaster's reputation. Worse still, an involuntary disruption, for example during a commercial break, can cost real money. Unimpaired transmission quality is particularly relevant for crowd-pullers such as major sporting events - before, during and after a soccer match alike. Moreover, the reliability of TV transmissions is just as important as the content; especially for live events, the transmission technology is the decisive factor in avoiding consequences that are both image-damaging and economically burdensome.

“Ghost match” during data transfer

A typical problem so far: the IP technologies adopted from the IT industry require monitoring of the state of data flows so that faults in the transmission of uncompressed live video signals can be quickly analysed and remedied. Unlike in the streaming environment, where lost data packets can usually simply be retransmitted without viewers noticing, packets in a real-time transmission run offside forever. Hardware defects too - in a switch, a cable or the laser of a fibre-optic interface, for example - can become a "ghost match" for broadcast IT administrators. This is where IP-based network topologies come into play: in the past, the broadcasting industry used SDI technology to transport a single unidirectional signal per SDI cable, whereas IP-based transmission carries multiple bidirectional data streams over a single cable. Among other things, this enables more camera feeds, higher resolutions, virtual reality functions and live production directly in the studio or at the venue.

IP monitoring: preventing wasted time with the right flow

IP monitoring solutions enable broadcasters to analyse flows in the wide area network (WAN) and thus improve troubleshooting. In practice, two different IP monitoring methods have become established: NetFlow and sFlow.

NetFlow is a technology originally developed by Cisco in which devices such as routers or layer-3 switches export information about the IP flows passing through them via UDP. It is well suited, for example, to billing IP traffic on Internet routers. The UDP datagrams are received, stored and processed by a NetFlow collector, and the accumulated information can be used for traffic analysis, capacity planning or analysis as part of quality-of-service strategies.
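
To make the export-and-collect pattern concrete, here is a minimal collector sketch in Python (the port and variable names are illustrative choices; the 24-byte header layout is that of NetFlow v5, and a production collector would of course also decode the flow records themselves):

    import socket
    import struct

    NETFLOW_PORT = 2055  # conventional collector port, exporter-configurable

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", NETFLOW_PORT))

    while True:
        data, addr = sock.recvfrom(8192)
        # NetFlow v5 header: version, record count, sysUptime,
        # epoch secs/nsecs, flow sequence, engine type/id, sampling interval
        (version, count, uptime, secs, nsecs,
         seq, etype, eid, sampling) = struct.unpack("!HHIIIIBBH", data[:24])
        if version != 5:
            continue  # this sketch only understands NetFlow v5
        print(f"{addr[0]}: {count} flow records, sequence number {seq}")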

As a counterpart, sFlow (Sampled Flow) has become established in recent years. This is a packet sampling protocol designed by InMon Corporation that has found wide acceptance in the networking industry. The decisive difference from NetFlow: NetFlow exports ready-made statistics, while sFlow exports sampled packet headers from which the statistics are generated externally.

sFlow can be embedded in any network device and provides continuous statistics on every protocol layer (L2, L3, L4 and up to L7), so that all traffic on a network can be accurately characterised and monitored. These statistics can be used for overload control as well as for troubleshooting, security monitoring or network planning. The advantage of sampling is that it reduces the amount of information that ultimately has to be processed and analysed, which keeps the load on both the CPU and the data line low.
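
The arithmetic behind this is simple: with a 1-in-N sampling rate, the collector scales the sampled counts back up to estimate the true traffic volume. The following Python sketch illustrates the principle (the numbers are invented for illustration and are not tied to any specific sFlow implementation):

    import random

    SAMPLING_RATE = 1000       # export roughly 1 in every 1000 packets
    TRUE_PACKETS = 5_000_000   # illustrative traffic volume on the wire

    # Exporter side: each packet is sampled with probability 1/N,
    # so only ~0.1% of the headers ever leave the switch.
    sampled = sum(1 for _ in range(TRUE_PACKETS)
                  if random.randrange(SAMPLING_RATE) == 0)

    # Collector side: scale the sample back up to estimate the total.
    estimate = sampled * SAMPLING_RATE
    error_pct = abs(estimate - TRUE_PACKETS) / TRUE_PACKETS * 100
    print(f"sampled {sampled} headers, estimated {estimate} packets "
          f"({error_pct:.1f}% off)")

The estimate is statistically sound for aggregate volumes, which is exactly the trade-off sFlow makes: a small, predictable processing load in exchange for sampled rather than exhaustive measurement.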
