A Brief Discussion on JPEG-XS Light Compression Remote Production

Chenghu Guo, Xin Jin (China Media Group)

Bo Wang, Xiaokun Dong (Beijing Cinctech Technology Co., Ltd)

Abstract:

Remote production has become an important direction of technological development in the broadcasting industry in recent years, and JPEG-XS can be regarded as a highly representative light compression technology. In recent remote production trials, JPEG-XS light compression has been tested over intranet, leased-line and SD-WAN public network environments. This article demonstrates the feasibility of combining JPEG-XS light compression with SD-WAN public network technology, using the CMG News Channel's 2024 live broadcast "New Archaeological Discoveries at Wuwangdun" as an example.

Keywords:

Remote Production, JPEG-XS, SD-WAN

In recent years, the broadcasting industry has increasingly adopted IT technology, IP-based systems have become more mature, and the advantages of IP have gradually become apparent. IP-based technical solutions not only solve the problem of signal loss during long-distance transmission, but also greatly simplify the tedious steps of embedding and de-embedding video and audio. IP-based technology also provides a more convenient platform for system customisation and expansion. By using IP-related technologies, we can quickly adapt and expand the system to meet today’s diverse content production needs.

In the development of IP-based technology, uncompressed technology has been CMG News Channel’s direction of technology development in recent years. Uncompressed technology means high-bandwidth signal transmission and signal processing that can ensure the best picture quality, which is particularly important when broadcasting to large screens in 4K or even 8K. However, ensuring picture quality also means that a huge amount of bandwidth is needed to transmit the signal, which greatly increases the pressure on the transmission network.

To strike a balance between picture quality and transmission bandwidth, shallow compression technology has begun to attract widespread attention. The essence of shallow compression technology is that it can significantly reduce the transmission bandwidth with a slight loss of picture quality, while keeping the encoding and decoding delay within an acceptable range. This technical feature is very suitable for some program production scenarios. JPEG-XS is one of the best shallow compression technologies.

In May 2024, CMG News Channel produced the program "New Archaeological Discoveries at Wuwangdun", the channel's first use of JPEG-XS shallow compression in live broadcast production. The following sections briefly describe how JPEG-XS light compression was combined with SD-WAN technology to achieve remote production, based on the production of this program.

Introduction to the technical characteristics of remote production

The features of JPEG-XS light compression technology that appeal to CMG News Channel are high image quality and low latency. JPEG-XS can achieve a latency of a few milliseconds, less than the time it takes to display a single frame. At the same time, through algorithm optimisation, JPEG-XS preserves image quality with virtually no visible loss and withstands multiple encode/decode generations. In addition, JPEG-XS supports the encoding and decoding of high-dynamic-range, wide-colour-gamut images such as HDR and BT.2020, making it well suited to the current 4K/8K technology development trend.

Another highlight of the JPEG-XS light compression technology is the flexible definition of the compression ratio. Depending on network bandwidth, codec device and content characteristics, users can set the compression ratio flexibly on the encoding device. For example, the latest generation of Grass Valley's LDX150 camera used in this archaeological broadcast supports direct output of JPEG-XS IP streams, with a compression ratio adjustable between 5:1 and 20:1. This program was produced in high definition, and the bit rate of each high definition (1080/50i) JPEG-XS signal ranged from 60 Mbps to 240 Mbps.
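As a back-of-envelope check (illustrative arithmetic only, not a vendor specification), these figures are consistent with dividing an assumed effective uncompressed rate of roughly 1.2 Gbps by the compression ratio:

```python
# Rough JPEG-XS bit-rate estimate. Assumption: an effective uncompressed
# baseline of ~1.2 Gbps for 1080/50i, which reproduces the 60-240 Mbps
# range quoted above for ratios between 20:1 and 5:1.
BASELINE_MBPS = 1200  # assumed effective uncompressed rate, in Mbps

def jpeg_xs_bitrate_mbps(ratio: float) -> float:
    """Estimated compressed bit rate for a given compression ratio."""
    return BASELINE_MBPS / ratio

for ratio in (5, 10, 16, 20):
    print(f"{ratio}:1 -> ~{jpeg_xs_bitrate_mbps(ratio):.0f} Mbps")
# 5:1 -> ~240 Mbps, 10:1 -> ~120 Mbps, 16:1 -> ~75 Mbps, 20:1 -> ~60 Mbps
```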

After using the JPEG-XS light compression technology, a new challenge became how to efficiently and economically transmit the lightly compressed signal over the network. In early tests, the lightly-compressed signal was mainly transmitted over bare fibre or a leased line. This transmission method essentially connects the front and back systems directly via fibre to form a relatively closed intranet environment. While this method is secure and reliable, it has significant drawbacks. It requires a large investment and is inflexible, as the fibre or leased line must be rented each time production starts, and not all sites have the conditions for a leased line. This transmission solution does not really deliver cost reductions and efficiency gains due to its high demand on network resources and high cost. Only by solving the transmission problem can the convenience of remote production be truly realised.

In contrast, SD-WAN networks offer a more economical and flexible solution. SD-WAN networks use existing public network resources to achieve efficient and cost-effective transmission of lightly compressed signals through intelligent routing and traffic optimisation technologies. This approach not only reduces transmission costs, but also improves network reliability and flexibility to better support remote production.

Introduction to the JPEG-XS Remote Production System

The JPEG-XS remote production system can be divided into two parts: the front system and the rear system.

Front system

The front system is housed in a compact 14U flight case and can meet basic production needs. It uses Grass Valley LDX150 cameras with an HPE300 power supply that can power three cameras in 2U of space. The LDX150 camera has built-in XS codec functionality and can output JPEG-XS IP streams directly from the camera head. At the same time, audio from the camera head's microphone, intercom, tally and OCP control signals are all connected to the front core switch via the service optical port on the camera head, and then to the rear system via the SD-WAN public network link, providing full camera-channel functionality. With these high-density IP-based cameras, the system can be quickly expanded to meet multi-channel production needs.

The flight case is equipped with a core switch and a management switch. Because the front system contains only a small number of devices and the core switch itself is highly stable (it has dual power supplies), and taking into account construction cost and system security, we decided to split different VLANs on a single core switch to achieve a logical main/backup separation for service data. The Grass Valley LDX150 camera has two 25G optical ports and supports the SMPTE ST 2022-7 main/backup redundancy mechanism. We therefore assigned the camera's main service port to VLAN 20 and its backup service port to VLAN 30, isolating the two service networks within one core switch and achieving the effect of ST 2022-7 redundancy.
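To make the dual-VLAN arrangement concrete, here is a minimal sketch of the kind of address plan involved; all VLAN IDs, port names and multicast groups are hypothetical, not the production values:

```python
# Illustrative main/backup plan for SMPTE ST 2022-7 style redundancy on a
# single core switch. Every value below is hypothetical.
CAMERA_PORTS = {
    "cam1_main":   {"vlan": 20, "mcast_group": "239.20.0.1"},  # main 25G port
    "cam1_backup": {"vlan": 30, "mcast_group": "239.30.0.1"},  # backup 25G port
}

def check_path_separation(ports: dict) -> None:
    """ST 2022-7 relies on the two streams taking separate paths; here we
    at least verify that main and backup sit in different VLANs."""
    vlans = {p["vlan"] for p in ports.values()}
    assert len(vlans) >= 2, "main and backup must be in separate VLANs"

check_path_separation(CAMERA_PORTS)
```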

The front system is also equipped with a synchroniser to ensure that all equipment in the front is under a unified synchronisation benchmark. In addition, the flight case is equipped with a talkback panel and a four-wire converter to allow internal communication in the front and two-way communication between the rear system and the front system.

Rear system

The core of the rear system is the HD baseband system in Studio 14 in the Fuxinglu office area. To interface with the JPEG-XS shallow-compressed IP stream, a core switch, network gateway, XS codec cards and other equipment were added to the rear production system to convert the JPEG-XS IP stream into baseband signals. The JPEG-XS signal captured at the front is decoded into an SDI signal by the XS decoder card and then fed into the baseband system for production. Meanwhile, the PGM signal from the baseband system is encoded into a JPEG-XS signal via the network card and the XS encoder card, and sent back to the front as the return signal.

The OCP control panel and CCS ONE camera control unit of the rear system are connected to the management switch, which in turn is connected to the core switch at the rear to allow remote control of the camera and tally functions.

Some points to consider when using remote production

Synchronisation issues

The program 'New Archaeological Discoveries at Wuwangdun', produced by CMG News Channel, used the following network architecture during the live broadcast: front corporate network – front SD-WAN device – public network – rear SD-WAN device – rear corporate network. In an IP-based system, the first task is to solve the PTP (Precision Time Protocol) synchronisation problem. Technical tests have shown that, under current conditions, PTP signals cannot be carried over SD-WAN, which means the front and rear systems may be running on different synchronisation references. The simplest and most effective solution is for the front and rear systems to lock their time via GPS. If GPS locking is not available, the front and rear systems will run asynchronously, which requires the rear decoder to be able to decode asynchronous signals and re-reference them to the rear system's synchronisation benchmark during decoding.

Service signal problems

Once synchronisation is resolved, the next step is to debug the service signals (i.e. the audio and video signals). In an IP system, service signals are usually sent as multicast. For the rear system to receive a signal from the front system, the rear IP device must send an IGMP (Internet Group Management Protocol) request towards the front. This process also involves unicast transmission, so the unicast and multicast addresses of all IP devices in the front and rear systems must be planned in advance and correctly configured in the SD-WAN devices. The software processing capability of the SD-WAN is used to enable effective unicast and multicast communication between the front and rear systems. During configuration, the communication direction (front-to-rear or rear-to-front) must be specified; an incorrect direction will result in communication failure.
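For illustration, the sketch below shows the socket-level operation behind such a join: binding to a multicast group causes the operating system to emit the IGMP membership report described above. The group address and port are placeholders, not the production plan:

```python
# Minimal multicast receiver: joining the group triggers an IGMP
# membership report toward the network. Addresses are placeholders.
import socket
import struct

GROUP, PORT = "239.20.0.1", 5004  # hypothetical video multicast group

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# IP_ADD_MEMBERSHIP makes the OS send the IGMP join for this group.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

packet, sender = sock.recvfrom(2048)  # first RTP packet from the front system
```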

It is also necessary to ensure that the management signals between the front and rear are unobstructed. Management signals mainly include the camera’s iris control and the tally signal. In the Grass Valley camera system, the CCS ONE camera control unit is the core of the management signal routing. Due to the public network environment, a separate public network licence must be purchased for the CCS ONE camera control unit to ensure stable transmission of the iris control and tally signals over the public network.

Communication issues

Intercom is a very important system in studio production. In a remote production, where the director's team and camera crew are in different locations, smooth communication is particularly important. Relying solely on traditional methods such as mobile phone calls risks being limited by factors such as network stability and battery life, making it impossible to guarantee the security and reliability of the call. For this live broadcast, we therefore used IP intercom technology to give the remote production team the same communication experience as a traditional studio production.

Through professional talkback panels, the director's team at the rear can connect seamlessly to the talkback panel at the front, the camera operators and the presenters' wireless earpieces, providing full communication between front and rear. Our studio system uses Telos Infinity series IP talkback panels. The panels in the front system are connected via IP to form an internal communication system with the front camera operators' and presenters' in-ear monitors. The front talkback panel is connected to the director's panel at the rear via the SD-WAN network, ensuring that the director can communicate clearly with everyone at the front in real time, greatly improving the efficiency of remote collaboration and the quality of communication.

Camera position matching

For some programs, it may not be appropriate to shoot with channel cameras alone. In remote production, therefore, channel cameras are often used in conjunction with roaming cameras. While the channel cameras use JPEG-XS light compression encoding, a roaming camera typically returns its signal over a 5G backpack using SRT encoding. The difference between the two encoding technologies inevitably causes different levels of signal delay and different picture quality, so balancing the division of labour between the camera types needs careful consideration.

After repeated testing and exploration, we found that for large spatial scenes within a program, the roaming camera with a 5G backpack is more flexible, enabling efficient shooting in complex environments and greatly extending scene coverage and the immediacy of dynamic capture. Where fine image detail must be presented, the channel camera has the advantage of JPEG-XS shallow compression. With its low latency and high-fidelity image processing, JPEG-XS effectively balances image quality against bandwidth, which is critical for capturing and preserving subtle visual elements in the scene. The high resolution and colour reproduction of the channel camera, combined with the benefits of JPEG-XS, significantly improve the clarity and expressiveness of the program's image detail. At the same time, the low latency provided by JPEG-XS light compression allows the director to switch between camera positions with ease and confidence.

JPEG-XS Technical Index Test

After the broadcast of “New Archaeological Discoveries at Wuwangdun”, we tested some key indicators of JPEG-XS technology.

Image quality

We tested image quality at different compression ratios according to network bandwidth conditions. At a compression ratio of 20:1, there is a clear loss of image quality on the monitor, as evidenced by an increase in noise.

We asked the Radio and Television Planning Institute of the State Administration of Radio and Television to conduct a comprehensive test of JPEG-XS compression encoding. The test covered scenes in three common video formats: 1920×1080/50i, 1920×1080/50p and 3840×2160/50p. By selecting distinctive scenes of different types, we conducted a detailed comparative test of the loss of picture quality at different compression ratios.

For the 1920×1080/50p and 1920×1080/50i tests, we focused on the technical indicator PQR (Perceptual Quality Rating). PQR converts the perceptual difference between the tested video and the reference video into a score that represents the viewer's ability to 'notice' the difference. The test results therefore reflect picture quality as perceived by the viewer. The relationship between PQR and picture quality is roughly as follows:

PQR = 0: the reference and test images are identical.

PQR <= 1: the viewer cannot tell the difference between the reference and test images.

2 < PQR < 4: the viewer can tell the difference between the reference and test videos. This range is typical of high-bandwidth, high-quality broadcast MPEG encoders and is generally considered excellent to high picture quality.

5 < PQR < 9: the viewer can easily tell the difference between the test video and the reference video. This is often the result of a low-bitrate MPEG encoder in consumer video equipment and is generally considered good to fair picture quality.

PQR > 10: the difference between the tested video and the reference video is large, and picture quality is considered poor.
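Expressed as a simple lookup (note that the published bands leave small gaps, for example between 1 and 2; the sketch below reports those as borderline rather than forcing them into a band):

```python
def pqr_category(pqr: float) -> str:
    """Map a PQR_Y score to the quality bands quoted above."""
    if pqr == 0:
        return "identical to reference"
    if pqr <= 1:
        return "visually lossless"
    if 2 < pqr < 4:
        return "excellent to high quality (broadcast-grade encoder)"
    if 5 < pqr < 9:
        return "good to fair quality (consumer-grade encoder)"
    if pqr > 10:
        return "poor quality"
    return "borderline between published bands"

print(pqr_category(3.1))  # e.g. the 10:1 'Flowerbed' result -> excellent to high
```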

In the 1080/50i scene, we compared the material recorded by a commonly used studio video recorder with the material encoded with the JPEG-XS codec. The results are as follows:

 

| Item | Image | Recorder, DNxHD 120 Mbps (PQR_Y) | JPEG-XS 10:1, 120 Mbps (PQR_Y) |
|------|---------------|------|------|
| 1 | Flowerbed | 1.9 | 3.1 |
| 2 | Turntable | 0.8 | 0.9 |
| 3 | Basketball | 1.3 | 1.9 |
| 4 | Leaves | 1.5 | 1.8 |
| 5 | Birdcage | 1.3 | 1.7 |
| 6 | Studio | 1.7 | 2.2 |
| 7 | Beijing opera | 1.4 | 1.9 |
| 8 | Volleyball | 1.0 | 1.4 |

 

As can be seen from the table above, in the 1920×1080/50i scene, the JPEG-XS codec with a compression ratio of 10:1 is slightly inferior to the VTR, but still in the range of high quality codecs (PQR<4).

In the 1920×1080/50p scene, we tested two different JPEG-XS compression ratios and obtained the following results:

| Item | Image | JPEG-XS 20:1, 115 Mbps (PQR_Y) | JPEG-XS 10:1, 230 Mbps (PQR_Y) |
|------|------------------|------|------|
| 1 | Beijing opera | 2.3 | 0.7 |
| 2 | Complexion | 0.7 | 0.3 |
| 3 | Night view | 1.9 | 0.5 |
| 4 | Bamboo leaves | 3.2 | 1.3 |
| 5 | Folk dance | 1.5 | 0.5 |
| 6 | Sports | 1.4 | 0.5 |
| 7 | Play in the park | 4.2 | 1.8 |
| 8 | Blooming flowers | 3.7 | 1.4 |

 

As can be seen from the table above, the 10:1 compression ratio performs much better than 20:1: its PQR value is close to 1 in most scenes, within the visually lossless category, while 20:1 still falls broadly within the range of high-quality codecs.

For the 3840×2160/50p test, we used two indicators for evaluation, SSIM (Structural Similarity) and VMAF (Video Multimethod Assessment Fusion):

SSIM is an indicator that measures the structural similarity between images and is often used to assess the similarity between before and after image distortion. The value of the SSIM indicator ranges from 0 to 1, and the closer the value is to 1, the closer the compressed or processed image is to the original image.

VMAF combines three indicators: Visual Information Fidelity (VIF), Detail Loss Metric (DLM) and Temporal Information (TI). It generates a score for each frame and uses an averaging algorithm to calculate the final video score. A score of 95 or above means the difference is extremely difficult to see with the naked eye; 93-95 means subtle differences can be seen but are perfectly acceptable; below 91, the difference is usually more obvious.
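Both metrics are available in common open-source tools. A minimal sketch, assuming scikit-image for SSIM and an FFmpeg build with the libvmaf filter for VMAF (file names are placeholders):

```python
# SSIM on one decoded frame pair via scikit-image, plus the usual FFmpeg
# invocation for sequence-level VMAF. File names are placeholders.
import subprocess
import numpy as np
from skimage.metrics import structural_similarity

ref = np.load("reference_frame.npy")   # H x W x 3, 8-bit reference frame
test = np.load("encoded_frame.npy")    # same frame after a JPEG-XS round trip

score = structural_similarity(ref, test, channel_axis=2, data_range=255)
print(f"SSIM: {score:.6f}")  # 1.0 means identical

# VMAF over whole sequences (distorted input first, reference second):
subprocess.run([
    "ffmpeg", "-i", "encoded.mp4", "-i", "reference.mp4",
    "-lavfi", "libvmaf", "-f", "null", "-",
])
```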

The test results show that image quality is excellent at all compression ratios.

 

| Item | Image | JPEG-XS 6:1, 1480 Mbps (SSIM / VMAF) | JPEG-XS 10:1, 890 Mbps (SSIM / VMAF) | JPEG-XS 16:1, 556 Mbps (SSIM / VMAF) | JPEG-XS 20:1, 445 Mbps (SSIM / VMAF) |
|------|------------------|------|------|------|------|
| 1 | Beijing opera | 0.988045 / 100 | 0.978799 / 100 | 0.968025 / 100 | 0.96114 / 99.99895 |
| 2 | Complexion | 0.994263 / 100 | 0.991585 / 100 | 0.98716 / 100 | 0.984024 / 100 |
| 3 | Night view | 0.989941 / 100 | 0.979466 / 100 | 0.9677 / 100 | 0.961396 / 100 |
| 4 | Bamboo leaves | 0.991173 / 100 | 0.981627 / 100 | 0.967106 / 100 | 0.958404 / 100 |
| 5 | Folk dance | 0.968585 / 100 | 0.933169 / 100 | 0.908078 / 100 | 0.897538 / 100 |
| 6 | Sports | 0.989136 / 100 | 0.973964 / 100 | 0.956193 / 100 | 0.947777 / 100 |
| 7 | Play in the park | 0.982455 / 100 | 0.96611 / 100 | 0.942407 / 100 | 0.925748 / 99.99934 |
| 8 | Blooming flowers | 0.976105 / 100 | 0.959082 / 100 | 0.933238 / 100 | 0.917215 / 100 |

 

Delay

The Wuwangdun archaeological site in Huainan, Anhui Province, is almost 1,000 kilometres away from the CCTV Fuxinglu office, yet the transmission delay is only about 60-80 milliseconds. This means that after the camera footage is decoded into a baseband signal by the back-end system, the delay is less than two frames. In addition, we also tested the relative delay between the two cameras and found that there was almost no delay difference between them, which ensured that the director could smoothly switch between multiple camera positions using JPEG-XS technology, avoiding alignment problems in the program flow.
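As a quick check of the 'less than two frames' claim (a sketch; one 1080/50i frame of two fields lasts 40 ms):

```python
# Convert measured end-to-end latency into 1080/50i frames,
# where one full frame (two fields) lasts 1/25 s = 40 ms.
FRAME_MS = 1000 / 25  # 40 ms per interlaced frame

for latency_ms in (60, 80):
    print(f"{latency_ms} ms -> {latency_ms / FRAME_MS:.1f} frames")
# 60 ms -> 1.5 frames, 80 ms -> 2.0 frames
```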

The above test results confirm that the JPEG-XS shallow compression technology has the characteristics of ultra-low latency, and the delay between each camera position is basically the same. In contrast, the delay using 5G backpack transmission usually reaches about 3 seconds, and the delay between multiple backpacks is not fixed, which requires the production team to plan and design in advance during commissioning.

The use of JPEG-XS shallow compression allows the director at the rear to see the camera feeds in near real time and accurately hit every key switching point, providing strong technical support for producing tighter, more compelling TV programs.

Transmission Bandwidth

When the compression ratio is set to 10:1, the actual bandwidth requirement of an HD signal is about 120 Mbps. Testing showed that if the bandwidth provided by the operator falls below about 150 Mbps, black frames occasionally appear in the signal. Because the public network environment can be unstable, we still need to reserve more than 40% redundant bandwidth to ensure stable signal transmission.
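The 40% figure translates directly into a provisioning rule (values taken from the measurements above):

```python
# Bandwidth to provision for one HD JPEG-XS feed at 10:1, applying the
# >=40% headroom recommended above for public-network jitter.
SIGNAL_MBPS = 120   # measured requirement at a 10:1 compression ratio
HEADROOM = 0.40     # redundancy reserved against instability

provisioned = SIGNAL_MBPS * (1 + HEADROOM)
print(f"provision at least {provisioned:.0f} Mbps per feed")  # ~168 Mbps
```

This is also consistent with the observation that links below roughly 150 Mbps produced occasional dropouts.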

Conclusion

This archaeological broadcast at Wuwangdun was an important practical application of remote production technology. The live broadcast used a hybrid mode of channel cameras with JPEG-XS technology and portable cameras with 5G backpacks, making full use of the advantages of both types of equipment. Flexibility, high image quality and low latency were presented to the audience through the director's precise scheduling, making viewers feel as if they were on site and letting them experience the unique charm of the Wuwangdun relics.

Of course, JPEG-XS light compression technology still has many aspects that require further in-depth research, such as how to more effectively reduce the impact of network jitter on signal transmission, and how much redundant bandwidth should be reserved in the public SD-WAN network to ensure transmission stability. These issues require our continued exploration and research in the field.

This live broadcast not only proved the feasibility of this technical solution, but also met broadcasters' current need to reduce costs and increase efficiency. The combination of JPEG-XS shallow compression and SD-WAN networking is a useful exploration in the field of remote production. We believe that in the near future remote production will be more widely used, its advantages of low cost and high quality will become increasingly apparent, and it will promote the lightweight transformation of traditional broadcast television systems. We will, as always, follow the technological development of remote production, boldly apply it in program production, fully release the production potential of existing facilities, and contribute to the transformation of CMG's content production methods.

 

Yospace – Monetising Euro 2024 with dynamic ad insertion

Paul Davies, Head of Marketing, Yospace

The 2024 edition of the UEFA European Football Championship was one of the most streamed sporting events of all time. The month-long pan-European tournament attracted a global audience. The spectacle of nations pitted against each other in a series of make or break knockout matches had viewers on the edge of their seats.

Yospace monetized the event with dynamic ad insertion across four continents, the data from which provided a unique global view of streaming trends and their impact on advertising as each moment of drama unfolded.

In this article, I’m presenting two matches from the tournament that demonstrate some of the biggest challenges in streaming and monetizing major events using addressable advertising. They highlight some of the pain points to prepare for if you’re running an advertising-supported live streaming business.

With streaming set to become the primary method of watching and monetizing live television, the scale that rights-holders need to support is only going to grow from here. Hopefully, this article will provide a valuable guide to the digital advertising challenges that lie ahead.

Quarter Final: Portugal 0-0 France (France win on penalties)

The match: With footballing superstars Kylian Mbappe and Cristiano Ronaldo facing each other, there was a lot of hype for this quarter-final knockout tie. Unfortunately, it was a disappointingly dull spectacle as France struggled to find the form that had earned them the billing of tournament favorites. There were no goals during the 90 minutes of normal time, nor during the 30 minutes of extra time. A penalty shoot-out added some late drama to what was otherwise a very forgettable game.

Viewers: The audience built slowly from the start and grew during each successive period of the match (first half, second half, extra time, penalties) as the prospect of a single goal became more decisive to the outcome. Viewer numbers increased sharply during extra time and reached a crescendo for the end of the penalty shoot-out.

Advertising: The slow build of viewers during this game meant that the ad views became progressively higher for each break. The peak audience came during the penalty shoot-out and the ad break right before was the most popular of the match, attracting 42% more viewers than the previous ad break. In a win for broadcasters, this ad break was unscheduled – it took place beyond the usual set of ad breaks you’d see in a 90-minute football match – so it resulted in extra ad revenues on top of what had been planned for.

Unscheduled breaks pose questions for the adtech. With audiences rising at a rapid rate, the dynamic ad insertion system had to ensure many millions of addressable ad requests were processed in sufficient time to deliver maximum fill rates. Otherwise, the ad server may not have had time to process all the requests, resulting in timeouts and blank slates. The rights-holder would have missed out on the most valuable ad revenue opportunity of the match.
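By way of illustration (a sketch of the general pattern, not Yospace's implementation), the defensive logic is a hard per-request timeout with a slate fallback, so that a slow ad server during a traffic surge degrades gracefully rather than blanking the break; fetch_ad() below is a stand-in for the real ad-server call:

```python
# Toy model of burst ad-request handling: each request gets a strict
# time budget and falls back to a default slate on timeout.
import asyncio

AD_TIMEOUT_S = 0.5            # hypothetical per-request budget
DEFAULT_SLATE = "slate.mp4"   # fallback creative: the break is never empty

async def fetch_ad(session_id: int) -> str:
    await asyncio.sleep(0.1)  # stand-in for the real ad-server round trip
    return f"ad_for_session_{session_id}.mp4"

async def resolve_ad(session_id: int) -> str:
    try:
        return await asyncio.wait_for(fetch_ad(session_id), AD_TIMEOUT_S)
    except asyncio.TimeoutError:
        return DEFAULT_SLATE

async def main() -> None:
    # An unscheduled break means a sudden burst of concurrent requests:
    ads = await asyncio.gather(*(resolve_ad(i) for i in range(10_000)))
    print(sum(a != DEFAULT_SLATE for a in ads), "personalised ads filled")

asyncio.run(main())
```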

Quarter Final: Spain 2-1 Germany (after extra time)

The match: Germany, the tournament hosts, were on the verge of going out when Florian Wirtz’s wonder goal levelled the match in the 89th minute and took the game into extra time. With the game destined for a dreaded penalty shoot-out, Spain recovered to score the winner in the very last moments of the extra time.

Viewers: The CTV audience grew steadily throughout the match as the drama unfolded. Mobile was more changeable, with sudden surges in viewership coming after each goal as casual fans switched on whenever they heard of a big moment. Mobile traffic peaked with Germany's last-minute equalizer.

Advertising: The ad break at the final whistle of normal time suddenly increased in value as it came moments after Germany’s last-gasp goal to keep them in the game. The goal drove the audience for the ad break up 33% compared to other games in the knockout stages of the competition.

This increase is in stark contrast to matches that were decided during normal play, without dramatic endings, which typically saw a 25% decrease in ad views at full time compared with half time. A game that seemed destined to end at 90 minutes with a lower value ad break suddenly burst into life and delivered the biggest uplift for an ad break in the whole tournament.

To effectively monetize major sporting events, dynamic ad insertion systems must be able to cope with scale (often across multiple geographies) and extreme fluctuations in viewing habits.

Moreover, because dynamic ad insertion sits at the intersection of streaming tech and adtech, if well architected, it can play a critical role in managing and optimizing the performance of the ad server.

As the TV industry moves towards an all-IP future, the scale we saw during Euro 2024 is set to increase substantially. We will see even greater fluctuations in traffic as matches on a knife-edge are streamed by the vast majority of viewers.

Applying a dynamic ad insertion strategy that delivers maximum value through addressability, while ensuring the best viewer experience during the most thrilling moments, is vital to future success.

Vizrt – Innovating Sharp HealthCare’s Communications and Conferences with Fluid Sound and Vizrt

With more than 19,000 employees across California, including its corporate teams and home-based employees, as well as over 2,700 affiliated physicians in clinics and hospitals, Sharp HealthCare wanted to revitalize its internal corporate communications and education capabilities through video.

The construction of the new Sharp Prebys Innovation and Education Center (SPEIC) created a community hub for enhanced collaboration, innovation and lifelong learning, and marked a prime moment to transform the health system’s broadcast AV capabilities in partnership with Fluid Sound. The organization needed an advanced, user-friendly set up that facilitated hybrid functionality, enabled live broadcasting with high production value, and offered seamless communications.

With four acute-care hospitals, three specialty hospitals, three affiliated medical groups, and extensive headquarters with medical simulation labs, a 375-seat auditorium, and four floors of conference facilities, Sharp HealthCare needed the right technology to connect across its dispersed employee base, including its leadership. Alongside a broader goal to broadcast its medical advancements and innovation externally, the organization needed to rethink its approach to video streaming, along with its entire live video production strategy linking all Sharp HealthCare sites and beyond.

Enter TriCaster® and NDI for internal collaboration

Sharp sought a solution to achieve broadcast-quality production for employee communication and education activities, ranging from town halls and community events to training sessions and leadership communications.

“When you’re a large health care organization, communicating effectively across a broad range of constituents can be challenging. When you remove that requirement to have everyone in one room, on one day, at one time, and you have the ability to bring someone in from any location or organization, it’s one of the most exciting features,” says Don Courville, CTO at Sharp HealthCare.

With the support of Fluid Sound, Sharp HealthCare’s new 100% NDI IP-based audio-visual workflow has connected employees with ease, regardless of where they are located. Colleagues unable to attend meetings in-person can easily join from anywhere in the world using a web browser on a computer or via a smartphone using an NDI Remote link. This has been vital in connecting leadership and employees, with immediate success brought by a new virtual coffee talk event with the CEO and COO in a virtually intimate setting.

“This network-based video production solution has transformed how Sharp leadership communicates both internally and externally, and the ROI is significant when compared to renting hotel ballrooms and hiring out 3rd party production teams,” shares Dennis Pappenfus, CEO, Fluid Sound.

At the heart of this new workflow lies a TriCaster, Vizrt’s most powerful video production system, based within the control room at Sharp’s new 375-seat auditorium. TriCaster enables the straightforward integration of remote participants via Microsoft Teams, WhatsApp, FaceTime and more. A single operator can switch between and blend multiple feeds, such as presentation slides, live audio and video streams from ten remote participants, and live video from NDI-enabled cameras in Sharp’s medical simulation labs, to create a broadcast-quality hybrid production.

Any of these feeds can be external to Sharp, coming from anywhere across the globe, or fed directly from another location within the organization. For the first time, Sharp can also include direct video streams from state-of-the-art spatial computing tools such as Apple Vision Pro, supporting first-person perspective medical demonstrations to be broadcast to an auditorium audience – and to those tuning in online.

Seamless switching from the TriCaster has also enabled creative takes on town halls and employee communications such as HR updates and executive meetings. With TriCaster’s LiveGraphics capabilities, Sharp can integrate animated titles, looping effects and add presenters with green screen to improve audience engagement and increase retention of the content presented.

Broadcasting medical expertise to the world

Having recognized the success of TriCaster and NDI for internal communications, transforming simple collaboration into full-scale, high-quality broadcast events that increase employee engagement, Sharp looked to further boost their return on investment by using the technology to forge deeper connections with external audiences.

“Our ability to communicate with large audiences – and not have all of the expenses and logistics associated with it – is definitely a winning combination for Sharp,” says Courville.

Within months, Sharp solidified its position as a global health care innovator by facilitating large-scale hybrid events, utilizing the solution’s remote presenter capabilities to incorporate participants in person and virtually, reducing the travel costs associated with bringing presenters on site.

With TriCaster, Sharp is now broadcasting these events to thousands across the globe. Sharp’s TriCaster installation facilitates straightforward live streaming to the web, with broadcasts and recordings taking place in High Definition and 4K Ultra High Definition.

And, with TriCaster’s integrated video servers, those in control of production can facilitate playback, replay and live editing without the need for any additional hardware. This has enabled Sharp to transform their Executive Board meetings, with these conferences being live streamed, recorded and available as high-quality clips for the very first time.

“TriCaster and NDI have helped us reach more audiences with tailored content than we’ve ever been able to before, helping us to achieve our goals of innovation, education and community outreach,” concludes Courville.

 

VisualOn – Enhancing viewer experience and streamlining operations: how VisualOn Native+ Players help service providers deliver high-quality streaming efficiently

As streaming video continues to surpass broadcast and cable TV to become the mainstream media choice, OTT service providers are facing an increasingly competitive landscape. To succeed, providing an exceptional user experience is crucial for retaining customers and reducing churn. The media player on the wide range of client devices plays a pivotal role in ensuring a seamless, high-quality experience. However, service providers face several challenges when it comes to media players:

  1. Addressing the device, format, content and service fragmentation issues.
  2. Supporting effective monetization.
  3. Securely protecting content.
  4. Reducing latency for live and interactive programs.
  5. Improving user experience KPIs, such as startup time, buffering ratio, failure to start, etc.

The common approach: native players

Many service providers initially opt for “free”, “ready-to-deploy” native player solutions, such as ExoPlayer for Android and AVPlayer for iOS. These players are pre-integrated with the underlying operating systems, support essential platform-specific functionalities, and offer a quick start for streaming services.

However, the total cost of ownership of using native players may not be as low as it seems. Service providers often need to maintain a highly skilled internal development team to:

  1. Customize the native players for the specific needs of the service.
  2. Debug and fix bugs, test and qualify releases in a timely manner.
  3. Keep up with rapid technology advancement and the changing market landscape.
  4. Integrate and support 3rd party technology.

This results in a continuous investment of time and resources, making it challenging for service providers to transition away from their internal native players. Changing players can be risky, often leading to initial stability issues that disrupt the service.

VisualOn Native+ Players: future-proof solutions for today’s video streaming

VisualOn’s Native+ Players, ExoPlayer+ for Android and AVPlayer+ for iOS, are designed to address these challenges. These players are built to relieve OTT service providers from the complexity of maintaining native players, allowing them to focus on their core business objectives.

Native+ Players are fully API-compliant extensions of the native ExoPlayer and AVPlayer, combining the benefits of native technology with the added advantages of professional support, advanced features, and future-proof capabilities.

Key benefits of VisualOn Native+ Players:

  1. Seamless transition: For service providers already using native players, Native+ Players provide an effortless transition, ensuring no disruption to service or user experience.
  2. Professional support with SLA: Service providers receive guaranteed Service Level Agreements (SLA), including timely bug fixes, fully-qualified releases, upgrades, and ongoing support for new devices and platforms.
  3. Advanced features:
    • Low Latency (CMAF): Optimized for ultra-low latency streaming for live events and interactive content.
    • Enhanced Playback: Features like 360° video, synchronized multiview, i-Frame playback, and WebVTT thumbnail scrubbing enhance user engagement.
    • Built-in Support: Pre-integrated support for 3rd-party ad insertion, analytics, and DRM, saving time and resources on integration.
  4. Integration with VisualOn products: VisualOn’s User Experience Analytics helps optimize service workflows by providing real-time performance insights. Additionally, VisualOn Remote Lab enables remote testing, debugging, and collaboration.
  5. Custom development: Service providers can leverage custom development to future-proof their streaming services, shorten time-to-market, and have more control over their product roadmap, ensuring they stay ahead of the curve in technology.

VisualOn’s Native+ Players free service providers from the burden of maintaining complex media players, empowering them to focus on their core strengths and grow their business.

Customer success story: transformative streaming solutions in action

One real-world example of VisualOn’s impact is its decade-long partnership with Norlys, Denmark’s largest energy and telecommunications provider. Over the years, VisualOn has collaborated with Norlys to deliver a suite of advanced, multi-platform streaming solutions tailored to Norlys’ expanding needs. Together, they implemented cutting-edge multi-DRM support to ensure content security and seamless cross-platform access, and integrated Samsung Smart TV support, widening Norlys’ reach across Denmark.

Through VisualOn’s Analytics Server, Norlys benefits from real-time performance insights essential for maintaining and enhancing Quality of Experience (QoE). Leveraging these insights, Norlys can proactively optimize key streaming metrics, such as rapid startup times, reduced buffering, and consistent video quality across mobile devices and smart TVs alike. This partnership has empowered Norlys to exceed viewer expectations, ensuring the high level of reliability and satisfaction crucial to staying competitive in the OTT landscape. Norlys’ success exemplifies VisualOn’s capacity to deliver scalable, high-impact solutions that elevate the user experience and provide ongoing tools for optimization and growth.

 

The future of high-quality, cost-efficient streaming

VisualOn’s Native+ Players are more than just media players—they represent a comprehensive solution for service providers aiming to enhance their streaming capabilities and deliver an exceptional viewer experience. By leveraging innovative technology, operational efficiency, and future-proof features, VisualOn empowers providers to maintain high-quality streaming while reducing operational costs.

As the streaming industry continues to evolve, VisualOn remains committed to helping service providers succeed now and adapt to the future, ensuring that they stay ahead in an increasingly competitive market.

VisualOn Streaming Media Platform

Veset – Keeping content secure when using cloud playout

Mārtiņš Magone, CTO, Veset

Getting content delivery right is a top priority for broadcasters and media providers, with security being a critical, non-negotiable aspect. Broadcasters are increasingly turning to cloud playout to manage the complexities of delivering content to a diverse range of platforms and devices, and so it’s critical that cloud playout systems operate to the highest security standards. Not only do broadcasters need to protect their content and channels from unauthorized access, cyber-attacks and data breaches, but these security threats are constantly evolving as attackers adapt and seek to exploit different vulnerabilities in broadcast systems. And so, for broadcasters wanting to use cloud playout, the inevitable question arises: how can they ensure their content remains secure?

Managing evolving threats

Broadcasters need to keep content secure as it moves through the media supply chain to prevent it from being accessed by individuals without permission before it is broadcast or delivered to the end user. The consequences of failing to do this are significant, from major revenue losses to reputational and brand damage. Content needs to be kept secure while in storage and also during transport, feeds need to be protected from being captured and streamed without authorization, and Pay-TV channels need to be protected to prevent them from being made available using illegal decoders or decryption keys, or on prohibited platforms.

Additionally, measures need to be in place to protect TV channels from being hijacked and replaced with inappropriate, harmful or fake content. This has happened many times over the years, including recently. Earlier this year, a TV streaming service in the UAE was reportedly hijacked by state-backed Iranian hackers to broadcast a deepfake newsreader reporting on the war between Israel and Hamas. And in May, hackers reportedly hijacked at least fifteen Ukrainian TV channels and replaced them with a broadcast of the Russian Victory Day parade that took place in Moscow.

As technology continues to evolve, so do the threats to content security.

Criminals are constantly developing new tactics to bypass security measures, making it an ongoing challenge for broadcasters and service providers to stay ahead. Broadcasters must ensure that their security measures are robust enough to withstand new and emerging threats by employing a range of security measures.

Adopting a multilayer approach

To ensure that both content and channels remain protected, broadcasters need robust security measures. This typically means taking a multi-layered approach to security and implementing a variety of tools and techniques, including encryption, access control systems and Digital Rights Management (DRM) tools. Encryption is key to protecting content throughout the entire process, ensuring that in the event that content is intercepted, it can’t be accessed or used by unauthorized parties. Strong access control mechanisms are also essential, allowing broadcasters to ensure that only permitted personnel can access, modify, or broadcast content. This includes the use of multi-factor authentication (MFA) and user-specific permissions. DRM uses tools and technology to control access to copyrighted material and enables copyright holders to manage what users can do with their content, such as preventing editing, saving or sharing content.
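As one concrete illustration of the encryption layer, here is a minimal sketch using AES-256-GCM from the Python cryptography library; key management, which in practice lives in a KMS or DRM system, is deliberately out of scope, and the file name is a placeholder:

```python
# Encrypt a media segment with AES-256-GCM so that an intercepted copy is
# useless without the key. Sketch only: real deployments keep keys in a
# KMS/DRM system, never alongside the content.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)  # a GCM nonce must never repeat under the same key

with open("segment_0001.ts", "rb") as f:   # placeholder media segment
    segment = f.read()

ciphertext = AESGCM(key).encrypt(nonce, segment, None)

# GCM also authenticates: decryption raises InvalidTag if tampered with.
plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
assert plaintext == segment
```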

Monitoring, whether automated or manual, is another important tool – this time to help identify when security measures have been breached. Some organizations are also exploring the use of AI-driven tools and real-time monitoring to help identify vulnerabilities, security breaches and illegal streams.

It’s also important to have a robust disaster recovery plan in place so that in the event of a security breach resulting in service interruption or loss, broadcasters can quickly switch to the backup system so that channels and services can continue uninterrupted, while the breach is investigated and acted upon. Cloud-based playout solutions offer a highly cost-effective and scalable option for disaster recovery, enabling broadcasters to keep channels on air even in the face of significant disruptions. In the event of an outage, cloud-based backup systems can be deployed quickly providing broadcasters with the flexibility to respond to disruptions in real time.

Understanding why cloud playout is secure

Cloud playout offers significant security advantages over traditional on-premise playout systems precisely because it is based in the cloud. The security measures employed by major cloud providers such as Amazon Web Services and Microsoft Azure are incredibly advanced. These providers invest heavily in building secure infrastructures that meet the stringent security requirements demanded by the financial sector, defense organizations, and other high-sensitivity industries. This makes cloud-based working inherently secure, and provides a level of security that is difficult to achieve in on-premise or private infrastructures.

Cloud providers employ multiple layers of security, including advanced encryption, strict access controls, and constant monitoring to detect and mitigate potential threats. So, whether content is stored in the cloud, transported using a transport service for live video such as AWS Elemental MediaConnect, or distributed using a content delivery network (CDN) such as Amazon CloudFront, it is kept secure. When using cloud-based solutions, broadcasters can be confident that their content remains secure throughout its journey from contribution to distribution to the end user.

Prioritizing security

One way to ensure that cloud solutions and services, including cloud playout, operate to the highest level of security is by ensuring that only cloud tools and services certified to recognized security standards such as ISO 27017 are used. Cloud playout solutions should employ robust security measures including encryption and access control and should be regularly updated to ensure that systems are continually protected from evolving threats. Additionally, training for playout operators and other associated personnel is also essential to minimise the risk of human error, which can be the weakest link in a security system.

Maintaining content and channel security is an ongoing battle and broadcasters need to arm themselves with the best weapons possible. The benefit to using cloud playout is that broadcasters have access to some of the most advanced security measures available today. By prioritising security and leveraging the tools and features offered by cloud providers and solutions, broadcasters can safeguard their content and channels, ensuring that they continue to deliver uninterrupted, high-quality services to viewers.

 

Quickplay – Streaming TV’s new power couple: Generative AI and CMS

Naveen Narayanan, Vice President, Product Management, Quickplay

The longtime challenge of content discoverability in streaming is impacting revenue generation and viewer retention more than ever, as new services and new bundles entice users to stray. One way to discourage them from shopping for a new platform is to ensure you’re serving up what they want to watch, especially when they themselves may not know.

Today’s content search experience

Search today often suffers from inaccurate results, limited personalization, keyword reliance, wasted time, and navigation complexity. The challenge of content discovery has become greater in the age of abundant streaming platforms and vast content libraries. As the number of available options has grown, users often find themselves overwhelmed and may abandon the search before finding something suitable.

In fact, Accenture finds that 72% of consumers “report frustration at finding something to watch,” up six percentage points from the previous year. If viewers are watching less or, worse, watching on other platforms, revenue will be impacted. The time is now to invest in more effective ways to enable viewers’ consumption of relevant, available content.

The biggest challenges for direct-to-consumer streamers

Three trends in the streaming marketplace point to the importance of discoverability for streaming services:

  • Competition for Eyes: The competition for viewers’ attention is hotter than ever as new streaming services come online. Growth is particularly strong in the free ad-supported TV (FAST) tier: Kantar reports that in Q3 2023, “adoption of FAST services outpaced video on demand streaming, two-fold.”
  • Subscription Churn: Churn has nearly tripled over the past four years. Poor content discoverability is often a factor: Variety finds that services where the highest share of households watch only one of their top 50 programs are more likely to see those households stop watching after one month. And if a household isn’t using a service, they’re unlikely to keep paying for it.
  • Content spend: In today’s high-interest-rate environment, companies are tightening their belts — and content is on the chopping block. While Disney has gotten the most buzz over plans to cut content spending, other legacy media companies are also expected to trim spending.

By making it easier for people to find content that already exists, streamers are positioned for efficient revenue growth – capturing and retaining viewers without having to spend more money on content.

The industry needs advanced content discovery search tools and AI to expedite search and discovery and enhance consumer experiences. Large Language Models (LLMs) have emerged as a solution, offering natural language interpretation, contextual understanding, semantic grasp, conversational interfaces, reduced search time, personalized recommendations, and cross-platform discovery.

However, standalone LLMs face challenges such as lack of personalization and difficulty in adapting to streaming platform constraints. This is where the clever coupling of Generative AI and CMS really shows its symbiotic power.

Generative AI + CMS = content discovery advancements

Only 10% of consumers report that streaming services recommend content that is “very consistent with my past viewing and potential interest.” Moreover, with millions of titles available, the 2023 State of Play report from Nielsen’s Gracenote noted that streaming viewers are spending a record 10.5 minutes/session deciding what to watch.

Substituting viewing time for search time increases subscriber engagement, loyalty and monetization. This is where Generative AI combined with a powerful CMS can make a world of difference.

Generative AI can connect viewers with content they love, which is pivotal to the entertainment experience. It is proven to offer remarkable capabilities in generating diverse forms of content, including images, videos, text, and even whole television channels. It involves advanced algorithms that autonomously create new content based on patterns and data fed into the system.

The CMS serves as a streamer’s central repository, managing the catalog of content assets. It encompasses comprehensive descriptive metadata, such as titles, descriptions, tags and moods, alongside business rules and any relevant content restrictions. It seems logical that these tools be integrated and cross-leveraged; however, it is a massive technology undertaking.

Combining Generative AI capabilities with a CMS drives the creation of spot-on storefront rails and delivery of pinpoint-accurate search results, making it faster, and more accurate than ever before for viewers to find appealing content to watch.

With this combined approach, more informed, data-driven decisions are possible across the entire content management lifecycle such as:

  • Augmenting content metadata with additional insights, including micro-genre descriptors, emotional tags, and embeddings to strengthen recommendations
  • Determining natural content breakpoints to identify binge markers and ad-breaks that can improve usability and expand monetization
  • Auto-generating content carousels based on themes extracted from the catalog or trending content across a variety of engagement metrics
  • Personalizing content carousels based on individual taste preferences, content analysis, and consumption patterns
  • Providing a natural language interface for assisted discovery, including catalog browsing, filtering, and cross-referencing assets.
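A minimal sketch of the machinery behind such a natural language interface: metadata from the CMS is embedded into vectors once, and a viewer’s free-text query is matched by cosine similarity. The embed() function below is a random-vector stand-in for a real sentence-embedding model, and the catalog entries are invented:

```python
# Toy semantic search over CMS catalog metadata. embed() stands in for a
# real embedding model; here it returns a deterministic random unit vector.
import numpy as np

def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % 2**32)  # demo stand-in
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

catalog = {
    "asset_001": "Feel-good sports documentary about an underdog team",
    "asset_002": "Slow-burn Nordic crime thriller, season one",
}
index = {aid: embed(desc) for aid, desc in catalog.items()}  # built once

def search(query: str, k: int = 5) -> list[str]:
    q = embed(query)
    # Unit vectors, so the dot product is the cosine similarity.
    scores = {aid: float(q @ vec) for aid, vec in index.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(search("something uplifting about sport"))
```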

Ultimately, the marriage of Generative AI and CMS simplifies the actions needed for both programmers and viewers. For programmers, it offers the peace of mind that their technology solutions are continuously learning from user interactions and adapting recommendations over time, contributing to a continuously evolving and personalized content discovery experience. For viewers, it dramatically elevates the entertainment experience by enabling them to discover content that meets their personal desires more quickly.

Why now?

As we approach 2025, the evolution of the video streaming marketplace will be amplified. More AI advancements and implementations will make competition not just stronger, but will require faster innovation to retain subscribers. The combination of Generative AI and a powerful CMS will equip the most future-thinking tier one service providers with the solutions needed to ultimately align with the evolving expectations of savvier users and viewers.

Paul Treleaven wins SMPTE Excellence in Standards award

Paul Treleaven, IABM’s Technology Specialist Consultant, has recently been honored by SMPTE with its prestigious Excellence in Standards award, which recognizes individuals or companies who have been actively involved in advancing Society standards activities and processes. The official citation for Paul reads:

SMPTE awards the 2024 Excellence in Standards Award to Paul Treleaven, in recognition of his continuous efforts to ensure prompt dissemination of information within SMPTE to partner organizations and the general public. Treleaven ensured timely and effective information exchange and collaboration with SMPTE partner organizations as Liaison Chair. He produced quarterly meeting reports and gave the public an invaluable window into ongoing SMPTE activities. He has also contributed to developing transparent, effective, and accountable standards as chair and as secretary of several technical committees. SMPTE acknowledges Treleaven for his dedication to the standards process.

At the ceremony, SMPTE expanded on its citation on Paul’s many contributions over the years as follows:

“This award recognises the tireless work Paul does in keeping the information flowing smoothly between SMPTE, our partner organizations and the public. His contributions to creating clear and accountable standards as chair and secretary of various technical committees further highlight his commitment to the society.

“Paul produces orderly meeting reports and also provides vital glimpses into SMPTE activities. Kudos to Paul for making the standards process not just effective but also more transparent and accessible – and maybe a little more fun than it used to be.

“As both an end user and an active developer, Paul is being recognized tonight for his 52 years of diligent work advancing SMPTE standards globally. Paul has worked tirelessly to balance the detailed work of standards with the need for transparency to all stakeholders, including the general public. Starting his career in broadcast engineering with the BBC, he designed the interpolation system for the ACE four-field standards converter. Paul then co-founded Avitel, designing and manufacturing video, audio and timecode distribution and processing equipment. An SMPTE Fellow, Paul has served as an international region governor, chair and secretary of SMPTE technology committees, and standards liaison chair. He is the technology consultant for IABM, representing it at SMPTE and AES standards meetings and AV technology committees. He is also a member of the UK national committee for IEC TC 100. Paul loves camping, hiking, scuba diving, flying and, of course, traveling, which he does frequently for standards meetings and between his home in Arizona and the UK. Congratulations and thank you, Paul.”

Paul was typically modest in his response: “When I received the award email, I thought it was asking for nominations – I was shocked when I realized I’d actually been given the award. SMPTE, thank you very much.

“Really, I must give a lot of credit to the IABM who have sent me to SMPTE standards meetings for over 20 years; for their vision that Standards benefit our whole community – suppliers and consumers alike.

“I remember my first SMPTE standards round at Dolby, San Francisco in 2002. I knew no-one and had to adjust from having been a standards user to being a standards maker! I’m grateful to Merrill Weiss, Bill Miller and Hans Hoffmann for showing me the ropes and making me feel welcome that time.

“And, of course, I will thank my wife, Debbie, for all the tolerance and support she has given me.

“Cheers, SMPTE.”

Oxagile – Samsung Tizen TV Apps for All: Tips for Low-End TVs

Are low-end devices gaining ground or fading away?

The TV market is facing a challenge as consumers hold onto their devices longer, averaging 6.6 years before upgrading. Many older Smart TVs are no longer supported, leading viewers to rely on set-top boxes for better experiences. This trend is especially strong in North America, where demand for new TVs remains low.

For content distributors, supporting streaming apps on older devices could be a smart move, as a significant portion of viewers still use them. Adapting Smart TV apps for legacy systems can expand reach and improve accessibility for those with outdated tech.

[Chart omitted; source: Statista]

Samsung’s Tizen OS remains dominant, prompting a growing demand for inclusive streaming solutions across various devices. Meeting this demand not only enhances viewer satisfaction but also positions brands as forward-thinking leaders in media consumption. But what should you consider when developing and maintaining streaming apps for Samsung’s lower-end TVs?

If you’ve identified a significant portion of your audience still using low-end devices, this could present a valuable market opportunity. To capitalize on it, you’ll need to tailor your app’s functionality for these devices.

What challenges may arise? Based on our experience, we can guide you through potential pitfalls and effective solutions. Oxagile’s JavaScript Engineer, Alexander Sakov, shares the key challenges and best practices for tackling them.

#1 Hosted or packaged app? That’s the dilemma

Older versions of Samsung’s Tizen OS (2.3 and earlier) do not support hosted apps, which poses a challenge: in such cases, you may need to convert your app to a packaged format.

A key consideration: hosted apps allow for faster updates since changes don’t require approval, whereas packaged apps must go through Samsung’s approval process, which can cause delays.

If you’re targeting Samsung users, you’ll need to decide between hosted and packaged delivery. While hosted apps offer some flexibility, a packaged app is necessary for submission to Samsung’s app store. This involves creating a package that links to your server, following Tizen’s app configuration guidelines.

Once the configurations are complete, you’ll receive the index file for your app, which is the packaged version of your hosted streaming app. This version is then submitted to Samsung’s team for approval and release to the Store.

A quick note: converting your app back from packaged to hosted also requires approval from the Samsung team.
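Under the hood, the packaged submission for a hosted app is typically a thin shell that points the TV at your server. Below is a minimal sketch of such a shell’s entry script; HOSTED_APP_URL is a hypothetical placeholder, and the real entry point must still follow Tizen’s app configuration guidelines:

```ts
// Minimal bootstrap for a packaged Tizen shell that loads a hosted web app.
// HOSTED_APP_URL is a hypothetical placeholder for your own server.
const HOSTED_APP_URL = "https://tv.example.com/app/";

// Redirect the packaged shell to the hosted application; replace() keeps the
// shell page out of the TV browser's history stack.
window.location.replace(HOSTED_APP_URL);
```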

#2 Adapting the UI capabilities to low-end TVs

When developing Smart TV apps for Tizen, it’s important to consider the compatibility of older TV models with streaming technologies. For example, Samsung devices released between 2015 and 2017 do not support the combination of DASH streaming and PlayReady DRM. If you’ve configured this streaming setup, be aware that your app won’t play videos on webOS 3.5, Tizen 3.0, or earlier versions.

The good news is that this playback issue can be addressed by changing the streaming and DRM configuration; for instance, switching from PlayReady to Widevine will help.
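As an illustration, the sketch below shows that kind of switch using Shaka Player, a common open-source choice for DASH playback on Smart TVs (not necessarily what your app uses). The license server URL is a hypothetical placeholder:

```ts
import shaka from 'shaka-player';

// A minimal sketch: configure Widevine instead of PlayReady for DASH playback,
// since 2015-2017 Samsung models may not support the DASH + PlayReady combination.
async function initPlayer(video: HTMLVideoElement, manifestUri: string): Promise<void> {
  const player = new shaka.Player(video);
  player.configure({
    drm: {
      servers: {
        // Hypothetical license server URL; replace with your own.
        'com.widevine.alpha': 'https://license.example.com/widevine',
      },
    },
  });
  await player.load(manifestUri);
}
```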

#3 Aligning with Samsung’s Content Security Policy

Inline scripts are blocked by Samsung’s Content Security Policy due to security risks, as they can be exploited by hackers to redirect users to malicious sites. To comply, you must remove all inline scripts.

A key challenge with existing Tizen apps is identifying and rewriting inline scripts without breaking functionality, especially when core features like scrolling depend on them.

When building a Tizen app from scratch, it’s crucial to avoid inline scripts. Instead, use external .js files, which are more secure and harder to manipulate when minified.
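As a simple illustration, a handler that might otherwise live in an inline onclick attribute can move into the external bundle; the element id and playback function below are hypothetical:

```ts
// app.ts, compiled into an external .js file (no inline scripts).
declare function startPlayback(): void; // hypothetical function defined elsewhere in the bundle

document.addEventListener('DOMContentLoaded', () => {
  // Replaces an inline onclick="startPlayback()" attribute in the markup.
  const playButton = document.getElementById('play-button'); // hypothetical element id
  playButton?.addEventListener('click', () => startPlayback());
});
```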

#4 Eliminating performance blockers

Because low-end devices have limited hardware capabilities, it’s essential to prioritize a smooth streaming experience over more advanced UX features. Here are some technical aspects that affect overall performance and playback on low-end TVs.

CSS animations

Animations in streaming apps, while visually appealing, can significantly degrade performance, particularly on older devices. CSS animations such as loaders can increase load times and make the interface feel sluggish. To address this, you may need to avoid animations altogether, both CSS and JavaScript, accepting a more basic interface without smooth transitions. Additionally, to accommodate newer devices while optimizing for older ones, you may have to maintain conditional code paths, which slightly complicates development and maintenance.
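One way to keep animations on capable models while serving a static fallback to older ones is a simple capability gate. The sketch below is assumption-laden: the user-agent heuristic and the spinner class are placeholders, not a definitive detection method:

```ts
// Rough capability gate: treat Tizen 2.x/3.x as low-end. User-agent sniffing
// is a pragmatic fallback; a real app would consult a proper device matrix.
function isLowEndDevice(): boolean {
  return /Tizen [23]\./.test(navigator.userAgent);
}

function showLoader(container: HTMLElement): void {
  if (isLowEndDevice()) {
    container.textContent = 'Loading…'; // static text, no animation
  } else {
    container.classList.add('spinner'); // hypothetical CSS animation class
  }
}
```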

Re-rendering processes

Re-rendering can hurt performance on older devices with limited memory, like TVs with around 500 MB of RAM. While modern browsers handle updates well, older processors struggle, especially with large caches or complex layouts. For example, Electronic Program Guides (EPG) with many DOM elements can overwhelm these devices.

One solution is using Canvas, which combines graphics into a single DOM element and renders shapes with JavaScript. This reduces memory usage and improves performance by minimizing the number of elements the browser must manage. While it requires more complex coding, this approach can significantly boost responsiveness, ensuring smoother performance across various devices.
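Here is a minimal sketch of the idea, with a hypothetical EpgEvent shape and an assumed pixels-per-minute scale: one row of programme cells is drawn onto a shared canvas instead of creating a DOM node per cell.

```ts
// Draw one EPG row onto a shared <canvas> instead of many DOM elements.
interface EpgEvent {
  title: string;
  startMin: number;    // start offset in minutes from the guide origin
  durationMin: number; // programme length in minutes
}

function drawEpgRow(ctx: CanvasRenderingContext2D, events: EpgEvent[], y: number): void {
  const pxPerMin = 4;   // assumed scale: 4 px per minute
  const rowHeight = 40; // assumed row height in px
  for (const ev of events) {
    const x = ev.startMin * pxPerMin;
    const w = ev.durationMin * pxPerMin;
    ctx.strokeRect(x, y, w, rowHeight);                   // cell border
    ctx.fillText(ev.title, x + 6, y + rowHeight / 2 + 5); // programme title
  }
}
```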

Exclusive insight: For LG’s WebOS 3 and older versions on low-end devices, Canvas may not improve performance and could even slow down the app due to its complex processes.

A better approach could be lazy loading, which loads content in segments for improved performance. Another effective method is using a library like React Virtualized, which optimizes performance through virtual rendering. This technique only renders visible rows in a large list, reducing the number of DOM elements and minimizing performance overhead. Essentially, React Virtualized displays only the necessary rows while using CSS to indicate hidden rows, creating a smoother experience.
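For instance, a minimal react-virtualized sketch might look like the following; the channel list and row dimensions are hypothetical:

```tsx
import * as React from 'react';
import { List, ListRowProps } from 'react-virtualized';

// Hypothetical data source: 5,000 channel titles.
const channelTitles = Array.from({ length: 5000 }, (_, i) => `Channel ${i + 1}`);

// Only rows visible inside the 720 px viewport are actually rendered;
// the `style` prop positions each row absolutely within the scroll area.
function rowRenderer({ index, key, style }: ListRowProps) {
  return (
    <div key={key} style={style}>
      {channelTitles[index]}
    </div>
  );
}

export function ChannelList() {
  return (
    <List
      width={1280}
      height={720}
      rowCount={channelTitles.length}
      rowHeight={48}
      rowRenderer={rowRenderer}
    />
  );
}
```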

Final words

While understanding your audience’s content preferences is essential, it’s equally important to consider their TV usage habits. With the trend showing that consumers typically keep their TVs for about six years, you must weigh the cost of supporting low-end devices against the risk of alienating a significant portion of your audience. This presents a dilemma, but Oxagile’s Smart TV app experts are always here to offer guidance and best practices.

The article was originally published at: https://www.oxagile.com/article/samsung-tizen-tv-apps-for-all-tips-for-low-end-tvs/