CEO of Synamedia's Quortex
Have you ever thought of delivering streaming video content in the same way your local pizza shop prepares its orders? They make it fresh, on-demand and just-in-time. Not only is it more efficient but it also reduces energy and waste. Plus, who wants to eat cold pizza?
With streaming video, the same efficiency benefits as pizza delivery apply alongside huge sustainability advantages.
All it requires is a completely new way of architecting video delivery. No big deal.
What is just-in-time (video) delivery?
82% of all Internet traffic is predicted to be consumed by online videos, including live and on-demand streaming services, in 2022, according to Cisco’s Visual Networking Index. However, in some cases streaming video can have a higher carbon footprint than linear delivery because of its unicast nature. A modern, highly efficient video delivery mechanism, such as just-in-time delivery, becomes critically important to deliver the best ROI and secure a sustainable future with a lower carbon footprint.
When video delivery via the Internet was first implemented, its technology requirements were assumed to be the same as those of traditional pay-TV delivery. What has become apparent over the decades is that user expectations vary depending on the content they’re viewing and the devices they’re viewing it on. Internet-delivered video simply shouldn’t be transported in the one-size-fits-all way traditional pay-TV is. Adjustments made with viewer experiences in mind can unlock savings across multiple areas, including infrastructure and energy consumption.
How can this be addressed? The traditional push model, in which video is processed and transported to sit in the CDN until someone might one day request it, must be completely rethought. A just-in-time approach provides exactly the resources required to deliver a specific piece of video content at any given time. If no one is watching a channel, it simply frees up those resources, reducing energy usage.
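The core idea can be illustrated with a minimal sketch (all names here are hypothetical, not an actual product API): a channel’s processing pipeline exists only while at least one viewer is watching, and is released the moment the audience drops to zero.

```python
# Minimal sketch of a just-in-time allocation policy (hypothetical names):
# resources for a channel exist only while at least one viewer is watching.

class JustInTimeChannel:
    def __init__(self, name):
        self.name = name
        self.viewers = 0
        self.pipeline_active = False  # stands in for transcoder/packager instances

    def viewer_joins(self):
        if self.viewers == 0:
            self.pipeline_active = True  # spin up resources on the first request
        self.viewers += 1

    def viewer_leaves(self):
        self.viewers = max(0, self.viewers - 1)
        if self.viewers == 0:
            self.pipeline_active = False  # free resources, reducing energy usage
```

In this toy model, a channel with no audience consumes nothing; in a push model, by contrast, `pipeline_active` would effectively be `True` around the clock regardless of demand.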
This methodology offers time-to-market, energy and cost advantages over existing cloud approaches and ensures that every deployed resource has a purpose. For long-tail content, for example, just-in-time technology reduces cloud costs by up to 67%.
Does this new “oven” for just-in-time delivery exist today?
Yes, it has been developed by Synamedia’s Quortex. Its multi-tenant SaaS technology builds video streams on the fly, based on end users’ requirements and matched to viewers’ locations, devices and time zones. It quickly adapts to fluctuating audience demand and to unpredictable network and infrastructure contexts and limitations. It also automatically scales cloud resources up and down, reducing waste by using spot instances that take advantage of spare cloud capacity at a fraction of the typical operational cost, all while maintaining the quality of experience. With it, video content is delivered when a viewer wants it, how they want it and in high quality – just as we like our pizza: with the toppings we want, saving the ones we don’t for those who do.
Why is just-in-time delivery more sustainable?
While figures about energy savings are eye-catching, there are no standard formulas or data points to ensure we are not comparing apples with oranges. To get a realistic sense of energy cost savings, video delivery must be evaluated from beginning to end. Let’s start with the CDN.
CDNs were historically designed to deliver web content. As streaming video emerged, technology developers fine-tuned the CDN for video with optimized caches that ensure ABR content is cached as close as possible to the viewer for delivery at scale, as is the case with Synamedia’s FLUID EdgeCDN.
Additionally, technologies such as Low Latency DASH, Low Latency HLS (LL-HLS) and the High Efficiency Streaming Protocol (HESP) have reduced live streaming latency from 20-60 seconds to around six seconds, in line with broadcast, as demonstrated by Synamedia’s VIVID Low-Latency OTT solution.
Thanks to these advancements, content providers can now optimize latency for different applications within a service, making it easier to launch new monetization applications and save compute and storage resources.
How do you measure the cost or energy usage?
It’s no secret that quantifying CDN energy usage is challenging, largely because of a lack of consistency across the industry. There are three main sources of discrepancy to consider:
- The energy consumption of different technologies is totaled and tracked inconsistently. Instantaneous load and memory usage affect CPU consumption, and the industry has yet to decide how transport should be measured: by volume or by capacity.
- Networks connect internal and external servers. In traditional, non-shared environments, servers can be turned off to lower energy costs. However, servers shared internally (multi-tenancy) and externally (connected over the Internet) are linked by a network, and that network cannot be shut down.
- Some content is considered more essential than the rest and, in some cases, its delivery is mandated; think of natural disaster warnings, for example. This content is traditionally stored in a cache on an edge server, and future requests are served from that cache, which can reduce energy usage because the content is not downloaded from its origin every time. The challenge is that the cache uses energy as well.
To overcome these challenges, an industry-wide consensus needs to be reached. Organizations like The Greening of Streaming are focused on helping create unified thinking around end-to-end energy efficiency in the technical supply chain that underpins streaming services.
As the industry standardizes its measurement methods, the savings will become evident. In the meantime, we will continue to deliver the “hot pizza” how and when it’s demanded!