Game on: What’s possible in low latency streaming?
By Chris Wilson, director of market development, sports, MediaKind
Altman Solon’s 2021 Global Sports Survey found sports fans globally watch an average of 3.5 hours of live sports every week. On the surface, this is a promising indication of the value sports fans receive from watching their favourite teams and players in competition. However, the underlying challenge for sports broadcasters and service providers is to deliver relevant, high-quality streams that prolong the engagement of audiences – particularly younger fans – as they increasingly opt for streaming platforms to source their live content.
If sports broadcasters and service providers are to realise a streaming world where their customers can stream live without limits, they must first overcome issues of scale and latency. When it comes to the quality of the streams, sports fans have a short supply of patience. Once they’ve found content that’s relevant to them, they expect seamless, reliable services every time. Top tier sports rights owners can no longer justify streaming services that are less than five-nines quality of service.
Reducing end-to-end latency must be a priority for our industry. It’s not uncommon today to stream a sports game where the delay can be anywhere from 45 to 60 seconds, if not more. Forty-five-second delays were never tolerated in live broadcast delivery; it’s now time for the industry to embrace the new technologies that are in place and to eliminate these lengthy delays from the live streaming experience.
Setting the parameters for low-latency streaming
Definitions of ‘latency’ differ between audiences, so it’s important to understand what type of latency applies to which situation. The measure most widely used in sports broadcasting is end-to-end latency, which covers the entire broadcast chain from camera to screen. For sports streaming, latency is benchmarked in three ways: equivalent to broadcast (between six and 15 seconds), equivalent to social media posting (one to five seconds), and near-real-time data feeds (sub-one second).
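The three tiers above can be expressed as a simple classification. The function below is a hypothetical helper for illustration only – the tier names and thresholds mirror this article, not any standard API:

```python
# Illustrative mapping of a measured camera-to-screen delay to the
# three benchmark tiers described above. Thresholds follow the article;
# the function name and labels are assumptions for this sketch.

def latency_tier(end_to_end_seconds: float) -> str:
    """Classify an end-to-end delay (in seconds) into a benchmark tier."""
    if end_to_end_seconds < 1:
        return "near-real-time"           # sub-second data feeds
    if end_to_end_seconds <= 5:
        return "social-media-equivalent"  # one to five seconds
    if end_to_end_seconds <= 15:
        return "broadcast-equivalent"     # six to 15 seconds
    return "above-broadcast"              # the 45-60s delays seen today

print(latency_tier(10))   # broadcast-equivalent
print(latency_tier(0.5))  # near-real-time
```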
“Top tier sports rights owners can no longer justify streaming services that are less than five-nines quality of service”
Each scenario needs progressively lower latency. Although this can impact the quality of the service and increase infrastructure costs, it also unlocks new use cases such as overlaid player stats and real-time betting. Video-on-demand (VoD) is relatively easy in this respect. However, doing this for real-time, high-quality live sports is harder to execute. Erasing all delays in live video streaming is wishful thinking, as there will always be some element of delay in streaming video. This is because the live sports content must be captured, produced, distributed, encoded, packaged, stored, and then transitioned to content delivery networks (CDNs) for delivery to the end consumer.
Online video streaming always introduces an element of delay because of the way the video is processed. The broadcaster receives the video and audio data segments individually and processes them separately. However, this calls for a precise balancing act, because the processing becomes sub-optimal if the video files are too small. Yet if the files are too large, the processing takes too long, as each segment must be generated in its entirety before the next phase of the broadcast chain can be initiated.
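This balancing act can be made concrete with some back-of-the-envelope arithmetic. The model below is a simplified sketch under assumptions not stated in the article: the player buffers three segments before starting playback, and each pipeline hop holds one full segment. Real deployments vary, but the shape of the trade-off holds:

```python
# Simplified model of the segment-size trade-off: larger segments add
# latency; smaller segments multiply per-segment overhead. The hop count
# and player buffer depth below are illustrative assumptions.

HOPS = 2           # e.g. packager -> CDN -> player hand-offs (assumed)
PLAYER_BUFFER = 3  # segments buffered client-side before playback (assumed)

def added_latency(segment_duration_s: float) -> float:
    """Seconds of latency contributed by segmentation alone."""
    return segment_duration_s * (PLAYER_BUFFER + HOPS)

def requests_per_minute(segment_duration_s: float) -> float:
    """Per-rendition request rate -- a rough proxy for per-segment overhead."""
    return 60.0 / segment_duration_s

for d in (10, 6, 2):
    print(f"{d}s segments: ~{added_latency(d):.0f}s latency, "
          f"{requests_per_minute(d):.0f} requests/min per rendition")
```

Under these assumptions, 10-second segments alone account for roughly 50 seconds of delay – close to the 45-to-60-second figures seen in practice – while two-second segments cut that to about 10 seconds at the cost of 30 requests per minute for every rendition.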
Opening the door to broadcast-quality streaming
Deploying an ultra-low latency (sub-second delay) solution often doesn’t make sense from a return on investment (ROI) standpoint, as the high cost of implementation almost always outweighs any direct benefit. This is particularly true for large-scale global events like the Super Bowl, where small reductions in latency don’t impact viewer numbers and ad revenues.
These solutions are also typically based on WebRTC technology, meaning they can’t be scaled via conventional CDNs. This makes them an expensive option needing custom processing in the service provider’s network. Therefore, broadcasters and rights holders must question what an acceptable latency looks like on a case-by-case basis. An exception could apply to specific use cases, such as enabling users to pay a premium for ultra-low latency or deploying it for betting purposes. This method means the broadcaster doesn’t need to overprovision at the edge and can gradually introduce the service. However, this approach can also lead to a downgrade in the reliability of the service, which in turn means a reduced quality of experience overall.
For many, an acceptable end-to-end latency will be sub-seven seconds (equivalent to broadcast-type transmission). This means broadcasters can eliminate the risk of social media or push notifications ruining key moments for the fan. The rapidly growing sports betting industry also has a keen interest in meeting the latency challenge. If betting companies are to attract and retain customers in this market, streaming providers must offer reliable, broadcast-quality video with the lowest possible delay so that they can support interactive betting software. This means viewers can be confident they are betting on honest, reputable coverage of proceedings on the court, in the stadium or inside the arena.
Direct-path technology becomes table stakes
There are two core elements streaming providers must include in their low latency solutions. The first is the use of Common Media Application Format (CMAF) low-latency coding with HTTP/1.1 chunked transfer encoding, which eliminates unnecessary delays between, for example, a packager and a CDN. With a traditional adaptive bitrate (ABR) format, the entire segment must be completed before its length is known and it can be signalled for transfer to the subsequent stage.
A traditional ABR system might use segments of six to 10 seconds. Each time a segment is processed or forwarded (including by the client), that segment duration is added to the cumulative latency, so the latency increases progressively through the chain, in increments of segment length. By bringing together CMAF low latency and chunked transfer encoding, service providers can now initiate the process of moving that data through the delivery chain at an earlier stage. That’s because they only need to wait for a fragment of the segment to arrive before being able to forward it on to the next stage in the chain.
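The per-hop accumulation described above can be sketched as a simple model. This is a hedged illustration, not a measurement: the hop count and chunk duration below are assumptions, and it ignores network transit time and player buffering, which add further delay in both modes:

```python
# Why chunked transfer lowers latency: with whole-segment forwarding,
# each hop must receive a complete segment before passing it on; with
# CMAF chunked transfer, each hop forwards as soon as a chunk arrives.
# Hop count and durations are illustrative assumptions.

def pipeline_latency(unit_duration_s: float, hops: int) -> float:
    """Latency accumulated when each hop waits for one unit before forwarding."""
    return unit_duration_s * hops

hops = 4  # e.g. encoder -> packager -> origin -> CDN edge -> player (assumed)

segment_s = 6.0  # traditional ABR segment
chunk_s = 0.5    # CMAF chunk within that segment

print(pipeline_latency(segment_s, hops))  # whole-segment forwarding: 24.0 s
print(pipeline_latency(chunk_s, hops))    # chunked forwarding: 2.0 s
```

The segment duration still bounds how far behind live the stream starts, but the delay no longer grows by a full segment at every stage of the chain.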
Employing ‘direct path technology’ radically changes the way data flows from the encoder to the packager, removing buffer delays and paving the way for a huge reduction in the time spent moving content from one video processing function to the next. Traditionally, the connection between an encoder and a packager would have been a multi-rate, constant-bitrate connection delivered in real time, requiring buffering and incurring latency.
Streaming providers should also carefully consider the synchronisation of live feeds, especially where direct feeds, such as alternate cameras, are fed straight from an event or stadium and viewed alongside programming and production feeds that would be subject to additional delay. This highlights the need to aim for a predictable and repeatable delay that all feeds can be timed against.
Meeting the expectations of today’s sports fans ultimately comes down to delivering live content at scale to large numbers of people while balancing their expectations of high quality and low latency. The technology is ready and available for broadcasters, content owners and streaming providers to deliver exceptional live sports experiences that align with their budgets. In doing so, they can meet the expectations of their audiences, retain the loyalty of super fans, and place themselves at the forefront of live sports delivery innovation.