How new technologies and careful planning can ‘fix the live stream’
High-profile glitches with live-streamed sports suggest that, even if the net isn’t broken, it is in need of a fix. The Mayweather-McGregor bout in August was targeted by 239 pirated streams (identified by security specialist Irdeto), but many viewers turned to illegal sites when the official pay-per-view stream failed to keep pace with demand. SVOD sports aggregator DAZN had to manage the ire of NFL fans in Canada when audio and video problems dogged the September launch of its service in the territory. Twitter’s live stream of 10 NFL matches last season was considered reasonably successful in terms of quality, but suffered badly from negative reaction to Twitter’s own integrated social feed, which often ran many seconds ahead of the video.
“Last Super Bowl, 110 million people tuned in to watch the Patriots’ comeback win,” says Conrad Clemson, SVP & GM of service provider platforms at Cisco. “It was watched live online by 2 million people. Two million out of 110 million—that’s small. Why? Because the Super Bowl experience is simply better on satellite or cable, delivered in HD.”
He relayed his experience trying to stream Boston Red Sox games this summer while travelling abroad. “Sometimes the video wouldn’t start. Other times it would pause. And sometimes the resolution would be so low you couldn’t tell what was happening. Consumers have come to expect a pretty high standard for video experiences.”
Cisco plans to fix this with Cisco Media Blueprint, presented as a set of IP-based infrastructure and software solutions that help media companies automate much of the workflow in the cloud.
Customers on board with this approach include outside broadcast provider Arena TV (which has based its IP-only UHD trucks around a Cisco IP switch); BBC Wales (which is building the corporation’s first all-IP broadcast hub in Cardiff around Cisco fabric); Sky New Zealand; CANAL+; Fox Networks Engineering & Operations; and NBCU.
This makes the problems with live streaming sound simple, but in fact they are anything but…
Live stream complexity
“The internet does not have enough capacity to stream (unicast) to everyone,” says Charlie Kraus, senior product marketing manager at Limelight Networks. “It grows in capacity every year, compression is improving every year – but so does traffic. Most CDNs, including ourselves, work with optimisation strategies to provide the best we can do with that amount of bandwidth available.”
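To put Kraus’s point about unicast into rough numbers: a broadcast transmitter sends one signal however many people tune in, whereas a unicast stream is delivered separately to every viewer, so the aggregate load grows linearly with the audience. The sketch below uses hypothetical round figures (not numbers from Limelight) purely to illustrate the scale involved.

```python
# Illustrative only: why unicast delivery strains total internet capacity.
# The viewer counts and the 5 Mbps bitrate are hypothetical round numbers.

def aggregate_unicast_load_tbps(concurrent_viewers: int, bitrate_mbps: float) -> float:
    """Total delivery load in terabits per second (Tbps)."""
    return concurrent_viewers * bitrate_mbps / 1_000_000  # Mbps -> Tbps

# Broadcast sends one signal regardless of audience size; unicast does not.
for viewers in (1_000_000, 10_000_000, 100_000_000):
    load = aggregate_unicast_load_tbps(viewers, 5)
    print(f"{viewers:>11,} viewers at 5 Mbps -> {load:,.0f} Tbps of unicast traffic")
```

At 100 million concurrent viewers (a Super Bowl-sized audience) and 5Mbps per stream, the aggregate comes to roughly 500Tbps, far beyond the peak capacity any single CDN reports today.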
Internet delivery is subject to many points of contention, including CDN capacity, congestion on the path to the ISP peering point, a lack of quality of service to the end device, Wi-Fi connectivity in the home and, on mobile networks, occasional poor connectivity caused by either poor coverage or too many users on the network.
Live use cases raise additional issues, most commonly that resources in the network are simply not sufficient. A major event such as the World Cup or the Olympics has to be planned a year ahead by content delivery networks (CDNs).
“For mass events like a local basketball game in the US, the network sometimes collapses as the consumption is unbalanced,” says Thierry Fautier, Harmonic’s VP of video strategy.
It varies between countries, but it’s becoming increasingly difficult to categorise typical bottlenecks and day-to-day limitations when it comes to live streaming.
CDN Akamai still sees the majority of last-mile networks (from the exchange into the home) running contention ratios. “It means that, depending on the volume and scale, you will always hit a bottleneck there,” explains James Taylor, director of media services EMEA. “However, thresholds vary by country. The UK, for example, is able to service huge multi-terabit events with no systemic issue, whereas in other EU markets, such as Italy, 1-2Tbps is a lot for the local infrastructure to handle.”
The other prevalent area for bottlenecks lies at the point at which content is ingested, wherever it is hosted. Explains Taylor: “You get efficiencies when a single piece of content or live event is streamed, but there are incremental loads put on the origin in a non-linear fashion, and if that isn’t actively designed and thought through, then issues on scale events will occur, impacting all users as a result.”
On top of that, users now expect a ‘broadcast-like’ experience when watching streamed content. Premium sports content attracts large audiences, which stresses the distribution network. The content is usually detailed and highly dynamic, requiring HD delivered at a high frame rate (50p or 60p) and therefore higher bitrates, at least when watching on large-screen TVs. Higher bitrates add to the network load.
When it comes to video fidelity, Akamai research found that viewers watching content at 5Mbps showed 10.4% higher emotional engagement than viewers watching the same content at 1.6Mbps.
Latency challenges
Just as important as overcoming these issues is the latency that streaming adds. As Peter Maag, CMO at Haivision, puts it: “Live sports depends on the immediacy of the programme to assure a contextual experience with all information delivery (to second screen, social media, etc.). If the programme stream is delayed by more than 5-10 seconds end-to-end, the experience falls apart.”
Harmonic reckons an end-to-end OTT distribution system typically runs between 20 and 70 seconds behind broadcast. Fautier points to new streaming formats such as CMAF, which he says will allow the industry to get much closer to the 3-5 second delay typically experienced in a broadcast chain (Harmonic demonstrated this with Akamai at IBC).
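A rough latency budget shows where those tens of seconds come from, and why chunked formats such as CMAF help. The component figures below (segment length, buffer depth, encode and CDN delays) are assumptions chosen for illustration; only the broad conclusion mirrors the numbers Harmonic cites.

```python
# Rough, illustrative latency budget for live OTT delivery.
# All component figures are assumptions for illustration.

def segmented_latency_s(segment_s, buffered_segments, encode_s, cdn_s):
    """Classic HLS/DASH: the player waits for whole segments and buffers
    several of them before starting playback."""
    return encode_s + cdn_s + segment_s * buffered_segments

def chunked_latency_s(chunk_s, buffered_chunks, encode_s, cdn_s):
    """CMAF-style chunked transfer: sub-segment chunks flow through the
    chain as they are produced, so the buffer is measured in chunks."""
    return encode_s + cdn_s + chunk_s * buffered_chunks

print(f"6s segments, 3 buffered:  ~{segmented_latency_s(6, 3, 4, 2):.0f} s behind live")
print(f"0.5s CMAF chunks, 2 held: ~{chunked_latency_s(0.5, 2, 2, 1):.0f} s behind live")
```

With six-second segments and a three-segment buffer the delay lands in the mid-20s of seconds; shrink the unit of delivery to half-second chunks and the same arithmetic drops to around four seconds, in line with the broadcast-like target Fautier describes.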
“The gap is definitely closing between streaming and broadcast capabilities,” says Chris Michaels, communications director at Wowza Media Systems. “Online streams are increasing in quality and stability. But scalability will remain a challenge while it comes at the cost of latency, and sports fans won’t accept that for long.”
The lawsuit against Showtime takes the issue a step further. Zack Bartel of Oregon paid $99.99 to watch the Mayweather-McGregor fight only to find his expectations dashed, as this extract from the suit filed against the cable company outlines:
“On August 26, 2017 at 6pm PST, like thousands of other fight fans across the county, plaintiff turned on defendant’s app in anticipation to watch the Mayweather fight. To his extreme disappointment and frustration, plaintiff (and thousands of other consumers) quickly learned that defendant’s system was defective and unable to stream the Mayweather fight in HD as defendant had advertised. Instead of being a ‘witness to history’ as defendant had promised, the only thing plaintiff witnessed was grainy video, error screens, buffer events, and stalls.”
“This demonstrates that it’s not just about providing a good video experience – it’s about viewers missing out on a major bonding event that millions of people had been eagerly awaiting,” says Stuart Newton, VP strategy & business development, Telestream. “Paying a lot of money for content that doesn’t arrive on time and in good shape obviously doesn’t sit well with viewers, but this lawsuit takes dissatisfaction to a whole new level. Content providers will now have to invest more in quality assurance and risk mitigation if they want to continue moving premium content over the top.”
Figuring out where a video feed is going wrong is highly complex, and takes a combination of passive monitoring (watching ‘on the wire’) and active testing of the availability of streams in different regions.
“Ideally, we want to provide an early warning system for video problems such as bad picture quality, accessibility errors, buffering and outages,” explains Newton. “This includes testing immediately after the content is produced and packaged, and then periodically at multiple geographic locations after it leaves the content delivery networks (in data centres, on premise or in the cloud). Sampled coverage testing at the edge of access networks – whether broadband cable, Wi-Fi or cellular – must also be part of it.”
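A minimal version of the active testing Newton describes can be as simple as periodically fetching a stream’s manifest from several vantage points and recording whether it arrives, how quickly, and whether it looks like a valid playlist. The sketch below assumes a hypothetical HLS manifest URL and is not based on Telestream’s tooling.

```python
# Minimal sketch of active availability testing for a live HLS stream.
# The manifest URL is hypothetical; a real monitor would also parse the
# playlist, fetch media segments and measure picture quality.
import time
import urllib.request

MANIFEST_URL = "https://example.com/live/event/master.m3u8"  # hypothetical

def probe(url: str, timeout: float = 5.0) -> dict:
    """Fetch the manifest once; record status, validity and response time."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read()
            return {"ok": resp.status == 200 and body.startswith(b"#EXTM3U"),
                    "status": resp.status,
                    "latency_s": round(time.monotonic() - start, 3)}
    except Exception as exc:
        return {"ok": False, "error": str(exc),
                "latency_s": round(time.monotonic() - start, 3)}

if __name__ == "__main__":
    # Sample periodically; in practice this would run from many geographic locations.
    for _ in range(3):
        print(probe(MANIFEST_URL))
        time.sleep(10)
```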
Monitoring, accountability, scalability
“The trigger source could be the existing monitoring system, artificial intelligence (AI) from cloud-based client analytics, or a trigger from equipment or a virtual function in the network itself,” says Newton. “Whatever the trigger mechanism, the ability to diagnose root cause and analyse impact severity in near-real time will be a major factor in not only detecting, but dynamically repairing video delivery problems in future. This will allow better scaling of the systems, and at the same time provide more intelligence for targeting and reducing latency across the networks.”
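As a concrete, if simplified, illustration of the kind of trigger Newton has in mind, client analytics beacons can be aggregated per region over a short window and an alert raised when the rebuffering ratio crosses a threshold. The beacon format and thresholds below are hypothetical.

```python
# Hypothetical sketch of a rule-based trigger over client analytics beacons.
# Real systems use richer signals (and increasingly AI), but the shape is the
# same: aggregate per region over a window and alert when a threshold trips.
from collections import defaultdict

REBUFFER_RATIO_THRESHOLD = 0.02   # assumed: >2% of watch time spent buffering
MIN_SESSIONS = 500                # assumed: ignore regions with tiny samples

def regional_alerts(beacons):
    """beacons: iterable of dicts like
    {"region": "UK", "watch_s": 120.0, "buffer_s": 1.5}"""
    watch = defaultdict(float)
    buffered = defaultdict(float)
    sessions = defaultdict(int)
    for b in beacons:
        region = b["region"]
        watch[region] += b["watch_s"]
        buffered[region] += b["buffer_s"]
        sessions[region] += 1
    alerts = []
    for region, total_watch in watch.items():
        ratio = buffered[region] / max(total_watch, 1e-9)
        if sessions[region] >= MIN_SESSIONS and ratio > REBUFFER_RATIO_THRESHOLD:
            alerts.append({"region": region, "rebuffer_ratio": round(ratio, 4),
                           "sessions": sessions[region]})
    return alerts
```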
Is it then possible, or even desirable, to pinpoint the exact point of failure during a particular live stream, and therefore for the rights holder to hold the responsible vendor or service partner to account?
“It is possible with the right due diligence,” says Taylor. “Over time it will likely become mandatory for a vendor or service provider to be held to account. The challenge is that it’s not a like-for-like comparison between traditional TV and online. Today, the inability to measure a user’s quality at the point of consumption for TV distribution in real-time means the focus is on availability of a channel or specific programme. OTT also has multiple third parties and technologies involved that are very interdependent, resulting in a much more complex problem to solve.”
Increasingly, the CDN is also finding that quality is subjective, with social platforms and direct feedback from viewers becoming a growing source of insight.
“Being able to scrape negative Tweets and feedback from a customer’s social feed and parse out the insights can enable issues to be flagged as they arise,” says Taylor. “Social media has great potential as an early warning system for poor streaming quality.”
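As a simplified illustration of that idea, a monitor could scan already-collected posts for streaming-complaint keywords and flag any minute in which the count spikes. The keyword list and threshold below are hypothetical, and no real platform API is involved.

```python
# Illustrative keyword-based early-warning sketch over collected social posts.
# Keywords and threshold are hypothetical assumptions; no platform API is called.
from collections import Counter

COMPLAINT_TERMS = ("buffering", "frozen", "lagging", "error screen", "won't load")
SPIKE_THRESHOLD = 50  # assumed: complaints per minute that warrant a flag

def complaint_spikes(posts):
    """posts: iterable of dicts like {"minute": "21:04", "text": "stream keeps buffering"}"""
    per_minute = Counter()
    for post in posts:
        text = post["text"].lower()
        if any(term in text for term in COMPLAINT_TERMS):
            per_minute[post["minute"]] += 1
    return {minute: n for minute, n in per_minute.items() if n >= SPIKE_THRESHOLD}
```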
Recently, Akamai’s platform peaked at 60Tbps, the equivalent of 20 million concurrent 3Mbps streams. During the 2016 Olympics it delivered 3.3 billion streaming minutes to over 100 million unique users. In the scheme of total TV viewing this is still quite small next to the hundreds of millions of people watching broadcast television around the world at any moment, but the internet as a medium for video distribution has shown it can scale.
Newton stresses: “If the industry can work together to enable more transparency and interaction across the video delivery chain, we will be able to avoid, or at least rapidly mitigate, premium event problems for future viewers.”