Sports OTT Forum: Streaming protocols adapt to era of constant change
Will CMAF bring harmony to a multi-stream, multiprotocol world?
What’s next in streaming protocols? The alphabet soup of CMAF, HLS, DASH, and more was the focus of a discussion at last month’s SVG USA Sports OTT Forum.
David McLary, vice president of video technology at NBC Sports Group Digital, laid out the challenge: offering a video platform that can serve a number of needs, from an authenticated simulcast of a broadcast stream to VOD content or, as in the case of NBC Sports Gold, a direct-to-consumer subscription service.
“We’ve seen a lot of growth in that over the last 18 months or so,” he said. “You can buy specific packages with things like PGA TOUR, IndyCar, and there’s a bunch of new packages that we launched recently. We also have a Playmaker business, which is kind of a white-label B2B service where we partner with folks like F1. We have the same video platform underneath all three of those business lines, and we need a good foundation so we can do the cool stuff over-the-top.”
Navigating hundreds of standards is difficult, and that is one of the reasons NBCUniversal is looking to CMAF alongside other standards and protocols. McLary said CMAF offers a middle path that can address latency, a common stream format, and platform support. When delivering an HLS stream to a DASH device, for example, tradeoffs have to be made; CMAF moves the industry toward delivering a single stream.
In a perfect world, there would be a way to offer one stream that can serve different types of devices, video players, and platforms. That is where CMAF comes in, and some big companies — such as Viacom, Disney, and Amazon — are already embracing it (AWS Elemental has implemented CMAF support across on-premises and cloud-based systems, and Akamai also supports low-latency streaming using CMAF). But, currently, the HLS (Apple’s HTTP Live Streaming, its approach to adaptive-bitrate delivery) and MPEG-DASH (Dynamic Adaptive Streaming over HTTP) protocols still need to be used, and even RTMP (Real-Time Messaging Protocol) must be supported for Flash-based clients.
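The single-stream idea is visible in the manifests themselves. In the purely illustrative excerpts below (all filenames and durations are hypothetical), an HLS media playlist and a DASH MPD fragment both point at the same CMAF fragmented-MP4 segments, so the media is encoded, packaged, and cached only once:

```
# HLS media playlist (video.m3u8) referencing CMAF segments.
# EXT-X-MAP for fragmented MP4 requires protocol version 6 or later.
#EXTM3U
#EXT-X-VERSION:6
#EXT-X-TARGETDURATION:4
#EXT-X-MAP:URI="init.cmfi"
#EXTINF:4.0,
seg_0001.cmfv
#EXTINF:4.0,
seg_0002.cmfv

<!-- DASH MPD excerpt referencing the same segments -->
<SegmentTemplate initialization="init.cmfi"
                 media="seg_$Number%04d$.cmfv"
                 duration="4" startNumber="1"/>
```

Both players consume identical `.cmfv` media segments; only the thin manifest layer differs per protocol.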
“One day, we will have a single stream that we can deliver to all devices,” McLary predicted. “As basic an idea as that is, we are so far from it right now. I would love to be in the broadcast world, where I can just shoot my stream out to an IRD everyone gets.”
James Wilson, director, engineering, Aspera platform, IBM Aspera, said that Aspera has always erred on the side of value over cost.
“What we look at doing is providing solutions that allow choice,” he said. “If we can provide an abstract interface for TCP or HTTP that can accelerate whatever the technology du jour is, then that’s exactly what we want to do. We want to be in a place where we can be easily integrated wherever. That’s how we make those decisions.”
Wilson added that CMAF is about reducing the overall amount of data that has to traverse the network to optimise the available resources. “To me, CMAF is about encode once and then use in many formats,” he said, adding, “There are a lot of disruptive technologies pushing the next level, the next generation of that. There are companies talking about content fabric and identifying things at a segment level across the web. That is something very interesting to watch out for.”
A constant thought, McLary added, is how to deal with scale and to find the best solution for the demand.
“This year, we don’t have a Super Bowl, we don’t have an Olympics, but we’re going to end up streaming tens of thousands of events,” he explained. “We think about the scale of the delivery platforms we have to support and the number of customers. It goes back to reliability vs. latency, performance vs. feature set. We end up weighing all those things, but I think scale is always at the forefront of our mind when we’re making those decisions.”
Addressing standards, Fritz Seifts, principal architect, Limelight Networks, said that the decision begins with figuring out what you are building for: is it the future or to optimise operations today? The push and pull of what is a standard, what will become a standard, and how that impacts decision-making is a constant battle. Standards are evolving and consolidating, and transit protocols are trying to overcome an internet that was never designed to do what it is doing today.
“It is a completely hostile environment in which we’ve been trying to make this thing work,” he explained. “The inefficiencies at scale with something like TCP are pretty evident at this point.”
Wilson said that Aspera tackled TCP bottlenecks in its early days and agreed that those working on networking computers via TCP in the 1970s could not have imagined where things would be today.
“The thing that we’re seeing is that data movement over time is constantly changing,” he added. “The way that we’re moving data and the way that we’re delivering data two years from now is going to be a completely different traffic pattern than today.”
Seifts noted that the tactical and strategic sides of the operation inform each other.
“We’re also informed by our customers as well, saying we’d like you to support this, we need you to support this, can you support this, so forth and so on,” he said. “There’s the business logistics of customer needs, but, internally for us at the CDN, the formula divider is always, can we do this at scale?
“Sure, we can do it live, and, sure, we could maybe do it for one or two customers. How do we do it from a global standpoint? The amount of investment that we have to make in the R&D aspect alone, so we don’t blow up the CDN and tank every customer, is near astronomical for us. We have to be very tactical in how we apply a lot of our engineering resources, which is why we’re so intent on reducing standards.”
Wilson added that flexibility in architecture is opening up new ways to respond more quickly and efficiently.
“Right now, there’s a huge shift to cloud-based workflows,” he said. “At Aspera, we’re deploying 35 data centres globally for our operations and then looking at how to empower people to use that technology and that global network. Am I now deploying these things in containers? Am I leveraging flexible cloud infrastructure that allows me to change my technology decision? Outlaying for infrastructure 10 years ago meant being stuck with a decision. Now there are new options.”
Wilson noted that blockchain technology might be used to better audit the workflow and more easily find out where to pay attention and solve issues.
“Another piece is integration and being able to integrate a framework with many different DRM schemes,” he said. “Maybe investing in an interface that allows me to be able to, in a modular way, change the DRM scheme, instead of so deeply integrating it with my appliance or my equipment, is the direction that we all need to be talking about.”
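Wilson’s point about a modular DRM interface can be sketched in code. The sketch below is hypothetical — `DrmScheme`, the class names, and the license URLs are illustrative inventions, not any vendor’s actual API — but the DRM system IDs shown are the published Widevine and PlayReady identifiers:

```python
from abc import ABC, abstractmethod

class DrmScheme(ABC):
    """Pluggable content-protection signaling (hypothetical interface)."""
    @abstractmethod
    def system_id(self) -> str: ...
    @abstractmethod
    def license_url(self, asset_id: str) -> str: ...

class WidevineSignaling(DrmScheme):
    def system_id(self) -> str:
        # Published Widevine DRM system ID
        return "edef8ba9-79d6-4ace-a3c8-27dcd51d21ed"
    def license_url(self, asset_id: str) -> str:
        return f"https://license.example.com/widevine/{asset_id}"  # hypothetical URL

class PlayReadySignaling(DrmScheme):
    def system_id(self) -> str:
        # Published PlayReady DRM system ID
        return "9a04f079-9840-4286-ab92-e65be0885f95"
    def license_url(self, asset_id: str) -> str:
        return f"https://license.example.com/playready/{asset_id}"  # hypothetical URL

def package(asset_id: str, drm: DrmScheme) -> dict:
    """The packager depends only on the interface, so swapping DRM
    schemes is a one-line change rather than a re-integration."""
    return {"asset": asset_id,
            "systemId": drm.system_id(),
            "licenseUrl": drm.license_url(asset_id)}
```

Calling `package("match-123", WidevineSignaling())` versus `package("match-123", PlayReadySignaling())` changes only the injected scheme — the packager itself never changes, which is the decoupling Wilson describes.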
Wilson explained how Aspera’s FASP technology became bidirectional so that it can replace TCP. “We’re looking at being able to deploy HTTP gateways that drive any streaming protocol over any means and distance and integrate seamlessly with any endpoint or piece of equipment that can actually speak those streaming protocols.”
Latency: How much at what cost?
As more and more consumers tune in to live sports coverage via an OTT service, latency becomes a bigger issue. There is the desire to be as close to live as possible (and there is also the potential for streaming to be a big part of gambling directly from a smartphone).
“There’s a lot of different definitions of latency based on the situation that you’re in, the context that you’re watching,” said McLary. “The latency for a digital-only exclusive event probably is different than latency for a real-time gambling app. And, since all of the technologies that we have right now that address latency are very different and have very different underpinnings with different requirements, it can go all through multiple partners, multiple delivery systems. It’s important to identify the context that you want to attack latency in, so that you can apply the right solution. I’ll have a different latency target for a gambling app than I’ll have for something that needs to match broadcast latency.”
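McLary’s context-dependent targets can be made concrete with a back-of-the-envelope model. In segmented HTTP streaming, a player typically buffers around three segments, so glass-to-glass latency is roughly a fixed encode/packaging/CDN overhead plus three segment durations. All figures in this sketch are illustrative rules of thumb, not measurements from any of the panelists’ platforms:

```python
def glass_to_glass_latency(segment_s: float,
                           buffered_segments: int = 3,
                           overhead_s: float = 4.0) -> float:
    """Rough glass-to-glass latency for segmented HTTP streaming:
    fixed encode/packaging/CDN overhead plus the player's buffer.
    Defaults are illustrative assumptions, not measured values."""
    return overhead_s + buffered_segments * segment_s

# Context-specific latency targets (hypothetical numbers):
targets_s = {
    "real-time gambling app": 5.0,    # needs near-real-time delivery
    "match broadcast latency": 10.0,  # keep pace with the TV feed
    "digital-only exclusive": 30.0,   # looser target is acceptable
}

for context, target in targets_s.items():
    # Shorter segments (or sub-segment CMAF chunks) cut latency.
    for seg in (6.0, 2.0, 0.5):
        if glass_to_glass_latency(seg) <= target:
            print(f"{context}: ~{seg}s segments meet the {target:.0f}s target")
            break
    else:
        print(f"{context}: needs chunked transfer finer than 0.5s segments")
```

The arithmetic shows why one latency solution cannot fit all contexts: a 30-second target is comfortable with conventional 6-second segments, while a gambling-style target forces chunked, sub-segment delivery.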
Seifts said that solving latency comes down to the quality of the experience one wants to deliver. “If you’re watching a game and you miss a goal but everybody’s cheering for the goal [on a Twitter feed], that’s a low-quality experience that is the outcome of high latency.”
When a stream is ingested in one location and delivered across the globe, latency depends on where the content is being consumed. An event delivered globally via streaming (and not as a simulcast of a live broadcast stream) may tolerate longer latency, whereas others may need to keep the stream timed as closely as possible to the broadcast feed.
“What are you trying to achieve? What does latency mean for you?” said Seifts. “It’s ultimately going to come down to what the customer is doing, what they’re trying to deliver, and on what scale. And the interesting challenge for a CDN is that we have to make it all fit for everybody. When I ask, ‘What are your latency requirements? Do you even know your latency requirements?’ it is a business-level decision that the technology can sort of inform and help enforce.”
An eye on security
And then there is the need for security and its impact on things like browser integration. “Browsers are constantly working against vulnerabilities that would allow data to leap from the operating-system level through the browser and out any doors,” said Wilson.
“I have had concerns about WebRTC and the day that browsers are just going to turn it off because it’s going to be able to be exploited. But one thing that I see happening is that, by using web extensions, we’ll see a future where we can actually just build extension frameworks for the different protocols that solve these challenges. You would be able to extend your own browser in a very simple way, as Aspera has been doing for a while, but without the use of a desktop client.”
Predictability is ultimately the goal, Wilson said. For Aspera, that means defining the threshold for delivering a file and then applying the appropriate technology.
All the steps taken to create predictability, however, can go out the window when they hit the unpredictable world of the open internet.
“Guaranteeing that predictability is really what the challenge is,” said McLary. “You end up with a tradeoff between getting it there as fast as you can or [its being] stable and reliable. And that tradeoff is what a lot of these low-latency proposals right now are trying to deal with, trying to figure out where on that spectrum we want to be.”