Sports Audio Summit 2022: Project Stryker on supporting regional esports with ‘follow the sun’ production in the cloud

Riot Games’ lead engineer and guru of all things audio, Alex Rybalko, on stage at the Sports Audio Summit 2022 with SVG Europe editor and head of SVG Audio, Heather McLean

There is just no stopping the rise of esports. It is a no-brainer: massive amounts of fun, inclusive across borders and completely accessible.

Opening SVG Europe Audio’s Sports Audio Summit 2022, held in London on 7 December, Riot Games’ Alex Rybalko gave an eager audience an insight into the company’s latest innovative move: Project Stryker.

As its lead engineer, Rybalko talked about Project Stryker’s ‘follow the sun’ production model and how it supports gamers, viewers and staff. He spoke about how Stryker supports Riot’s regional teams in creating better content despite the complexity of the audio setup, and how it leverages the cloud to do so cost-effectively.

According to German consumer data company Statista, the worldwide esports audience reached 532 million people in 2022, and by 2025 it is expected to exceed 640 million. For perspective, that is more people than live in the EU and the UK combined (roughly 447 million and 67 million respectively).

Broadcast almost exclusively on over-the-top (OTT) channels, esports trailblazers like Riot Games are not only picking up the mantle but reinventing it.

Long time in the game

Riot has been in the game a long time and already has a history of challenging what is possible, such as employing remote production long before the model became established.

In the summer of 2022, Riot Games officially kicked off Project Stryker with the launch of its first Remote Broadcast Centre (RBC) in a 50,000 square foot former nightclub in Dublin. Now serving as a central broadcasting hub for both regional and global live esports across all Riot’s titles, Project Stryker will eventually orchestrate media flows between three global RBCs.

With support for League of Legends, Valorant and Wild Rift, the RBCs are sited in time zones eight hours apart, allowing production to scale and providing 24/7 facilities for any event, wherever it is. The company started building the second, located in Seattle, in the fourth quarter of 2022.
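The scheduling idea behind ‘follow the sun’ is straightforward to sketch. The snippet below is a minimal Python illustration with assumed eight-hour shift windows and a placeholder for the unannounced third hub; it is not Riot’s actual tooling.

```python
from datetime import datetime, timezone

# Hypothetical shift windows in UTC, eight hours apart. Only Dublin and
# Seattle are named here; the third hub is a placeholder, and the exact
# window boundaries are assumptions for illustration.
HUBS = [
    ("Dublin RBC", range(8, 16)),
    ("Seattle RBC", range(16, 24)),
    ("Third RBC (location TBD)", range(0, 8)),
]

def on_shift(now_utc: datetime) -> str:
    """Return the hub whose assumed eight-hour window covers the given UTC hour."""
    return next(name for name, window in HUBS if now_utc.hour in window)

print(on_shift(datetime.now(timezone.utc)))
```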

On stage at the Sports Audio Summit 2022, Rybalko said: “We’ve been doing big esports using remote production for over a decade, so it’s nothing new, but it was only ever with League of Legends. That game has been at the forefront of esports for that entire time, but now we’ve added more games and we need to be able to scale.

“We already have existing workflows and teams in place all over the world which take care of local esports production, but to consolidate all of it at capacity we created Project Stryker. It’s a ‘follow the sun’ model, with three remote hubs located eight hours apart to provide 24-hour coverage.

“Dublin came online in the summer and we’ve already done a couple of big global shows. The idea is not to replace what we already do – and do well – but to add more capacity and enable our regions.”

At the press launch of Riot Games’ Project Stryker in Dublin, July 2022. [Left to right]: Trevor Henry, LEC shoutcaster; Scott Adametz, director of infrastructure engineering; Allyson Gormley, general manager Project Stryker Dublin; and John Needham, president of esports, Riot Games

Creating a direct riot

The company uses its own Riot Direct service to transport all the feeds. Riot Direct is a super-low-latency transport network which has been in place since 2014, originally created to improve the player experience so that Riot did not have to rely on third-party ISP routing.

“Riot Direct is our own global ISP,” added Rybalko. “The capacity was already there so it was a natural fit for Project Stryker; all our remote production is already powered by that network.

“The interface between our data centres and the cloud is straightforward. Riot Direct connects straight into AWS… there is definitely a benefit to being one of AWS’s largest customers! A lot of that stuff already exists by default, and the broadcast side of things is a natural fit on top of the systems we already have.

“A lot of what we do is across streaming platforms like Twitch and YouTube; we distribute to these from both private and public cloud. We use the JPEG XS compression format, with SMPTE ST 2022-7 dual-path redundancy for resilience, and connect to AWS using the AWS Elemental MediaConnect transport service and AWS Elemental MediaLive for encoding. We use Nimble servers for distribution to our regional partners as well, and we encode and distribute from there.”
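For readers unfamiliar with ST 2022-7, the resilience comes from sending identical copies of a stream over two diverse network paths and letting the receiver keep whichever copy of each packet arrives, so a loss on one path is masked by the other. The Python sketch below is a simplified, offline illustration of that merge logic, not Riot’s or AWS’s implementation.

```python
def seamless_merge(path_a, path_b):
    """Merge two redundant packet streams, ST 2022-7 style.

    path_a and path_b are iterables of (sequence_number, payload) tuples
    carrying identical content over diverse paths. The first copy of each
    sequence number wins; duplicates are dropped. Real receivers do this
    per packet in real time with a small buffer -- this offline version
    only illustrates the idea.
    """
    seen = set()
    merged = []
    for seq, payload in sorted(list(path_a) + list(path_b)):
        if seq not in seen:
            seen.add(seq)
            merged.append((seq, payload))
    return merged

# Path A loses packet 2, path B loses packet 4; the merged stream is complete.
a = [(1, "p1"), (3, "p3"), (4, "p4")]
b = [(1, "p1"), (2, "p2"), (3, "p3")]
print(seamless_merge(a, b))  # [(1, 'p1'), (2, 'p2'), (3, 'p3'), (4, 'p4')]
```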

All audio is uncompressed ST 2110-30. For ease of use, Riot usually takes in two to four streams of MADI, unicast over the WAN, and runs native ST 2110-30 within its facilities.
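As a back-of-the-envelope illustration (assumed figures, not from the talk): a MADI stream carries up to 64 channels, and uncompressed 48 kHz / 24-bit audio as carried by ST 2110-30 needs roughly 1.15 Mbps of payload per channel, so the channel counts above translate into WAN budgets along these lines.

```python
# Back-of-the-envelope audio bandwidth estimate (assumed figures, not from the talk).
MADI_CHANNELS = 64       # channels per MADI stream
SAMPLE_RATE = 48_000     # Hz, typical for ST 2110-30 / AES67
BYTES_PER_SAMPLE = 3     # 24-bit linear PCM (L24)

for streams in (2, 4):
    channels = streams * MADI_CHANNELS
    payload_mbps = channels * SAMPLE_RATE * BYTES_PER_SAMPLE * 8 / 1e6
    print(f"{streams} MADI streams = {channels} channels "
          f"~ {payload_mbps:.0f} Mbps of audio payload before RTP/IP overhead")
```

That works out to roughly 147 Mbps for 128 channels and 295 Mbps for 256, comfortably within the headroom of a dedicated network like Riot Direct.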

This mature combination of technologies enables Riot Games to support all its regions with additional functionality, which Rybalko said levels the playing field and creates a better product.

“Our EMEA region broadcasts in up to 17 languages, and all of our language partners receive a world feed and the tools they require, but often regions feel underserved because they don’t have access to the top-of-the-line tools that some of the bigger facilities have,” said Rybalko. “In a connected world they can access all this tech. The production hub is a remote broadcast centre so by definition everything we do is remote, which means all the equipment is in a data centre and not at the facility.

“By centralising all our technology and adding it to the cloud, we can provide these tools across the board; we can raise the baseline of production to a higher standard so that everyone draws from the same pool and all our content is better. When we have all three centres up and running it will mean that people can sleep but the equipment doesn’t have to.”

Alex Rybalko from Riot Games talks audio at the Sports Audio Summit 2022

Multi-language content

In addition to providing more access to better tools, Project Stryker’s remote capabilities also help Riot’s regional teams spin up multi-language content. Audio workflows for esports are notoriously complex, and Riot Games is no different. But for language feeds, like those 17 EMEA languages, it is a lot easier to add scale in the cloud because it does not require that core complexity.
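Conceptually, scaling out language feeds in the cloud amounts to stamping out one output per language against the same world feed. The sketch below builds such a list of hypothetical output definitions; the names, fields, paths and language codes are illustrative assumptions, not Riot’s actual configuration.

```python
# Hypothetical per-language feed definitions; a subset of the 17 EMEA languages.
LANGUAGES = ["en", "de", "fr", "es", "it", "pl", "tr"]

def build_language_outputs(event_id, languages):
    """Pair the shared world feed video with each language's commentary audio.

    In practice each entry would map onto a cloud encoder or output that can
    be spun up on demand -- an assumption made here purely for illustration.
    """
    return [
        {
            "name": f"{event_id}-worldfeed-{lang}",
            "video_source": f"{event_id}/worldfeed/video",
            "audio_source": f"{event_id}/commentary/{lang}",
            "destination": f"rtmp://distribution.example/{event_id}/{lang}",
        }
        for lang in languages
    ]

for output in build_language_outputs("demo-event", LANGUAGES):
    print(output["name"], "->", output["destination"])
```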

“From an audio perspective it’s hard to take in the whole scope. The main challenge is the sheer complexity. We have a minimum of 128 individual audio channels and we manage everything from player comms and in-ear mixes for players’ headsets, to in-venue production for the live audience. It’s all part of the show and we use all of it in live production to help tell better stories,” continued Rybalko.

“We also have complex augmented reality (AR) workflows with LED screens and extended reality (XR) stage activations. We need to manage audio cues and delay to be in time with things that happen in real life as well as remotely, and we are looking at individual delays for 40 to 60 channels at any one time. Wireless cameras and AR add different delays; we have player POV, in-game action and out-of-game action; it all must be managed.

“We also have an onsite presence at all the events, usually a third party partner who provides the equipment for things like signal acquisition. We transport all the audio channels to the Stryker facilities across 2110-30 IP. We have no baseband whatsoever in any of our facilities, so once it is in the ecosystem it can be distributed globally.

“All our shows are combinations of on-prem and cloud systems, for the main feeds in-house and for language partners, and all our distribution is in the cloud, both our own private cloud and public cloud. There’s a lot of scale there.”
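The delay management Rybalko describes boils down to padding every source up to the latency of the slowest path so that everything lands at the mixer in sync. A minimal sketch of that alignment, using made-up latency figures:

```python
# Illustrative path latencies in milliseconds (made-up figures, not measurements).
SOURCE_LATENCY_MS = {
    "player_comms": 20,
    "in_game_audio": 35,
    "wireless_camera": 90,
    "ar_render": 120,
}

def alignment_delays(latencies_ms):
    """Delay each source by (slowest path - its own latency) so that all
    sources arrive at the mix point at the same moment."""
    slowest = max(latencies_ms.values())
    return {name: slowest - latency for name, latency in latencies_ms.items()}

for name, delay in alignment_delays(SOURCE_LATENCY_MS).items():
    print(f"{name}: add {delay} ms of delay")
```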

Transitional period to centralised production

Project Stryker is already breaking new ground, but Riot Games is still in a transitional period between its traditional remote broadcast model and Stryker’s fully centralised production model, and the company is still working out the best way to develop. There is more to be done, and that might be in the cloud, on-prem, or a combination of both.

Rybalko stated: “The number one goal is always to add value for our viewers and our players. Whether that is in the cloud, hybrid or fully on-prem doesn’t matter. We’re adding a lot of cloud components, such as all our distribution to CDNs and our language partners, but doing things in the cloud for the cloud’s sake is not always productive.

“It should always be where it makes sense. Adding scale for 17 language feeds makes sense, because we can spin something up in AWS and the workflows are already there, but FPGA hardware isn’t going away anytime soon. Some things are not yet possible in the cloud at the scale we work at.

“We always try to push the limits with every show we do, and we always try to introduce new components. It means that it will never get easier, but that’s really positive because it means we are coming up with innovative ways of looking at production. Our workflows are very non-traditional, but that’s what makes them cool.”

Rybalko concluded: “On-demand production and scalable production as an additive component to an already complex show is great, and the ability to spin up regional feeds or provide regional tools in the cloud is something I see continuing to develop.”

 
