Riot Games unveils world-leading Project Stryker esports broadcast facility in Dublin

Minister for Housing, Local Government and Heritage Darragh O’Brien (left) with Scott Adametz in the Technical Operations Centre

Riot Games has officially opened the first of three Remote Broadcast Centres (RBCs) making up Project Stryker, Powered by AWS, in the presence of Irish Minister for Housing, Local Government and Heritage Darragh O’Brien.

Three years in development, the 50,000-square-foot Dublin RBC of Project Stryker will serve as a central broadcasting hub for both regional and global live esports across Riot’s trio of esports titles: League of Legends Esports, VALORANT Esports and Wild Rift Esports.

Broadcast feeds from live esports competitions happening around the world can be sent to the Dublin RBC, where content is produced, broadcast and distributed in multiple languages to esports fans. It is equipped with a technical operations centre (TOC), six remote casting insert stages, six production control rooms (PCRs), six audio control rooms, and multiple bullpens for observers (in-game camera operators), graphics, replay and editing, allowing it to broadcast six live events at a time around the globe.

“It seems simple, but to be able to take away the barrier between your facility and the cloud, and not have to sacrifice quality or latency, is a game changer”

“The analogy I use is that it is really, truly, like the NFL adding Major League Baseball,” Riot Games director of infrastructure engineering Scott Adametz told SVG Europe. “The same company producing both. Oh, and also National Hockey League. Or maybe FIFA adding rugby. We had to understand that this was a completely different set of skills that we needed, a different set of capabilities.

“The first principle here is being game-agnostic. It can’t be built to satisfy any one game; it has to be dynamic. When we only had League of Legends, and it was a global event, we didn’t have regional events. We had this nice ebb and flow. But as we added new titles, the regional capacity evaporated as they couldn’t produce different sports.

“So suddenly, they were not an option. We knew this was coming, which is why we said it had to be a centralised facility, built upon the latest and greatest technology, which would potentially last ten years into the future,” said Adametz.

The three RBCs making up Project Stryker, backed by an €18.5 million investment, are strategically located eight hours apart to create a ‘follow-the-sun’ broadcast model to support live esports productions 24/7/365 for the Riot Games Esports Technology Group (ETG). During a live esports event, a small contributor kit onsite sends live feeds back to the RBC through Riot Direct, the global private internet service provider powering every Riot game packet for every player around the world. The feeds are then routed to control rooms for show productions across multiple languages.

View of one of the six production control rooms in the facility

In addition to content production, each RBC will also serve as a centralised storage and shipping location for global competition hardware, prioritizing quality control for products used by esports pros at Riot’s global esports events. Content storage is another element of Project Stryker with each geographic location also housing a data centre and media content vault where content is archived correctly for future discovery and viewing.

The Dublin RBC will generate over 6,300 broadcast production hours annually. The facility, located in a former nightclub in Swords close to Dublin Airport and Dublin Port, is Riot’s second footprint in Dublin, joining its city centre office with 165 employees specializing in business operations, localization and player support. It is expected the new RBC will create a further 120 jobs in Dublin.

Project Stryker Dublin executed its first global event earlier this month with Wild Rift Icons, and is currently supporting VALORANT Masters 2, now underway in Copenhagen. The next RBC is currently in development in Des Moines, Washington (in the greater Seattle area), with an estimated completion date of early 2023. The third RBC will be located in the APAC region and is slated for full operations by Q1 2024.

Cisco, a Riot Games partner since 2020, has been key in helping Riot Esports modernize its infrastructure and expand capacity for new titles and ideas, providing Cisco IP Fabric for Media for the entire network stack. Under a new global partnership with Amazon Web Services (AWS) announced this week, Project Stryker is now ‘Powered by AWS’, with AWS serving as its official Cloud Artificial Intelligence (AI), Cloud Machine Learning (ML), Cloud Deep Learning (DL) and Cloud Services provider.

Riot worked with systems integrator NTC (National Teleconsultants) to create the fully IP-based facility in Dublin. Built on SMPTE ST 2110 standards with heavy use of JPEG XS compression, the Dublin RBC gives Riot the flexibility for future expansion.

“We can have parallel productions — one being done on a frame in a data centre and another being done in a virtual private cloud on an instance – and have identical outputs”

Nevion is providing the solution to orchestrate media flows between the remote venues and the RBC, across Riot’s Riot Direct WAN. This is built around the Virtuoso software-defined media node and VideoIPath orchestration and SDN control software.

The Virtuosos will be deployed in the SMPTE ST 2110-enabled data centres, and in the mobile contribution kits to be taken to event locations as needed. The media nodes will provide several media functions to transport flows across the network, including SDI/SMPTE ST 2110 adaptation, JPEG XS low-latency video compression, MADI processing and transport, and IPME (IP media edge) functionality for LAN-to-WAN hand-off, multicast-to-unicast conversion and flow protection.
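To make the LAN-to-WAN hand-off concrete, the sketch below shows the basic idea behind multicast-to-unicast conversion at the venue edge: packets of a multicast essence flow on the local network are picked up and re-sent as unicast towards the far end. This is a minimal Python illustration of the concept only; the group, port and destination addresses are hypothetical, and Nevion’s actual Virtuoso/IPME implementation adds compression, protection and orchestration on top of it.

```python
# Minimal sketch of multicast-to-unicast conversion at a LAN/WAN edge.
# Addresses and ports are hypothetical; a real media edge would also handle
# JPEG XS compression, FEC/flow protection and orchestration.
import socket
import struct

MCAST_GRP, MCAST_PORT = "239.1.1.10", 5004   # hypothetical multicast essence flow on the venue LAN
WAN_DEST = ("198.51.100.20", 5004)           # hypothetical unicast receiver at the RBC side

# Receive socket joined to the multicast group on the LAN
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", MCAST_PORT))
mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# Plain unicast socket for the WAN leg
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

while True:
    packet, _ = rx.recvfrom(2048)   # one UDP/RTP packet of the flow
    tx.sendto(packet, WAN_DEST)     # forward unchanged as unicast across the WAN
```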

Riot first deployed Nevion Virtuoso with JPEG XS in the remote production of the 2019 League of Legends World Championship Final – one of the world’s first uses of the compression technology. Last year, Riot completed the successful integration of MADI over IP/SMPTE ST 2110 between Los Angeles and Reykjavik during the League of Legends World Championship in Iceland.

Riot Games has chosen Grass Valley K-Frame production switchers for the facility, along with Calrec Artemis and Type R audio consoles, Riedel intercom, TAG VS multiviewer software running on Cisco UCS server hardware in the data centre, EVS XT VIA replay production servers, and BirdDog PTZ cameras for the insert stages. NEP Ireland is Riot Games’ partner providing technical staffing and support at the facility.

“Having mostly lived in Dublin for the last six months in preparation for this launch, I cannot think of another place that has been so supportive,” said Adametz. “It met all the criteria, and then some.

“During the construction we had to make changes, and had to move walls sometimes. And I never once heard – from contractors, construction workers, architects – anyone ever say anything other than, ‘we’ll get it sorted’.

“It has been so collaborative and, honestly, such a refreshing experience through the build. We could not have built this facility, with all the unknowns we faced, in any other place. I’m convinced of that,” said Adametz. “Dublin has been a joy.”

At the press launch (L/R): Trevor Henry, LEC shoutcaster; Scott Adametz, director of infrastructure engineering; Allyson Gormley, general manager Project Stryker Dublin; and John Needham, president of Esports, Riot Games

The opportunity to start over

“The genesis of this idea was VALORANT,” continued Adametz. “This was at a time when we didn’t even have a name for it. But it dawned on us that if this game was even a fraction as successful as League of Legends — which was our entirety of esports at the time — how would we produce another esport? How?

“So, we looked back at the ten years of League of Legends as a game from which an esport grew – and of course esports lend themselves to being broadcast. What would we do differently, if we could start over? What could we use that we learned from ten years of organic growth, and prepare ourselves for the next decade? And out of that thought process came the idea of Project Stryker.

“Instead of building one facility – or burdening our existing regions by adding more production to their limited capacity – we came up with this idea of not burning out one region or one facility. Being able to follow the sun: this was a core first principle. Being able to throw productions between geographies and follow daylight hours, which follows people’s quality of life, but also allowing us to build in redundancy and capacity, where it’s as simple as adding a nightshift to double our capacity.

“So we took the world and carved it into time zones, with eight-hour shifts, and then started a research project to figure out where in the world we should be building. We could have built, for example, in Death Valley, which would have been cheap – but it wouldn’t have had connectivity and a labour pool. We need to staff six simultaneous control rooms. It takes a lot of people to staff six simultaneous events,” he said.
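As a loose illustration of that follow-the-sun principle (the shift boundaries below are hypothetical, not Riot’s actual rota), three sites spaced eight hours apart can cover the UTC day between them:

```python
# Illustrative only: three RBCs eight hours apart, each covering one
# eight-hour window of the UTC day. The mapping of site to window is assumed.
from datetime import datetime, timezone

ROTA = [
    "APAC RBC (00:00-08:00 UTC)",
    "Dublin RBC (08:00-16:00 UTC)",
    "Seattle RBC (16:00-24:00 UTC)",
]

def rbc_on_shift(now=None):
    """Return the RBC whose eight-hour window contains the current UTC hour."""
    hour = (now or datetime.now(timezone.utc)).hour
    return ROTA[hour // 8]

print(rbc_on_shift())
```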

“Through our data and our own research we came up with a list of cities, starting off with 139 cities around the world. We then narrowed that down to the top five in each region – and that pointed us towards Dublin and Seattle.”

Data centre remote production

“All of the equipment in the PCRs is effectively remote control equipment. There’s no actual heavy processing hardware in this building; that’s all in our data centre across town. Everything you see here is being fed across links to and from our data centre, directly to our screens,” said Adametz.

“The data centre, which is located about 30 kilometres from here, is a cage in an interconnection facility. There is no central equipment room and there’s no bunch of servers here. The contribution kits are the only ‘servers’ in this building. Everything runs in the data centre.

“The reason is that this facility is not designed to be 24/7. We want to turn this facility off, we want people to go home and be with their families. Because we’ll have two other facilities to take over. And while this facility is dark and everyone is at home, all that equipment is still available as a resource pool to contribute to other shows from other Stryker facilities around the world.

“So it’s additive. It’s not that if the power went off in this building we would lose access to all the control rooms, switchers, video processing, encoders and decoders: it’s all in the data centre. This [building] is just an extension.

“The contribution kits run on JPEG XS, which is not even a ratified standard. We’re using a way to compress video and send it on a network that – as far as I’m aware – has not been done in production. Or at least on this scale. We were the first to use JPEG XS – in fact, Fergal, you wrote the SVG article – in Paris and the first to send it across the Atlantic in support of a production. That was a test towards building this facility.

“Our League of Legends World Finals in Paris in November 2019 was our last global event before the pandemic changed the world. It’s good to carry on the conversation here. That was us testing for this project, with some of the prerequisites we needed to do, in order to take a bold step towards a different way to produce.”

Scott Adametz in the Dublin TOC: “This facility is not designed to be 24/7. We want to turn this facility off, we want people to go home and be with their families”

AWS Cloud Digital Interface

“We’ve been using Riot Direct since the early days, and it is just a tremendous asset. And it came out of doing what’s best for our game players, and then we expanded it to add video production. But, we knew that in order to have global collaboration across time zones and across vast distances, it had to be fast,” said Adametz.

“Traditionally we’ve used H.264, light H.265, and some J2K. But that’s three different standards and formats, each configured in a different way. We didn’t have consistency. We had to ensure that one region could send to another with the certainty that it would work. And this is where we started to look at another option that would be faster, still have visually lossless quality, with forward error correction and could travel over a WAN if necessary.
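As a rough back-of-the-envelope illustration of why a visually lossless mezzanine codec makes WAN contribution practical (the figures below are generic assumptions, not Riot’s own numbers), the arithmetic looks something like this:

```python
# Back-of-the-envelope only: assumed, generic figures rather than Riot's.
UNCOMPRESSED_1080P60_GBPS = 2.5   # approx. 1080p60 10-bit 4:2:2 active video
JPEG_XS_RATIO = 10                # assumed visually lossless compression ratio

per_feed_mbps = UNCOMPRESSED_1080P60_GBPS * 1000 / JPEG_XS_RATIO
print(f"~{per_feed_mbps:.0f} Mbps per feed after JPEG XS")           # ~250 Mbps

feeds = 12                        # hypothetical camera/programme count for one event
print(f"~{per_feed_mbps * feeds / 1000:.1f} Gbps for {feeds} feeds")  # ~3.0 Gbps
```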

“Talking to some of our partners, JPEG XS was floated as what might be that Holy Grail. And, to be honest, it was in the SMPTE 2110 spec, so I didn’t feel that we were taking too much of a leap with 2110-22, as eventually there would be some consensus. It keeps everything in the same time base, and I can still use the same encoders and decoders to do uncompressed or compressed, and add metadata and ancillary data – all of the things that would normally be in our spec.

“I am ambitious that the third RBC will not need a data centre. We’ll probably still have a production centre, but I’m not convinced that it will need a data centre”

“Our JPEG XS, encoded by Nevion, can be natively sent into an AWS Cloud Digital Interface (CDI) workflow here as XS and then processed in pristine visually lossless quality into any distribution format for archive, storage, metadata enrichment. CDI is how AWS is able to incorporate sources like 2110, uncompressed, within public cloud,” he said.

“So we actually don’t have to encode to send to AWS. We use the native feeds we already have in the building and simply route them. We’re not having to make another rendition in order to incorporate workflows in AWS. They have solved the last-mile piece; solved the problem of getting feeds into AWS and to then work with them.

“I’m not using RTMP, I’m not having to compress it down to some output quality that’s super-low. I can keep it in its native JPEG XS, the way it arrived in this building from our encoders in the field all the way into an AWS workflow without ever having to re-encode it. That has never been done before.

“It seems simple, but to be able to take away the barrier between your facility and the cloud, and not have to sacrifice quality or latency, is a game changer. This opens up the possibility of running virtual switchers. Instead of having a switcher frame filled with FPGAs and ASICs, we can run switcher instances. They receive the same video that our switchers here receive.

“We can have parallel productions — one being done on a frame in a data centre and another being done in a virtual private cloud on an instance – and have identical outputs. No difference in quality, or latency. And then we can spin up as many of those as we want, and not be worrying about having enough physical space or having all the people work here. They can work anywhere, at that point.”

To Seattle and beyond

“We always had the idea that these facilities would be the same,” Adametz told SVG Europe. “We’re not trying to build three different facilities: we want one way to work that spans the globe, and it happens to be in three locations. Three time zones.

“Seattle is truly a copy and paste of Dublin, along with the learnings and lessons we’ve picked up along the journey. And as we bring up the third facility we will have learnings and lessons from the first two.

“I am ambitious that the third RBC will not need a data centre. We’ll probably still have a production centre, but I’m not convinced that it will need a data centre.

“And that’s because through this new [AWS] partnership I think we will find ways to offload workflows into public cloud and grow and shrink as needed and on demand – and that will power the third. And we will take the learning and lessons and expand capacity for these two, where there are data centres, into public clouds.

“As we add productions and add capacity, and start to have demand suck up that capacity, they will be predominantly using cloud-based and virtualised resources. We’ll still have the production facilities, because broadcast happens in groups and teams in places like this. But the tech doesn’t have to be in the building — we’ve already proven that – and I don’t think it has to be in a physical data centre, for the long term.”
