Mind the gaps: IBC Accelerator Connect and Produce Anywhere on where it stands on the journey towards a containerised workflow utopia

IBC Accelerator Connect and Produce Anywhere (CAPA) created a living lab at Formula E’s London E-Prix at ExCeL London over the weekend of 20 to 21 July. Its goal was to test how this evolving project could work at scale, and to understand the questions and gaps on its path towards a containerised workflow utopia.

Last autumn, as CAPA prepared for its second round as an IBC Accelerator, it had to pick a format to build around, and it based its proposal on NDI. However, BBC R&D senior technology transformation manager Ian Wagdin says that while NDI is at the core of the project, it is important that what is being studied is flexible enough to work with anything thrown at it.

Speaking to SVG Europe in the media compound at ExCeL while Formula E cars whizz around the track, he says: “Now NDI is a brilliant codec, or a brilliant workflow, for small-scale events, but actually a lot of our champion partners and the broadcasters – people like Sky and ourselves [BBC], and TV2 and Channel 4 – we’ve also got big IP stuff, and in the real world you also hit other codecs and other formats. I think here [at Formula E] we’ve got about three or four different codecs [coming in] from radio cameras, so we need to build something flexible, not based on a single workflow.”

He adds: “This is the first time that we’ve actually brought anything in [on a test for the accelerator] that isn’t an NDI source. We’ve got some SDI coming in, we’ve got some [SMPTE ST] 2110, we’ve got some HEVC, and some of that is being converted to NDI in order to get the cluster to work.
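The conversion step Wagdin describes – normalising SDI, ST 2110 and HEVC sources into NDI before the cluster consumes them – can be sketched as a simple format-normalisation registry. This is an illustrative sketch only, not CAPA’s actual software; the converter names and the `Source` structure are invented for the example.

```python
# Sketch of normalising heterogeneous feeds (SDI, ST 2110, HEVC) into a
# single house format (NDI) before a cluster consumes them.
# All gateway/converter names below are hypothetical.

from dataclasses import dataclass

HOUSE_FORMAT = "NDI"

# Hypothetical gateway devices or processes that would perform each conversion.
CONVERTERS = {
    "SDI": "sdi-to-ndi-gateway",
    "ST2110": "st2110-to-ndi-bridge",
    "HEVC": "hevc-decode-to-ndi",
}

@dataclass
class Source:
    name: str
    fmt: str  # e.g. "NDI", "SDI", "ST2110", "HEVC"

def normalise(source: Source) -> str:
    """Return the processing step needed before the cluster can use the feed."""
    if source.fmt == HOUSE_FORMAT:
        return f"{source.name}: pass through as {HOUSE_FORMAT}"
    converter = CONVERTERS.get(source.fmt)
    if converter is None:
        raise ValueError(f"No converter registered for {source.fmt}")
    return f"{source.name}: {source.fmt} -> {HOUSE_FORMAT} via {converter}"

feeds = [Source("cam-1", "NDI"), Source("rf-cam-2", "HEVC"), Source("ob-tie-line", "SDI")]
for feed in feeds:
    print(normalise(feed))
```

The value of the pattern is that supporting a new source format becomes a registry entry rather than a new workflow, which is the flexibility the project is after.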

“But what we’re seeing is a general trend to move to more format agnostic solutions so that we can actually mix and match what we need to do,” he continues. “There are advantages and disadvantages to everything, but if you’ve got a localised cluster, perhaps you can work more uncompressed, therefore you can drive your latency down; you don’t have the latency problems of sending feeds over to somewhere else, because quite often you send that over the internet and then as soon as you send it, you’ve got to put some sort of protection around it and that adds latency and delay.

“Whereas because everything’s running locally on the cluster here [at ExCeL], it’s almost working on a local area network rather than the internet. Then we’re just connecting into that and controlling that network.”
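Wagdin’s latency argument can be made concrete with a back-of-the-envelope budget. All figures below are assumptions for illustration, not measurements from the CAPA test; the point is that an internet contribution path pays for compression and stream protection in delay, while a local uncompressed path does not.

```python
# Illustrative latency-budget comparison (all figures are assumptions):
# a local uncompressed LAN path vs a compressed contribution path over
# the internet with stream protection added.

local_path_ms = {
    "capture": 10,       # camera and capture
    "lan_transport": 1,  # uncompressed over a local network
    "mix": 20,           # vision mixing on the local cluster
}

remote_path_ms = {
    "capture": 10,
    "encode": 40,              # compression before the feed leaves site
    "internet_transport": 30,
    "protection_buffer": 200,  # e.g. a retransmission (ARQ) buffer on an SRT-style link
    "decode": 40,
    "mix": 20,
}

print(f"local cluster path:   {sum(local_path_ms.values())} ms")
print(f"internet contribution: {sum(remote_path_ms.values())} ms")
```

Even with generous assumptions, the protection buffer alone dominates the remote path, which is why a localised cluster can afford to work closer to uncompressed.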

Living lab

At ExCeL, the accelerator was testing its ability to bring in multiple sources at a large-scale event, “which is something that we haven’t done on previous tests,” notes Wagdin.

He says this test marks the first time the system has been scaled up to accommodate an event as large as a Formula E weekend: “It’s moving us on from what we’ve done – what we showed at MPTS and at the EBU this year, which was really small scale, running in a low-cost way – to scaling it up to something the size of Formula E, which is far more complicated. You can’t carry all of your audio feeds embedded in a video stream. You have to think about that. We’re looking at how we might do comms, and we’ve got some graphics running on the cluster as well.”

Part of the media compound for Formula E’s London E-Prix on the weekend of 20 and 21 July 2024

The test at Formula E was a good trial for the accelerator in terms of understanding how far it can go, and for the participants in the project to learn what is needed to move it forwards. Wagdin explains: “What I’ve said to the team is that this is a living lab. This is not about putting out a polished output like Formula E’s; I can’t get even close to what the guys are doing here.

“What I can do is to learn about these deployments in the field, looking at quite simple things; the key thing is we want to deploy our software in the same way, regardless of whether it’s running on edge, cloud, or on premises in our own broadcast centres. That sounds really simple, but when you’re deploying software in the cloud, there’s a whole bunch of nascent technologies and libraries that you can just call on because you’ve got that connectivity there, and it can just pull all the data down and make things happen.

“It’s really understanding where we are on the journey, how ready some of this technology is, and exactly where the gaps are in the technology, but also where the gaps are in the knowledge. We want to get to this point where we can be a little bit more flexible in how we deploy these things.
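The “deploy the same way everywhere” goal Wagdin describes can be sketched as a single workload description in which only environment-specific connectivity assumptions vary. Everything here – the image name, the registry names, the flag – is a hypothetical illustration of the idea, not the project’s actual deployment tooling.

```python
# Sketch of one containerised workload deployed identically to edge,
# cloud and on-premises environments. Names and values are illustrative.

IMAGE = "capa/vision-mixer:1.0"  # hypothetical container image

ENVIRONMENTS = {
    "edge":    {"registry": "local-cache", "pull_over_internet": False},
    "cloud":   {"registry": "cloud-registry", "pull_over_internet": True},
    "on_prem": {"registry": "broadcast-centre-registry", "pull_over_internet": True},
}

def deployment(env: str) -> dict:
    """Same workload everywhere; only connectivity assumptions change."""
    cfg = ENVIRONMENTS[env]
    return {
        "image": IMAGE,  # identical image regardless of environment
        "replicas": 1,
        "registry": cfg["registry"],
        # On an edge cluster at a venue you may not be able to rely on
        # internet connectivity to pull images or libraries on demand --
        # the gap between cloud assumptions and field reality that
        # Wagdin describes.
        "requires_internet": cfg["pull_over_internet"],
    }

for env in ENVIRONMENTS:
    print(env, deployment(env))
```

The design point is that the workload definition never changes; what changes is where the images and data come from, which is exactly where the knowledge and technology gaps show up in the field.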

“What’s really important about this is that the participants and the manufacturers really want to understand containerised workflows and the demands of what we want,” continues Wagdin. “It enables us to test certain things and push the manufacturers to develop products that are more open in this way, and it’s showing them what is required to move forward and bringing them together.”

He adds: “I think Formula E here has got over 150 video feeds. We’re not taking anywhere near that; our cluster isn’t that big. We’ve got 12 or 13 running on our vision mixer. But it’s also fair to say that we haven’t yet run the Distributed Cloud edge compute cluster at capacity, so we don’t know what our breaking point is for the number of formats, and where it might start to struggle and all of that kind of stuff. I was kind of hoping we might get there [to breaking point] over the next couple of days, but it’s looking like we probably won’t, because it’s performing pretty stably.”
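The capacity question Wagdin raises – how many feeds the cluster can take before it struggles – amounts to stepping up load until a headroom check fails. The model below is invented purely for illustration (the compute budget and per-format costs are assumptions); a real test would watch actual cluster metrics rather than a fixed budget.

```python
# Illustrative "find the breaking point" probe: add feeds of one format
# until a hypothetical compute budget is exhausted. The budget and the
# per-feed costs are invented numbers, not CAPA measurements.

CLUSTER_CAPACITY_UNITS = 100  # hypothetical compute budget

# Assumed relative per-feed costs: uncompressed feeds cost more
# cluster resource than compressed ones.
FEED_COST = {"NDI": 5, "SDI": 6, "HEVC": 3}

def max_feeds(fmt: str) -> int:
    """How many feeds of one format fit before the budget is exhausted."""
    used, count = 0, 0
    while used + FEED_COST[fmt] <= CLUSTER_CAPACITY_UNITS:
        used += FEED_COST[fmt]
        count += 1
    return count

for fmt in FEED_COST:
    print(fmt, max_feeds(fmt))
```

Even this toy model shows why the breaking point depends on the mix of formats, not just the feed count – one reason the team cannot predict it without pushing the real cluster to capacity.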

Challenging for manufacturers

On where this technology is leading, Wagdin says: “It’s going to be challenging for the manufacturers. I don’t think for much longer we’re going to want to buy big hardware boxes with lots of sunk carbon in them that go into buildings or trucks and have to have dust covers because they don’t get used; I want flexible compute. I just want to be able to send data centres on wheels out to locations. I want push-button configurations for five cameras, 10 cameras, 20 cameras. I want flexible storage – better identified, more accessible storage that can go to multiple teams doing multiple things – rather than just a video-centric workflow.

“It needs to be more of a digital-first production. We need to recognise that we’re not just producing for TV anymore,” concludes Wagdin. “The only way we can do that is by having more software-based resources that open up that content to lots of teams, who can just access it on their laptops or even their phones and create content. And the only way you do that is with software, not by plugging an SDI cable into the back of the vision mixer.”

CAPA will be presenting some of its learnings at the IBC Show in Amsterdam in September.
