Blue sky thinking: Sky Sports’ Alex Judd on the challenges of using cloud in sports broadcasting audio

Alex Judd, technical specialist and A1 at Sky Sports

After gaining a BA in Music Technology from Thames Valley University in 2005, Alex Judd joined Sky a year later as a sound operator. His career at Sky saw him move into a sound supervisor role, a position that included planning all the audio aspects of production for Sky's 3D broadcasts, and then into a team leader position.

Judd has therefore been part of broadcast audio's evolution through a series of tumultuous changes, from 5.1 to 3D to immersive audio, and the switch to IP workflows and remote production. All of this makes him ideally qualified to talk about the next paradigm shift in audio production methods: mixing in the cloud. Here we talk to Judd about what cloud means to him.

_______________________________________________________________________

From a broadcast perspective, what does cloud mean to you?

Personally, it means re-imagining signal flows within public cloud providers' architecture.

We're talking about the internet here, not dark fibre. The ability to potentially move production tech costs to an OPEX model makes for a much more appealing start-up investment. By proving this is possible, we are creating more flexible entry points and allowing more live content and productions to get to air.

From a tech perspective, it is the evolution of REMI production, whereby we move the processing onto a virtualised platform sitting with a public cloud provider.

From a broadcast industry perspective, what are the perceptions of working in the cloud? Is it remote control of a physical processor, or does it have to be processed over the public internet? Does remote access to on-prem and edge processing from a PC count?

Cloud is a data centre, whether your company owns it and it's on your premises (on-prem), or it's in one of the public cloud providers' data centres. I think what we're really interested in is the virtualisation of production elements such as vision mixers, sound mixers, intercom and MCR functionality on x86 architecture, which allows the flexibility to process signal flows in all of the realms above.

"Cloud" is really a term that has been coined to mean x86-based processing in the public cloud.

From an audio perspective, what are the challenges of large scale mixing in the cloud?

The challenges are clear: syncing large quantities of feeds accurately, and processing delay.

A lot of this is very manual at the moment. We are yet to see tech solutions working on every element of the chain, from commentators' vision monitors to programme TX feeds, and obviously in-vision lip sync. There's a huge amount here still to be figured out.
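To make the sync problem concrete, here is a minimal sketch of the underlying delay-compensation idea, purely illustrative and with invented latency figures rather than anything from Sky's chain: each feed's latency is measured against a common reference, and every feed is then padded out to match the slowest one so they arrive at the mix aligned.

```python
# Illustrative delay-compensation sketch. All latency figures are
# hypothetical, one-way values in milliseconds.

feed_latency_ms = {
    "venue_camera": 120,
    "commentator_1": 80,
    "commentator_2": 95,
    "crowd_effects": 110,
}

def compensation_delays(latencies_ms: dict) -> dict:
    """Delay each feed so that all of them line up with the slowest."""
    worst = max(latencies_ms.values())
    return {name: worst - latency for name, latency in latencies_ms.items()}

for name, delay in compensation_delays(feed_latency_ms).items():
    print(f"{name}: add {delay} ms of delay")

# Every feed is now aligned, but the whole mix runs at the worst-case
# latency (120 ms here), and each extra hop pushes that figure up.
```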

The issue is also that we are currently implementing multiple workarounds to accommodate broadcast-specific requirements in typical DAWs.

There are also constraints on bandwidth and audio compression at play in certain parts of the chain. That means you have to plan the audio chain carefully; chaining compression codecs from different manufacturers, or with different settings, may result in several generations of lossy compression at work on your audio.
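As a rough illustration of that generational-loss point, here is a toy sketch using coarse quantisation as a stand-in for a real lossy codec (an assumption for illustration only, not how broadcast codecs actually work): each mismatched pass adds error that the next one cannot undo.

```python
import numpy as np

# Stand-in for a lossy codec stage: coarse quantisation. Real codecs
# are far more sophisticated, but the generational-loss effect is the
# same: each pass adds error that the next pass cannot undo.
def lossy_stage(signal: np.ndarray, step: float) -> np.ndarray:
    return np.round(signal / step) * step

rng = np.random.default_rng(0)
audio = rng.uniform(-1.0, 1.0, 48_000)  # one second of "audio" at 48 kHz

signal = audio
for i, step in enumerate([0.01, 0.02, 0.015], start=1):  # mismatched hops
    signal = lossy_stage(signal, step)
    rms_error = np.sqrt(np.mean((signal - audio) ** 2))
    print(f"after stage {i}: RMS error = {rms_error:.4f}")

# The error accumulates across mismatched hops, which is why a single,
# consistently planned codec strategy across the chain matters.
```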

How much of an issue is latency for cloud, and why?

Latency is a concern for anyone developing cloud-based virtualised workflows, as there are multiple places where it can go wrong.

Let's look at a relatively simple example: two commentators voicing a feed from separate locations. First, there is the latency of the cloud images travelling from the venue to the director who is cutting them. Then there is the latency of the two commentators talking to one another. The images they are watching also need to be in time with each other so the commentators don't trip over each other, and they need to be able to communicate in real time. On top of this, the latency of the camera returns and communications back to the venue crew also needs to be factored in, so as not to impact whip pans.
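To put hypothetical numbers on that chain (all the figures below are invented for illustration, not measurements from Sky's workflow), the one-way legs simply add up, and the return path determines how quickly a cue lands back with the venue crew:

```python
# Hypothetical latency budget for the two-commentator example above.
# All figures are invented for illustration, in milliseconds.

venue_to_cloud = 40         # contribution feed from venue into the cloud
cloud_processing = 20       # virtualised vision mixing / cutting
cloud_to_commentator = 35   # cut images out to each remote commentator
commentator_to_cloud = 35   # each commentator's audio back into the cloud
cloud_to_venue = 40         # camera returns and comms back to the venue

# How far behind the live action are the images being voiced?
images_behind_live = venue_to_cloud + cloud_processing + cloud_to_commentator
print(f"commentary images: {images_behind_live} ms behind live")  # 95 ms

# How long before one commentator hears the other?
talkback = commentator_to_cloud + cloud_to_commentator
print(f"commentator-to-commentator: {talkback} ms")  # 70 ms

# How late do returns and cues land back with the venue crew?
venue_round_trip = venue_to_cloud + cloud_processing + cloud_to_venue
print(f"venue round trip: {venue_round_trip} ms")  # 100 ms
```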

This can all be worked around. But as you scale up, so do the workarounds, and that can make it tough.

Are operators comfortable with mixing on a screen UI or will there always be a need for physical hardware?

It’s really down to the individual operators. Currently, we’ve found touchscreens impossible for mixing. It’s very difficult to use them in the same way as a tactile surface such as a fader bank.

I think when you're mixing and you start to make the hundreds of micro-adjustments a mix requires, a physical fader is currently just better. Where I see potential for innovation is the individual screens on a console: once those are virtualised they are no longer needed, and that part could work with a touchscreen.

As the tech improves, I stand to be wowed and corrected, although I’m still waiting for my hoverboard like they had in Back To The Future. Let’s see!

Broadcast manufacturers are already going down this path, with audio mixers available in the cloud. How practical is this for live broadcast use?

It really depends on the scale of production you are trying to do. A simple balance of international sound and announcers could be fine. In some instances, we're actually producing a full REMI via the public cloud, and that requires the additional features you would expect of a more traditional broadcast-style console.

Where do you see the adoption of cloud going in the next five years for live broadcast mixing?

I see it only growing and creating additional value services and content for an ever-growing marketplace.

We want to produce and consume in a much faster way than we have previously. Virtualisation in conjunction with cloud allows companies to realise their creative ideas without having to break the bank, so more ideas get to the screen.

In terms of the tech, we are seeing more audio mixers coming onto the market. It's really great to see, but we need to be aware of the whole picture in terms of the challenges we face.

Overarching control and user experience must be treated as paramount to avoid the inevitable barriers when engineers come to use virtualised broadcast offerings. It still needs to feel like an audio mixer with all the best features, not like you ordered a car and have to learn to drive it with a joystick and levers.

Is Sky Sports currently utilising any cloud infrastructures?

Sky is currently utilising the public cloud for various productions. To date we have covered multiple events such as IndyCar, COP26 and netball, to name a few.

What are Sky Sports’ plans for the adoption of cloud technologies?

A big question; where to begin?

We have adopted a lot of cloud-based tech on a broader scope. We will continue to work with developers to produce software that is cloud-deployable and x86-based so that we can harness the potential for on-screen benefit; whether that runs in the public cloud or in a data centre is really company-specific and comes down to a business decision.

As engineers and operators, we want the flexibility offered by virtualisation, with a well-defined, simplified setup and operation. I view virtualisation as a tool that gives us the flexibility to grab processing power for defined use cases. At the moment we see quite a lot of hybrid ways of working, as physical rack systems merge with virtualised workflows.

Virtualisation will only get better from where we are today, and that could eventually allow much bigger productions to be realised this way.

 
