Live content management & distribution: To keep or not to keep, that is the question
By Tom Blake, commercial director, Imagen
“How many cameras?”
There are up to 30 at Premier League matches, 42 at the World Cup and now Sky have just announced eight at each EFL game under their newest broadcast rights deal. And that’s just for football.
It’s unlikely that every feed from every camera will be archived. But whereas once only the highlights were kept in SD, that grew into the world feed of full match coverage, then a clean feed plus a dirty feed; and we are now seeing increasing demand for multiple angles to be recorded, often at bitrates above 200Mbps.
Our understanding is that around 10TB of output per match was being recorded at the World Cup, and even more when matches went to extra time.
That means this one tournament alone generated more than 25% of all the footage from the event’s preceding 97-year history.
When HD footage is being recorded at 220Mbps and 50 frames per second, you’re looking at 450GB per file. For UHD it increases to 250Mbps and 600GB per file. Multiply that by dozens of cameras and dozens of matches, and you’re into the territory of petabytes of content captured and stored per season.
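To put rough numbers on that, here is a minimal back-of-the-envelope sketch. The 30-camera setup and 380-match season are illustrative assumptions (loosely modelled on a Premier League season), not figures quoted by any rights holder, and decimal gigabytes are assumed throughout:

```python
def gb_per_hour(mbps: float) -> float:
    """Storage consumed per hour of recording at a given bitrate (decimal GB)."""
    return mbps / 8 * 3600 / 1000  # Mbit/s -> MB/s -> GB per hour

hd_rate = gb_per_hour(220)        # ~99 GB per hour for one HD camera at 220Mbps
hours_per_file = 450 / hd_rate    # the 450GB figure implies ~4.5h of continuous coverage

# Illustrative season total: 30 cameras x 450GB per file x 380 matches
season_tb = 30 * 450 * 380 / 1000
print(f"{hd_rate:.0f} GB/h, {hours_per_file:.1f} h/file, {season_tb:.0f} TB/season")
```

Even on these conservative assumptions, a single competition lands north of 5PB a year, which is how the archive explosion sneaks up on rights holders.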
This data has to travel across the internet, at the very least between states and often between countries, consuming enormous amounts of bandwidth and electricity and carrying a substantial carbon footprint, only to sit on disk in perpetuity with the associated financial and environmental cost. The vast majority of this content, perhaps more than 90%, will probably never be viewed again.
I struggle to see how this is sustainable, both in an economic and an environmental sense.
So, what is a pragmatic approach to dealing with this? Who would make the decision NOT to keep?
I was recently at a New York SVG event on a panel with senior technology and operations execs from the NBA, MLB and NHL, debating the age-old “how big is your archive?” question, but the conversation took an interesting twist.
We discussed how to find a happy medium between on-premise and cloud storage, file format and codec considerations, retention policies, and whether to keep or discard physical tapes. But the key debate centred on the storage volume question: how to “keep but contain” the storage explosion.
I’ve been discussing this with several of our clients since then, and we seem to be converging on a consensus: archive just the world feed at a very high bitrate and take a more pragmatic approach to multicam.
For instance, rather than keeping 18 camera angles, each with several hours of footage at 200Mbps, why not record an SRT feed directly into your cloud MAM? The archive gets populated in real time by the live feed, with all the associated benefits of being able to clip, edit and share instantly, or even distribute onwards via IP. And with SRT supporting high-bandwidth streaming at around 20Mbps, the storage volumes are an order of magnitude smaller and the overall cost drops by about 90%, yet the quality remains good enough for broadcast.
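The saving is easy to sanity-check. A minimal sketch, assuming 18 ISO feeds and roughly four hours of recording per match (both illustrative figures, not drawn from any specific production):

```python
def archive_gb(cameras: int, hours: float, mbps: float) -> float:
    """Total archive size for simultaneous feeds at a given bitrate (decimal GB)."""
    return cameras * hours * mbps * 3600 / 8 / 1000

multicam = archive_gb(18, 4, 200)   # full-bitrate ISO recordings: ~6,480 GB per match
srt_feed = archive_gb(18, 4, 20)    # the same feeds ingested over SRT: ~648 GB per match
saving = 1 - srt_feed / multicam    # -> the ~90% reduction cited above
print(f"{multicam:.0f} GB vs {srt_feed:.0f} GB ({saving:.0%} smaller)")
```

Because storage scales linearly with bitrate, the 200Mbps-to-20Mbps drop translates directly into a tenfold reduction in archive volume, whatever the camera count or match length.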
What’s your view? What’s your organisation doing to address this issue? There will be other ideas which I’d love to hear. Please do share them with me and let’s keep this debate going.