Seismic shift: MRMC on the potential of robotic camera systems to power next generation remote production
By Marius Merten, manager robotic imaging, MRMC
The impact of remote production has been seismic. Accelerated by COVID and the necessity to stop moving people into and out of infection-risk environments, it has moved from the experimental phase to a ubiquitous requirement with astonishing rapidity.
The arguments in support of remote production – lower costs and decreased carbon footprint in particular – were always going to see it become the dominant form of production, but few would have guessed it would have penetrated Tier 1 sports so quickly.
As COVID slowly transitions from pandemic to endemic over the coming months, the industry has a great opportunity to start implementing next generation remote production too. Rights holders have signed off on remote and are comfortable with the technology; OB and production companies are experienced in its operation and regularly put together intercontinental workflows that would have been unheard of a year or so ago; and the twin demands of cost and carbon savings are only going to become more central to future business plans.
Cloud and virtualisation
So, how do we optimise what we have? Where are the inflection points that we can use to make remote production work with even fewer people and less vehicle mileage than today? Moving further into cloud-based workflows is one area, and there is some fascinating work being done on fully virtualised live workflows by broadcasters around the world that is slowly heading to top tier territory. But the most easily achievable area is probably the camera end of things.
You can virtualise editing, replays, switching, graphics and more, but you can’t virtualise cameras. To provide broadcast quality signals at HD – and certainly the UHD HDR that Tier 1 sports increasingly require – you need physical camera equipment. You need the bodies, you need the lenses (you especially need the lenses), you need the connectivity at the stadium, you need a truck to plug them into and squirt the signal back to your production hub. You also need people and trucks to take them to the venue, people to rig them, and people to operate them. They are expensive, bulky, resource-hungry, and absolutely essential.
So, one of the key aspects of establishing next generation remote production is reducing the footprint of all this. Some companies are doing this by doubling up roles and ensuring that the majority of their crew out on a production are rigger/operators, but other measures are also required.
There is potential in stitching together a large picture from several fixed cameras and then flying virtual cameras around the resulting 8K-or-more canvas. This is interesting technology, but the results are not really suited to high end broadcast; the quality cannot match that of an optical lens system, and colour matching issues persist between the cameras. At best this is a tech for lower tier sports or enhanced coverage of a warm-up area, for instance.
Robotic camera potential
Where we see a huge amount of potential at the very highest level is in robotic camera systems. These can help reduce the footprint of productions in several different ways. First, they can be remotely operated. Provided the connectivity between the event and the production hub is low latency enough – and it really should be nowadays for all but the most out of the way venues – many camera positions can be operated from base. Crowd shots, beauty shots, cameras used for analysis, player tracking or social media to ramp up fan engagement; these do not need an operator to be physically present to produce great content. There are new PTZ cameras on the market that are more than capable of getting superb quality shots that would have required the very highest end broadcast units only a few years ago.
That, though, is only the start. We have already seen all-terrain robotic dollies provide a COVID-secure alternative to Steadicam, but we can take that further. Robotic systems can be fitted to established camera positions and existing kit too and provide the full functionality any broadcast would expect from a normal camera operator and a standard broadcast camera.
Essential camera operators
The important point here, though, is that camera operators remain essential and some of them will always have to be on-site. It is unlikely that any Tier 1 sport wants its main camera positions run by an artificial intelligence (AI); this is a specialist field that requires both technical knowledge and an acute awareness of the particular sport involved.
However, there are solutions that work even here. Camera Two, for instance, can be slaved to mirror precisely the movements that the human operator is making on Camera One, allowing one person to operate both positions.
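To make the slaving idea concrete, here is a minimal sketch of the logic involved. All names and conventions here are hypothetical illustrations, not MRMC's actual API: it simply shows a slaved robotic head deriving its target position from the operator-driven master head, mirroring the pan axis when the two cameras face each other across the pitch.

```python
from dataclasses import dataclass

@dataclass
class HeadState:
    pan: float   # degrees; positive = right of centre
    tilt: float  # degrees; positive = up
    zoom: float  # normalised: 0.0 (wide) to 1.0 (tight)

def slave_state(master: HeadState, mirror_pan: bool = True) -> HeadState:
    """Derive the slaved head's target from the master's current state.

    If the two positions face each other across the field of play, the
    pan axis is mirrored so both cameras track the same on-field action.
    """
    pan = -master.pan if mirror_pan else master.pan
    return HeadState(pan=pan, tilt=master.tilt, zoom=master.zoom)

# The operator pans right and zooms in on Camera One...
master = HeadState(pan=12.5, tilt=-3.0, zoom=0.6)
# ...and Camera Two, mounted opposite, pans left to follow the same play.
print(slave_state(master))  # HeadState(pan=-12.5, tilt=-3.0, zoom=0.6)
```

In a real system this mapping would run continuously against the robotic head's control protocol rather than on a single snapshot, but the principle is the same: one operator's movements drive two positions.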
We’ve looked at our technology and the way that current top end sports broadcasts are put together and estimate that many events could easily scale up to a full 30-plus camera broadcast with somewhere between five and 10 human-operated camera positions.
This is true next generation remote production. Smaller crews, smaller vehicles, smart automation, and more people at base working in efficient production hubs producing two or three events a day.
As an industry, we’ve made a great start on the technology, and the workflows have come a long way in a very short space of time, but there is still more that can be accomplished to make sure that the next generation of remote production cuts both costs and emissions even more than before.