Developing at pace: EditShare on connecting consumers with content
By Sunil Mudholkar, VP of product management, EditShare.
If there is one thing clear from the top sports story as I write this, with all eyes turned to the Middle East, it is that fans want ever more detail and insight in the coverage, alongside background and colour from around the event.
It is now commonplace to expect some degree of remote production to enable all this. I would argue that what we see now is remote production 2.0.
While the industry had been kicking ideas around for a while, remote production was forced to the top of the agenda by the pandemic. We all had to do whatever it took to stay on air. Solutions were thrown together with remarkable speed, and by and large they worked well, though economics were often sacrificed to get on air.
Time to reflect
Now we have time to reflect. The approach to remote production is all about efficiencies. How can we deliver more engaging content without increasing the operating expenditure (OPEX) required? In particular, how can we empower talent on the ground to deliver stories and packages with ease and accuracy?
The first major trend is towards hybrid workflows. Broadcasters and production companies generally have extensive capital investments in technology and equipment which is perfectly functional, not to mention well understood by operators and engineers. But there will be occasions when you need to add capacity or functionality, or provide access from multiple locations. Today that can be achieved with a hybrid architecture, which makes covering occasional but demanding events practical.
If we look at the 2022 FIFA World Cup, we see traditional onsite production, but we also see staff around the globe working on coverage. What is important is that, thanks to modern cloud techniques, they need not be in proximity to the equipment.
Cloud editing, whether by proxy or in high resolution, is transformative. Content is available in the cloud, which also hosts your preferred editing application. With a relatively modest broadband connection and desktop emulation software, you have the same editing experience you would have on-site, in an edit suite directly connected to the on-prem infrastructure.
We have been talking about this level of remote production for some time. Now we have a real, solid, primetime showcase.
This level of content access, through software applications and IP connectivity, leads to the second trend I see. While we are developing new workflows, we are also moving away from the traditional broadcast view of originating everything in the house production format ― which might be uncompressed HD or UHD ― working with those signals, and deriving outputs at the end.
Now we should be expecting support for non-traditional video capture techniques, and delivery to non-broadcast platforms. If you are going to shoot on an iPhone or a GoPro, to publish on Instagram, Twitter or TikTok, why do you need to go through broadcast standards along the way? Modern workflows should make this as streamlined as possible, while monitoring and managing it as part of the main production.
In parallel, broadcast sports coverage is now appearing on over-the-top (OTT) streaming platforms. The autumn rugby internationals, for example, are only available on Amazon Prime (Amazon also holds the rights to Thursday night football in the States). Again, the workflows have to be familiar to the production staff, and interface seamlessly with CDNs for delivery.
This leads to the third of my major trends: we need to grow our use of automation to get the media out faster. As it evolved, the established broadcast infrastructure grew up in a separate silo from the online OTT services.
The broadcast workflow was built around a production asset management platform. For streaming there was a content management layer. The next step is to provide smarter automation to demolish the silos, integrate the metadata, and provide more value to all the alternative delivery systems.
Another technology is finally coming into practical use here. Adopting artificial intelligence (AI) to generate useful metadata, for instance around scene or face detection, is a reality. Generating more metadata goes right to the heart of our challenge to generate more content: it drives new content for multi-platform services and social media.
It delivers smarter editing, whatever the destination, because the metadata is generated, parsed and shared automatically and very quickly. Whatever the editor has to do, the job is much faster because the relevant clips are already loaded, sensibly named and ready to use.
For this to work, the architecture has to be able to accept content from any source, in any format, and add the metadata to bring it context and make it useful. Because the workflow has to apply to everything from a full outside broadcast truck to a stringer with an iPhone (and no technical or operational support) it must all depend upon intelligent automation.
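To make the idea concrete, here is a minimal sketch of an automated tagging step of the kind described above. The detector functions are hypothetical placeholders standing in for real scene- and face-recognition models, and the clip names and tags are invented for illustration; the point is how AI-derived metadata lets an editor query clips instead of scrubbing through footage.

```python
from dataclasses import dataclass, field

@dataclass
class Clip:
    filename: str
    tags: set = field(default_factory=set)

# Placeholder detectors: in a real workflow these would invoke
# scene-detection and face-recognition models. Here they are
# hypothetical stubs keyed off the filename, for illustration only.
def detect_scenes(filename: str) -> set:
    return {"goal-mouth"} if "goal" in filename else {"crowd"}

def detect_faces(filename: str) -> set:
    return {"player-10"} if "star" in filename else set()

def auto_tag(clips: list) -> list:
    # Run every detector over every incoming clip, whatever the source,
    # and merge the results into the clip's metadata.
    for clip in clips:
        clip.tags |= detect_scenes(clip.filename)
        clip.tags |= detect_faces(clip.filename)
    return clips

def find(clips: list, tag: str) -> list:
    # Editors query by tag rather than reviewing footage manually.
    return [c.filename for c in clips if tag in c.tags]

clips = auto_tag([Clip("goal_star_cam1.mxf"), Clip("crowd_cam4.mxf")])
print(find(clips, "player-10"))  # clips featuring the tagged player
```

In practice the stub detectors would be swapped for real inference services, but the surrounding pattern, ingest anything, enrich with metadata, query by tag, is what makes the automation source-agnostic.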
One final thought on content flow. Sports fans love to see today’s action put into context; the last time this play was used, previous performances by star players, how match-ups played out in the past. The archive is a vital asset, as is ensuring that everyone who needs it, for whatever output, has access to it.
Existing archive infrastructure is now supplemented with online storage, using an object store such as Amazon S3. For this to be of value in production, there has to be intelligence in the synchronisation between on-premises and cloud storage, so that the right material from the right sports is readily available when it is needed, in a form which is simple to search.
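The synchronisation decision can be sketched simply. The fragment below compares a cloud archive listing (as you might get from an S3 object listing) against what is already cached on-premises and picks out what still needs fetching for a given sport. The key layout and names are assumptions for illustration, not any particular product's schema.

```python
def assets_to_fetch(cloud_index: list, on_prem_cache: list, sport: str) -> list:
    # Hypothetical key layout: archive objects are grouped by sport prefix.
    prefix = f"archive/{sport}/"
    wanted = {key for key in cloud_index if key.startswith(prefix)}
    # Only pull down what the on-prem cache does not already hold.
    return sorted(wanted - set(on_prem_cache))

cloud = [
    "archive/football/final_2018.mxf",
    "archive/football/semi_2014.mxf",
    "archive/rugby/autumn_2021.mxf",
]
cache = ["archive/football/semi_2014.mxf"]
print(assets_to_fetch(cloud, cache, "football"))
# → ['archive/football/final_2018.mxf']
```

A production system would add the search layer on top of this, driven by the metadata discussed earlier, so that editors ask for "last three meetings of these two sides" rather than object keys.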
Technology is developing at pace. All of this is now readily available and can be deployed to support your workflow. The result is a new and exciting opportunity to create better, more rounded and more accessible sports coverage, providing an enhanced viewer experience at lower cost.