Sienna enables remote Olympics workflow for Televisa
The Sienna MediaVortex workflow will be used by Televisa for this year's Summer Games in London. It links the large Sienna system originally built on site for the Beijing Games four years ago (and now permanently installed at Televisa in Mexico City) with a new, smaller Sienna system which will be despatched to London with a small crew. The London-based Sienna rig will provide 20 channels of ingest and four channels of playout, along with hundreds of terabytes of storage and archive and a rich media asset management system linked to the Sienna infrastructure in Mexico. The connection is based around a 100 Mbit/s data link, carrying proxy media propagated from London to Mexico as well as control interfaces for the London systems.
Avoiding the need to send editors to each event saves enormously in costs and logistics, and can make or break the ability to cover a given event. For an event like the Summer Games, the expense of sending an entire team of editors halfway across the globe, housing them for weeks, and building facilities for them to use on site can be very significant. Conversely, having those people work at their regular desks with fixed facilities, and sleep in their own beds, saves money, time and logistics, and even allows them to work concurrently on events from multiple locations.
The system in Mexico is based around the core of the original Sienna system Televisa purchased for on-site use at the Beijing Games. It includes six racks of storage and servers with 22 channels of Sienna PictureReady ingest and eight channels of Sienna VirtualVTR playout, running on Apple Xserves with AJA Kona cards, all managed by the Sienna media asset management layer. This system uses PCI-based Fibre Channel connectivity to the storage.
Meanwhile, operators in London will use quad-core i7 Apple Mac mini servers for 20 channels of Sienna PictureReady ingest and four channels of Sienna VirtualVTR playout, with AJA ioXT video interfaces connected via Thunderbolt. The ingest and playout systems use Promise Thunderbolt adaptors to connect to the 240-terabyte ActiveStorage SAN. Several Xserves run the media asset management and the Sienna Taro system used to transfer media from P2 cards. Sienna also manages a 228-terabyte disk-based archive from ActiveStorage.
Both systems were built and are maintained by Sienna Systems Integrator Simplemente.
Operators in Mexico will remotely control the ingest feeds in London using Sienna IngestControl, and loggers in Mexico will work with proxy video flowing across the MediaVortex connection to log each event as it happens. Loggers will use the Sienna MediaSearch web MAM interface with a jog/shuttle controller and an iPad running Sienna Production Palette. The iPad provides a touchscreen logging interface with keyword grids for each sport, speeding up logging and increasing the accuracy and consistency of metadata.
Each time a log entry is made in Mexico, the conjoined asset in London inherits the same logging information, and vice versa. Editing can take place on either end using all logging data.
Editors in Mexico will use Sienna ImpulsEdit to work with proxy media conjoined across the WAN connection, using the logged markers to speed media selection and searching. ImpulsEdit is a web-based cuts-assembly tool which works in real time, even with media still being ingested at the remote location.
Having completed a remote proxy edit in Mexico, editors have a one-click workflow to instruct the London Sienna to render a finished hi-res package. This package can be loaded into a baseband playout channel using a remote connection and then played out as part of a show, or sent down a live satellite connection. Alternatively, the finished hi-res asset can be transferred via MediaVortex to Mexico.
The remote challenge
It is already fairly common for broadcasters to take feeds via dedicated satellite connections to bring video into the home facility, and this can work well in some cases. However, multiple long-duration satellite connections for high-definition video are very expensive. Nor is it generally practical to send B-roll rushes from roaming camcorders across dedicated satellite connections. For these reasons it has usually been necessary to continue sending editors to the event, and to use a single satellite feed to bring completed content back to the home station, often as a live, finished bulletin production.
With the rise in internet bandwidth, it might be tempting to assume that wide area networking solves all these problems and lets you simply add remote operations from the other side of the planet. Unfortunately it's not that simple, particularly when significant distances are involved. The complicating factor is network latency: a short delay incurred each time a packet of data passes through a router, repeater or other device connecting point A to point B. In many cases the data doesn't travel in a straight line, so the distances (and the latency) are magnified.
As an example, typical latency on a local area network inside a broadcast facility might be less than one millisecond. Over any real distance this rises quickly: perhaps 300 or 400 milliseconds for a round trip from Europe to some parts of Asia, and up to 200 milliseconds from Europe to the US. A major increase in latency can transform TCP performance from fast to painfully slow. Actual transfer performance might drop from 100 Mbit/s to less than 1 Mbit/s at these latencies. What's more, simply adding bandwidth doesn't help much: the latency dictates the bandwidth-delay product, which becomes far more important than the raw connection speed.
So the high-speed, long-distance networks exist and might deliver the promise of remote sports workflows, but they don't really work with normal TCP networking operations, such as those used by virtually all media systems designed for a local area network. You can't just get an ingest system in one location to write directly to storage in another location across a large latency. Protocols like CIFS/SMB, AFP, FTP and even HTTP don't work efficiently over large latencies. TCP sends relatively small packets across the network, and in a single connection each packet typically needs a positive acknowledgement before the next can leave. As a simplistic example, a round-trip latency of 200 ms might allow only five small packets per second, regardless of the bandwidth of the connection.
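The arithmetic behind these figures can be sketched in a few lines. The window and packet sizes below are illustrative assumptions (a classic 64 KB TCP window and 1500-byte packets), not measured values from any particular link:

```python
# Back-of-envelope model of how round-trip latency caps TCP throughput.
# Window size and packet size are illustrative assumptions, not measurements.

def tcp_throughput_mbit(window_bytes: float, rtt_seconds: float) -> float:
    """Upper bound for one TCP connection: window / round-trip time."""
    return window_bytes * 8 / rtt_seconds / 1e6

def stop_and_wait_mbit(packet_bytes: float, rtt_seconds: float) -> float:
    """Throughput if every packet is acknowledged before the next leaves."""
    return packet_bytes * 8 / rtt_seconds / 1e6

# A 64 KB window on a 1 ms LAN vs a 200 ms transatlantic round trip:
lan = tcp_throughput_mbit(64 * 1024, 0.001)  # ~524 Mbit/s: not the bottleneck
wan = tcp_throughput_mbit(64 * 1024, 0.200)  # ~2.6 Mbit/s on a 100 Mbit/s link

# The "five small packets per second" case: 200 ms RTT, 1500-byte packets.
naive = stop_and_wait_mbit(1500, 0.200)      # ~0.06 Mbit/s
```

The same window that saturates a LAN delivers under 3 Mbit/s across the Atlantic, which is why raising the link speed alone changes nothing.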
A number of well-known WAN-accelerated file transfer products exist and are widely used for point-to-point file transfers. However, these file-based tools cannot transfer growing QuickTime movies and are not reference-movie aware, so they cannot be used to transfer QuickTime ingests as they happen. A more integrated solution is required.
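The growing-file problem can be illustrated with a minimal sketch. This is not Sienna's actual code, just the general pattern: poll the source file's size and forward only the newly written bytes, so the destination copy grows in step with the ingest:

```python
# Sketch (not Sienna's implementation) of transferring a still-growing file:
# forward newly appended bytes as they arrive rather than waiting for the
# recording to finish.
import os
import time

def tail_growing_file(src_path, send, poll_interval=0.5, idle_limit=5.0):
    """Stream bytes from src_path as they are appended.

    `send` is any callable taking a bytes chunk (e.g. a network writer).
    Stops once the file has been idle for idle_limit seconds, standing in
    for an explicit 'ingest finished' signal from the asset manager.
    """
    offset = 0
    idle = 0.0
    with open(src_path, "rb") as f:
        while idle < idle_limit:
            size = os.path.getsize(src_path)
            if size > offset:
                f.seek(offset)
                chunk = f.read(size - offset)
                send(chunk)
                offset += len(chunk)
                idle = 0.0
            else:
                time.sleep(poll_interval)
                idle += poll_interval
```

A real implementation would additionally need to follow QuickTime reference movies, since one logical asset may span several linked essence files.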
Solving the Problem
The answer is to redefine the protocol used to carry the data so that it works efficiently across a high-speed connection even when latency is high, and to integrate that protocol deeply enough with the media workflow that the connection appears seamless. Live sports coverage requires tools like edit-during-ingest, so the connection also needs to let ingesting material flow across the link as it is written, rather than waiting for the ingest to end and using a traditional file transfer. In most cases these redefined protocols use UDP rather than TCP, with logic to manage the high-latency connection; with UDP you can start to realise the true bandwidth of the link.
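The core of the UDP idea can be shown in a loopback-only sketch. This omits everything a real WAN-acceleration protocol adds (sequencing, pacing, retransmission, congestion control); it only demonstrates that the sender can keep many datagrams in flight without awaiting a reply to each one, so round-trip time no longer gates every packet:

```python
# Loopback-only illustration of why UDP helps over high-latency links:
# the sender does not wait for a per-packet acknowledgement, so its send
# rate is decoupled from the round-trip time. No loss handling is shown.
import socket

def udp_blast(num_packets=20, payload=b"x" * 100):
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))          # receiver on an ephemeral port
    rx.settimeout(2.0)
    addr = rx.getsockname()

    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for i in range(num_packets):       # no acknowledgement awaited per packet
        tx.sendto(i.to_bytes(4, "big") + payload, addr)

    received = set()
    try:
        while len(received) < num_packets:
            data, _ = rx.recvfrom(2048)
            received.add(int.from_bytes(data[:4], "big"))
    except socket.timeout:
        pass                           # a real protocol would resend the gaps
    tx.close()
    rx.close()
    return received
```

Over a real WAN the receiver would report missing sequence numbers back to the sender for retransmission; that selective-repeat logic is what the redefined protocols layer on top of UDP.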
Having solved the latency, the next challenge is bandwidth. Even with transatlantic data connections which can (at a price) deliver something like 100 Mbit/s from A to B, this still isn't enough to carry baseband or high-resolution media for an entire event like the Summer Games, with many simultaneous feeds. The answer is proxy edit workflows, where only the proxy data is fed across the link. Where the latency is not too great, a remote editor can sometimes work directly on the system at the event using a proxy viewing editor, without transferring files locally first. Past a certain latency, however, anything requiring random access to files over the connection becomes impractical.
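A quick bandwidth budget shows why proxies fit on the link and high-resolution media does not. The bitrates below are illustrative assumptions (a typical HD production codec versus a low-bitrate H.264 proxy), not figures from the Televisa specification:

```python
# Illustrative bandwidth budget for a 100 Mbit/s site-to-site link.
# Per-feed bitrates are assumed for the example, not taken from the spec.
LINK_MBIT = 100
HIRES_MBIT = 50      # e.g. a typical HD production codec
PROXY_MBIT = 2       # e.g. a low-bitrate H.264 proxy
CHANNELS = 20        # simultaneous ingest feeds at the event

def feeds_supported(link_mbit, per_feed_mbit):
    """How many whole feeds of a given bitrate fit on the link."""
    return int(link_mbit // per_feed_mbit)

# 20 proxy feeds need 40 Mbit/s, comfortably inside the link...
proxy_ok = CHANNELS * PROXY_MBIT <= LINK_MBIT   # True
# ...while the same link carries only two hi-res feeds.
hires_feeds = feeds_supported(LINK_MBIT, HIRES_MBIT)  # 2
```

Under these assumptions all twenty ingest channels can flow as proxies with headroom left for control traffic, whereas high resolution would saturate the link with just two feeds.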
The obvious solution is to transfer the proxies across the wide area network connection and store them where the editors are located. The editors then work with local data, and performance is fine. However, you then face the challenge of conforming the proxy edits back to high resolution. And since this is a real-time production environment working across multiple time zones, these processes must be automated and driven remotely.
A solution developed to address all these challenges is Sienna MediaVortex, as part of a split-site Sienna media infrastructure. Sienna introduced the concept of a Distributed Media Cloud, where media is not stored centrally as you might expect with a cloud infrastructure, but instead resides across multiple linked locations, each with seamless access to all the rest, giving the impression of centralised media.
MediaVortex allows content in the Distributed Media Cloud to move between locations across high-latency connections, in an automated process triggered by user demand for a given piece of remote media. When an asset is localised in this way, it becomes a conjoined asset: multiple sites hold their own local copy of the media, but the copies remain conjoined in the MAM layer, so logging the media at one site automatically propagates the logging information to every other site sharing the asset. This simple function solves one of the major challenges in sports: the labour required to perform rich logging on content. Using Sienna MediaVortex, a logger at the home facility can log content being recorded at the remote event, with the logging data appearing automatically at both ends, ready for remote or local editors to use. MediaVortex also understands growing media files and can throttle its transfer intelligently to move an ingesting media asset while it is still recording, even if that asset uses multiple linked essence files.
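The conjoined-asset idea can be sketched with a hypothetical data model (this is not Sienna's API, and the names and paths are invented for illustration): each site holds its own copy of the media, while log entries written at any one site become visible against every linked copy:

```python
# Hypothetical sketch of a conjoined asset: per-site media copies sharing
# one logging record. Class, method and path names are invented examples.

class ConjoinedAsset:
    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.sites = {}   # site name -> path of that site's local media copy
        self.log = []     # logging metadata, shared by all sites

    def localise(self, site, local_path):
        """Register a local copy after a MediaVortex-style transfer."""
        self.sites[site] = local_path

    def add_log_entry(self, site, timecode, keyword):
        """A logger at any site annotates the asset; every site sees it."""
        self.log.append({"by": site, "tc": timecode, "keyword": keyword})

race = ConjoinedAsset("athletics-final")
race.localise("London", "/san/hires/athletics-final.mov")
race.localise("Mexico", "/san/proxy/athletics-final.mp4")

# A logger in Mexico tags a moment; an editor in London sees the same marker.
race.add_log_entry("Mexico", "01:02:03:04", "photo-finish")
```

In the real system the propagation happens across the WAN link rather than inside one process, but the editorial effect is the same: one logging pass serves both sites.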
The conjoined asset concept is also the key to automatically conforming proxy edits back at the event location, where the high-resolution media exists. An edit built from various assets can be reformulated using local media by referencing the data which conjoins the media at the two sites.
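The conform step can be sketched with the same hypothetical structures (asset IDs, paths and event fields below are invented for illustration, not taken from Sienna): a proxy edit references conjoined asset IDs rather than file paths, so the event-site system can re-resolve each event against its own hi-res copies before rendering:

```python
# Sketch of conforming a proxy edit to hi-res via the conjoined-asset map.
# Asset IDs, paths and event fields are illustrative, not Sienna's format.

# Conjoined-asset map: asset ID -> per-site media path.
MEDIA = {
    "race-01": {"Mexico": "/proxy/race-01.mp4", "London": "/hires/race-01.mov"},
    "podium":  {"Mexico": "/proxy/podium.mp4",  "London": "/hires/podium.mov"},
}

def conform(edit_events, site):
    """Return a copy of the edit with each event's media path resolved
    to the copy local to `site`; the input edit is left untouched."""
    return [dict(ev, path=MEDIA[ev["asset"]][site]) for ev in edit_events]

# A cuts-only edit assembled in Mexico against proxy media...
proxy_edit = [
    {"asset": "race-01", "in": "00:10:00:00", "out": "00:10:30:00"},
    {"asset": "podium",  "in": "00:45:00:00", "out": "00:45:10:00"},
]
# ...re-pointed at London's hi-res files for the final render.
hires_edit = conform(proxy_edit, "London")
```

Because only asset IDs and in/out points cross the wire, the conform itself needs no media transfer at all; the hi-res essence never leaves the event site unless explicitly requested.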