Analysis: Perform automatically selects, transcodes and distributes sports highlights
Perform Media Group, the multi-national sports media company, operates one of the largest transcoding systems in the industry. “We create and distribute sports content that millions of fans around the world interact with every day,” said Adam Smith, its senior workflow integration engineer. It currently manages more than 20,000 diverse media files per day, a volume made possible by moving to new data-driven workflows that use Aspera Orchestrator to automate the selection, extraction, processing and distribution of match highlights.
Perform is the parent company of data specialists Opta, the Goal football site, the ePlayer on-demand sports platform, the Omnisport sports news service, several data and video providers to the betting industry, and numerous other sports websites, including several broadcasters’ sites. It has more than 1,400 staff in 26 countries, delivering to 2,700+ distributors, with the results seen by more than 180m fans. It holds 250+ rights contracts and covers some 43,000 live events per year, producing more than 150,000 VoD clips.
It has implemented its Orchestrator workflows over the past two years, not only to handle its huge transcoding workload but also to automate live data analysis and sports production, and is now expanding them to add automatic feed clipping.
Faster file transfer
Aspera was set up a decade ago to enable speedy file transfer, a capability Perform also uses. “We can get up to 95% utilisation of your bandwidth, no matter what the distance is,” claimed Andrea Di Muzio, Aspera’s director of professional services. The company has since branched into management and automation with its Orchestrator platform, a graphical tool to design, execute and monitor file-based workflows. It integrates with most other vendors in the media market, including other file transfer platforms.
Orchestrator has been “at the core of what Perform has tried to implement over the last couple of years,” he said, to address “the explosion in file sizes” caused by increased resolution, in multiple formats, and to cater for a huge rise in demand, particularly to second screen applications.
The problems Perform needed to address were supporting the growth of on-demand content, improving its capacity to scale and support different service level agreements, and reducing its costs. To do this it uses the Aspera Connect Server for ingest and the Faspex Server to distribute content, both internally and to the final customer.
Any content coming in can be transcoded to multiple delivery formats, as required by customer profiles, routing content through different transcoding platforms (such as Flip Factory or Pro-Carbon) and managing the Wait Queue based on duration, target format and priority, to ensure it meets its SLAs. It automatically produces all the different deliverables and verifies the content.
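The article does not describe Perform's actual queueing code, but the idea of ordering a wait queue by duration, target format and priority can be illustrated with a minimal Python sketch. The weighting formula and job fields here are hypothetical, chosen only to show short, urgent jobs dequeuing first.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class TranscodeJob:
    # Only sort_key is compared by the heap; lower = dequeued first.
    # The weighting below is a hypothetical formula, not Perform's.
    sort_key: float
    name: str = field(compare=False)
    duration_s: int = field(compare=False)
    target_format: str = field(compare=False)
    priority: int = field(compare=False)  # 1 = most urgent

def make_job(name, duration_s, target_format, priority):
    # Combine priority (dominant) and duration so short, urgent clips go first
    sort_key = priority * 10_000 + duration_s
    return TranscodeJob(sort_key, name, duration_s, target_format, priority)

queue = []
heapq.heappush(queue, make_job("full_match", 5400, "mp4_1080p", 3))
heapq.heappush(queue, make_job("goal_clip", 30, "mp4_720p", 1))
heapq.heappush(queue, make_job("news_item", 120, "mp4_720p", 1))

order = [heapq.heappop(queue).name for _ in range(len(queue))]
print(order)  # ['goal_clip', 'news_item', 'full_match']
```

An operator raising a job's priority on the dashboard would, in this model, simply be lowering its sort key.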
If something is wrong, it tries to correct the content, or informs an operator. It means operators no longer “always have to look at the outcome of a transcoding job”, but are informed if something is wrong, explained Di Muzio. If the queue is too busy and it looks as if Perform might not meet its SLA, Orchestrator will use a cloud transcoding service.
“It makes the entire platform very cost efficient,” as it can be designed for an average peak and offload to the cloud only for exceptional loads. This is aided by using the FASP fast transfer protocol, so there is little extra delay by using cloud-based transcoding. “Perform can now process a lot more content than before, and, especially, process that content reliably,” he told a seminar at the recent Broadcast Video Expo in London. “The idea is having a platform that is sustainable, that is self sufficient, and can scale.”
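The offload decision described above can be sketched as a simple capacity check: size the local farm for an average peak and spill to the cloud only when the projected wait would break an SLA. The wait estimate and threshold below are illustrative assumptions, not Aspera's actual logic.

```python
def estimated_wait_s(queue_durations, workers):
    # Rough wait estimate: total queued transcode time spread across local workers
    return sum(queue_durations) / max(workers, 1)

def choose_backend(queue_durations, workers, sla_deadline_s):
    # Offload to cloud only when the local farm cannot meet the SLA deadline
    if estimated_wait_s(queue_durations, workers) > sla_deadline_s:
        return "cloud"
    return "local"

# Normal load: three 10-minute jobs on two workers stay local
print(choose_backend([600, 600, 600], workers=2, sla_deadline_s=1200))  # local
# Saturday-afternoon spike: four full matches exceed the deadline
print(choose_backend([3600] * 4, workers=2, sla_deadline_s=1800))       # cloud
```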
The next step is to push the platform “to another level, by allowing Perform to do automatic feed clipping and distribution,” said Di Muzio. During a match, the video is recorded directly from the live feed and data about what has happened is stored on several internal Perform systems. That data triggers a workflow in Orchestrator that will auto-clip events, such as a goal or yellow card, and will then take care of transcoding and delivery. As it re-uses all of those parts of the platform already in place, auto-clipping didn’t take long to implement.
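The data-triggered auto-clipping flow can be pictured as a filter over the live event feed: match events of interest become in/out clip windows that feed the existing transcode pipeline. The event schema and pre/post-roll padding here are hypothetical.

```python
def clip_window(event_time_s, pre_roll_s=10, post_roll_s=20):
    # Build an in/out window around a match event
    # (pre/post-roll padding values are assumptions, not Perform's)
    start = max(0, event_time_s - pre_roll_s)
    return start, event_time_s + post_roll_s

def events_to_clip_jobs(events, wanted_types=frozenset({"goal", "yellow_card"})):
    # Turn a live data feed into clip jobs for the existing transcode/delivery pipeline
    jobs = []
    for ev in events:
        if ev["type"] in wanted_types:
            start, end = clip_window(ev["time_s"])
            jobs.append({"event": ev["type"], "in": start, "out": end})
    return jobs

feed = [{"type": "goal", "time_s": 754}, {"type": "corner", "time_s": 900}]
print(events_to_clip_jobs(feed))  # [{'event': 'goal', 'in': 744, 'out': 774}]
```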
The system now ingests and transcodes more than 20,000 media files every day, with more than 1,000 different transcode combinations. It has allowed Perform to move from having “a huge, crazy control room” to an administrator with a laptop able to control almost everything. Operators only become involved if something goes wrong or, if something in a queue is particularly urgent, to easily increase its priority on a graphical dashboard. “This allows us to respect SLAs very well,” he said.
“It doesn’t matter how extreme the processing power requested by Perform is, the platform can scale very, very well and very easily.”
Di Muzio believes that this implementation marks “an industry first for heterogeneous file transcoding management. As far as I know, nobody else has done it. With what Perform has today, they can decide to use whatever transcoding vendor they want, and they have an overall layer where they can manage all this based on priorities and SLAs, failure management and operators’ decisions – all with a single platform.
“We’ve been able to reduce the support burden and costs by only acting when it’s really needed and not having to constantly monitor the platform.” Of course, “there is always a need for a human being, and when this need pops up, you want to be alerted that there is something that you need to do and have an easy way of doing it, without having to leave your platform.”
Since Orchestrator was introduced in 2013, it has provided Omnisport’s video news desk with “smooth and trouble-free ingest management,” according to Omnisport newsdesk editor, Richard Canham. The system works closely with Dalet to raise an alert if something fails to meet technical requirements or if there is an error in the transcode. “This has helped increase productivity on the planning desk, as they are now afforded the time to focus on other areas of their work,” and he believes that “the ability to filter footage and be notified when supplier-delivered footage falls outside the minimal technical parameters is a great benefit to the team.
“The ability to identify the source of the problem makes for easier and clear communication when addressing an issue with the supplier and has benefited the overall quality of our HD service.
“Orchestrator automatically decides on the best encoding profile to suit the source media; automatically invoking frame rate and field order conversion as well as aspect ratio. This has ensured that the ready-to-edit files are always processed correctly, regardless of the specification of the source,” he said.
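The profile-selection behaviour Canham describes amounts to comparing the source's characteristics against the house standard and invoking only the conversions needed. The sketch below assumes a hypothetical 25 fps, progressive, 16:9 house format; the field names and step labels are illustrative.

```python
def choose_profile(src):
    # Decide which conversions are needed to reach the house format
    # (assumed standard: 25 fps, progressive scan, 16:9 — an illustration only)
    steps = []
    if src["fps"] != 25:
        steps.append("frame_rate_conversion")
    if src["field_order"] != "progressive":
        steps.append("deinterlace")
    if src["aspect"] != "16:9":
        steps.append("aspect_ratio_conversion")
    return steps or ["passthrough"]

# An interlaced NTSC source needs all three conversions
ntsc_tape = {"fps": 29.97, "field_order": "tff", "aspect": "4:3"}
print(choose_profile(ntsc_tape))
# A source already at the house standard passes straight through
print(choose_profile({"fps": 25, "field_order": "progressive", "aspect": "16:9"}))
```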
Charlotte Mann, senior staff editor, Omnisport, added: “Before we had this system in place, ingesting footage from various cameras and media cards was time consuming for both editors and editing equipment. For rushes filmed in the afternoon, to be ready for a morning edit, we would have to allocate several edit suites for the different ingest procedures, and book in someone to work with this overnight.
“Now, it’s as simple as dragging and dropping all video files into one place, and we know they will all be ingested onto the system correctly. It means we can spend our edit time actually editing, rather than ingesting, which has been fantastic,” she said. “The Orchestrator process of flagging files that are not to spec before they enter our system also gives us a heads up as to whether or not to use footage from certain sources. This gives us more time to make informed decisions about how to proceed.”
“When we were setting out to design the workflows for this, we needed to be able to make workflows that could adapt to all of our clients’ requirements. We didn’t want to make workflows for each individual,” explained Smith. Even if the journey between ingest and distribution varies for each client, the overall workflow is similar. “We can turn each part of the journey on or off depending on what the client requires,” he added.
“We try to build our workflows as component pieces that we can put together like Lego blocks to achieve a coherent, complete end-to-end workflow.” One that it has been working on is a way of taking a trigger from a data feed and then extracting the relevant information from that. “Once we have the information about the match, we can query our in-house Perform feeds to find out if the match is required for auto clipping, who wants it, and whether we need it for our ingest platforms or our edit machines.” It can then create titles and descriptions, and use Orchestrator to interact with other applications, such as ScheduALL, so that it gathers all the information needed to know whether to extract it and submit it to its ingest or transcode farms. It will also flag up any problems and tell operators where to look to check them.
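Smith's "Lego blocks" approach — one generic workflow whose stages are switched on or off per client — can be sketched as assembling a pipeline from a client configuration. The stage names and toggle format below are hypothetical.

```python
def build_pipeline(client_config, stages):
    # Assemble only the steps this client has switched on
    return [fn for name, fn in stages if client_config.get(name, False)]

def run(media, pipeline):
    # Pass the media item through each enabled stage in order
    for step in pipeline:
        media = step(media)
    return media

# Toy stages that just record what happened to the item
stages = [
    ("extract",   lambda m: m + ["extracted"]),
    ("transcode", lambda m: m + ["transcoded"]),
    ("titles",    lambda m: m + ["titled"]),
    ("deliver",   lambda m: m + ["delivered"]),
]

# This client skips automatic titling but keeps everything else
client = {"extract": True, "transcode": True, "deliver": True}
print(run([], build_pipeline(client, stages)))
# ['extracted', 'transcoded', 'delivered']
```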
A further workflow takes all the data the first has ascertained and uses it to call an API on the recorder to extract the media file, re-wraps it if necessary, queues and prioritises it, and can pass it on for further editing (Avid or Dalet). Another workflow submits the file to the transcode platform and mimics what the editor had been doing previously, so “nobody had to redesign any in-house systems.” It polls the databases to get the correct language IDs and any other requirements, which are bundled into an XML file. The file is then validated by another sub-routine, which checks everything meets its in-house requirements.
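The bundling-and-validation step can be sketched with the standard library: polled requirements go into a job XML, and a validation routine checks every required field is present and non-empty. The element names and schema here are invented for illustration, not Perform's actual job format.

```python
import xml.etree.ElementTree as ET

def build_job_xml(file_id, language_id, target_format):
    # Bundle the polled requirements into a job XML (hypothetical schema)
    job = ET.Element("transcode_job")
    ET.SubElement(job, "file_id").text = file_id
    ET.SubElement(job, "language_id").text = language_id
    ET.SubElement(job, "target_format").text = target_format
    return ET.tostring(job, encoding="unicode")

def validate_job_xml(xml_text, required=("file_id", "language_id", "target_format")):
    # The validation sub-routine: every required field present with a value
    root = ET.fromstring(xml_text)
    return all(
        (el := root.find(tag)) is not None and el.text
        for tag in required
    )

xml_text = build_job_xml("match_1234", "en-GB", "mp4_1080p")
print(validate_job_xml(xml_text))                      # True
print(validate_job_xml(build_job_xml("x", "", "mp4"))) # False: language missing
```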
There is only so much transcoding capacity, so it then checks the duration and priority of every file. “We have a lot of high-turnaround news items that need to get out quickly, but we also have a lot of full match content, and they take a lot of time to transcode,” explained Smith. If a lot of matches get submitted in a row, as on a Saturday afternoon, “it could saturate the platform.” If a news item has to wait too long, its SLAs could be broken, “so we always leave enough space for urgent news content to jump in.” The CPUs are always maxed out, but the queue can always be jumped.
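"Always leaving enough space for urgent news content to jump in" is, in effect, a capacity reservation: long-form jobs may only use the unreserved slots, while news can dip into the reserve. The policy below is a minimal sketch of that idea, with invented job kinds and slot counts.

```python
def may_schedule(slots_free, reserved_for_news, job):
    # Hypothetical admission policy: full-match jobs may not eat into the
    # slots held back for high-turnaround news items
    if job["kind"] == "news":
        return slots_free > 0
    return slots_free > reserved_for_news

# Two slots free, both held in reserve: a full match must wait...
print(may_schedule(2, 2, {"kind": "full_match"}))  # False
# ...but a news item can jump straight in
print(may_schedule(2, 2, {"kind": "news"}))        # True
```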
The system also checks the result of every transcode. If a transcode has failed because of a server problem, it ensures that no more files are queued for that server (which did happen in the past, meaning jobs were held up). Orchestrator tells the operator which server is out, and why (such as if it has no network connection or the application it needs isn’t running).
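The failed-server behaviour — stop routing work to a broken machine and record the reason for the operator alert — can be sketched as a small health-tracking pool. The class and server names are hypothetical.

```python
class ServerPool:
    # Track transcode servers and stop routing jobs to ones that have failed
    def __init__(self, names):
        self.healthy = set(names)
        self.failures = {}

    def report_failure(self, name, reason):
        # Take the server out of rotation and record why, for the operator alert
        self.healthy.discard(name)
        self.failures[name] = reason

    def pick(self):
        # Only queue new jobs to healthy servers
        return sorted(self.healthy)[0] if self.healthy else None

pool = ServerPool(["tc01", "tc02"])
pool.report_failure("tc01", "no network connection")
print(pool.pick())      # tc02
print(pool.failures)    # {'tc01': 'no network connection'}
```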
A transcode engine should do its own verification of a file, “but it didn’t always tell the truth about the reliability of the file. It might say it had been completed, but then you go and check it and it had been corrupted, or there’d be zero kilobits in the file or no audio.” So, they added checks in Orchestrator to ensure files have video and audio information, and that both match.
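The extra checks Smith describes — a non-empty file, both video and audio present, and the two streams matching — can be sketched as a post-transcode verification function. The probe-result fields and mismatch tolerance are assumptions for illustration.

```python
def verify_output(info):
    # Post-transcode sanity checks beyond what the engine itself reports:
    # non-zero size, both streams present, and durations in agreement
    # (the 0.5 s tolerance is an assumed threshold)
    problems = []
    if info["size_bytes"] == 0:
        problems.append("empty file")
    if not info.get("video_duration_s"):
        problems.append("no video")
    if not info.get("audio_duration_s"):
        problems.append("no audio")
    elif info.get("video_duration_s") and \
            abs(info["video_duration_s"] - info["audio_duration_s"]) > 0.5:
        problems.append("audio/video duration mismatch")
    return problems

# An engine reported success, but the file has no audio track
bad = {"size_bytes": 1_000_000, "video_duration_s": 60.0, "audio_duration_s": 0}
print(verify_output(bad))  # ['no audio']
```

An empty result list would let the file pass; anything else would route it to correction or raise an operator alert.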
The system also spots where a file has caused the transcode engine to hang. “Some types of files could cause a hang, and it would sit there indefinitely,” but Orchestrator calculates how long a transcode should take on a particular machine, and will alert operators if it takes too long — and if it is broken, Orchestrator will move the file elsewhere and make it a higher priority.
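Hang detection of this kind boils down to comparing elapsed time against an expected runtime derived from the clip's duration and the machine's historical speed. The speed metric and tolerance factor in this sketch are hypothetical.

```python
def expected_runtime_s(clip_duration_s, machine_speed):
    # machine_speed: observed seconds of transcode time per second of media,
    # an assumed per-machine metric learned from past jobs
    return clip_duration_s * machine_speed

def is_hung(clip_duration_s, machine_speed, elapsed_s, tolerance=2.0):
    # Flag jobs running far beyond the machine's historical rate,
    # so the file can be moved elsewhere at higher priority
    return elapsed_s > expected_runtime_s(clip_duration_s, machine_speed) * tolerance

# A 10-minute clip on a machine that transcodes at 2x real time
# should finish in ~300 s; at 900 s elapsed it is flagged
print(is_hung(600, machine_speed=0.5, elapsed_s=900))  # True
print(is_hung(600, machine_speed=0.5, elapsed_s=400))  # False
```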