Inside TAMS: How Time-Addressable Media Stores could redefine sports workflows

A penalty shoot-out goes to the wire, and within a minute the clip is in replay, on social media and on news sites around the world, in different aspect ratios, frame rates and formats – but it hasn’t been delivered by satellite, streamed as SRT or sent as a file. Instead, it has come via TAMS – the Time-Addressable Media Store API, a new approach set to transform how content moves from capture to clip to consumer.

Formulated by BBC R&D, and embraced by AWS, Drastic, Reuters, Techex and other industry players, TAMS won the IABM Industry Partnership Award at IBC2025. This month’s On Air event hosted at Ravensbourne University in London used TAMS as the backbone of a 26-hour continuous live broadcast involving students around the world. But why is TAMS such a big deal and how does it work?

In the pre-digital era, media travelled either as a live feed (satellite, microwave, cable) or on tape. When broadcast workflows went digital we created direct substitutes: feeds became SRT streams, and tapes became files. On the viewer side, early proprietary streaming formats like Flash Video gave way to HLS and DASH, with their adaptive bitrate ladders. These segment the video into short ‘chunks’ distributed over standard internet infrastructure, reaching players that dynamically reassemble the stream at whatever quality level the connection can support.

TAMS takes this idea and applies it to contribution and storage. Instead of storing video clips as discrete files, or sending live feeds in transport streams, it segments both into addressable chunks. Rather than the bitrate ladder of streaming, the chunks are grouped by source, which can be any number of audio, video, text or metadata components. Instead of having to duplicate these sources for different editorial or technical versions, the store can incorporate referenced flows. And rather than a manifest file, there’s an open-source API specification which any application can implement to contribute, process or consume any part of the media by referencing a time range.
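To make that model concrete, here is a minimal Python sketch of the idea: one source with several flows, and a request for just the segments covering a given time range. The endpoint paths, field names and timerange syntax are modelled loosely on the open-source TAMS API specification, and the host and identifiers are hypothetical, so treat it as an illustration rather than a drop-in client.

    import requests

    TAMS = "https://tams.example.com"        # hypothetical store endpoint
    SOURCE_ID = "camera-3-source-uuid"       # hypothetical source identifier

    # One source (say, "Camera 3") can have several flows: the full-quality video,
    # a proxy, audio tracks, subtitles or data - all sharing the same timeline.
    flows = requests.get(f"{TAMS}/flows", params={"source_id": SOURCE_ID}).json()

    # Ask only for the segments covering a 20-second window, rather than a file.
    timerange = "[3600:0_3620:0)"            # seconds:nanoseconds, end-exclusive
    for flow in flows:
        segments = requests.get(f"{TAMS}/flows/{flow['id']}/segments",
                                params={"timerange": timerange}).json()
        print(flow.get("format"), len(segments), "segments in range")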

A living database

In practice, this means that a single TAMS store can function like a living database for media sources and flows. Editors, AI tools and automation systems can all reference the same moment in time without having to duplicate or render new versions. The same segment of video, audio or metadata can be analysed, reframed or reused without ever being copied or transcoded. This is what makes it truly cloud-native – it treats media as structured, time-indexed data rather than collections of files.

So far, most documented TAMS use cases have focused on news. Reuters in particular sees significant potential efficiencies; a lot of news events involve hours of waiting for a short newsworthy moment – for example a jury verdict, a key exchange in parliamentary footage, or a VIP arrival. Instead of distributing hours of continuous live video for multiple affiliates to record so they can capture those key seconds, TAMS will allow journalists to quickly clip the moment they need from a central source.
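As an illustration of that clipping workflow, the sketch below pulls only a few seconds around a verdict from a central store, assuming – as in the published TAMS pattern – that each segment record carries a URL pointing at the underlying media object. The flow ID, timerange and the assumption that segments are self-contained MPEG-TS chunks that concatenate cleanly are all hypothetical.

    import requests

    TAMS = "https://tams.example.com"          # hypothetical store endpoint
    FLOW_ID = "courtroom-feed-flow-uuid"       # hypothetical flow for the live feed
    verdict = "[52711:0_52726:0)"              # ~15 seconds around the key moment

    segments = requests.get(f"{TAMS}/flows/{FLOW_ID}/segments",
                            params={"timerange": verdict}).json()

    # Download just those objects - seconds of media, not hours of recording.
    with open("verdict_clip.ts", "wb") as clip:
        for seg in segments:
            url = seg["get_urls"][0]["url"]    # presigned URL into object storage
            clip.write(requests.get(url).content)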

That same principle of “reference, don’t replicate” could reshape how sports broadcasters and rights holders think about contribution and archive. Every camera feed, every mic track, every data source becomes a time-addressable asset that can be accessed instantly without relying on traditional shared filesystems, MAMs or large file transfers. TAMS complements other emerging open standards such as DPP’s Live Production Exchange (LPX), the EBU’s Digital Media Framework (DMF), and ISO’s Network-Based Media Processing (NBMP), giving vendors a consistent model for exchanging and orchestrating content across systems.

Live sport doesn’t involve as much waiting around as those news use cases – even over five days of test cricket, there are audiences keen to watch continuously. TAMS won’t provide the super low latency we might want for live viewing, but those key moments are really important and fans expect instant analysis. Whether it’s a catch, a goal or a winning sprint, audiences want to see the replay, the angles and the data – how fast were they? How close was it? And if we can get some of these clips onto social media platforms fast, the immediacy can grab attention and attract engagement, pulling in new viewers and subscribers.

With its concept of time-indexed sources, a TAMS store becomes a database of components that we can rapidly draw together. All the camera angles. The microphones that picked up the thud of the ball below the roar of the crowd. But also the real-time data on participants, movement, speed and timing. We might have already seen the moment in real time, but in a few seconds an AI agent could analyse it, add more data to the store, produce a slow-mo from a new angle, and frame a vertical version with on-screen stats for TikTok.
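One way an agent could do that, sketched below, is to register its analysis as a separate time-aligned data flow against the same source, so downstream tools can query the stats for a moment exactly as they query the video. The format URN, the use of PUT to register a flow and the field names are assumptions based on how the TAMS specification is organised, not a verified client.

    import requests, uuid

    TAMS = "https://tams.example.com"            # hypothetical store endpoint
    SOURCE_ID = "camera-3-source-uuid"           # hypothetical: same source as the video

    data_flow = {
        "id": str(uuid.uuid4()),
        "source_id": SOURCE_ID,
        "format": "urn:x-nmos:format:data",      # assumed URN for a data/metadata flow
        "label": "ai-tracking-stats",
    }
    # Register the new flow, then upload the JSON stats as segments covering the
    # same timeranges as the video they describe, via the store's upload workflow.
    requests.put(f"{TAMS}/flows/{data_flow['id']}", json=data_flow)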

Similarly, creating a highlights package today means moving large files between systems and teams – clipping, re-encoding, and assembling everything into an edit to be published. With TAMS, those steps become largely virtual. Editors or automation tools can define highlight sequences simply by referencing time ranges from the original sources. The resulting flow can be rendered or streamed on demand, without copying or re-encoding the media. That means faster turnaround, no duplications eating storage space, and instant tailoring for every platform – from news clips to broadcast compilations and vertical mobile edits.
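A rough sketch of that edit-by-reference idea: the highlights package is just a list of time ranges, and the “new” flow is built by re-registering the existing segment records – references to the same stored objects – rather than copying or re-encoding any media. Flow IDs, field names and the single-camera simplification are illustrative assumptions.

    import requests, uuid

    TAMS = "https://tams.example.com"            # hypothetical store endpoint
    MATCH_FLOW = "match-camera-1-flow-uuid"      # hypothetical flow holding the match feed

    highlights = [                               # the whole edit is just time ranges
        "[312:0_327:0)",                         # opening goal
        "[2104:0_2119:0)",                       # penalty save
        "[5641:0_5660:0)",                       # winning sprint finish
    ]

    highlight_flow = str(uuid.uuid4())
    requests.put(f"{TAMS}/flows/{highlight_flow}",
                 json={"id": highlight_flow, "source_id": str(uuid.uuid4()),
                       "format": "urn:x-nmos:format:video", "label": "auto-highlights"})

    for tr in highlights:
        segs = requests.get(f"{TAMS}/flows/{MATCH_FLOW}/segments",
                            params={"timerange": tr}).json()
        for seg in segs:
            # Point the highlights flow at the same stored object - a reference,
            # not a copy; a real edit would also re-time the segments.
            requests.post(f"{TAMS}/flows/{highlight_flow}/segments",
                          json={"object_id": seg["object_id"],
                                "timerange": seg["timerange"]})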

Achieving similar output with today’s workflows relies on having enough specialist storage and processing hardware, bandwidth and people, and the cost, complexity and friction mean that even the biggest productions have to limit turnaround speed and creative scope. Large teams with substantial kit are needed on-site, with associated set-up, take-down and travel overheads. Video is collected on SANs and indexed by file name, requiring specialist knowledge and training to retrieve the right file quickly at the right moment. For smaller events and lower-tier sports, budgets constrain what’s possible.

For outside broadcasts or remote productions, TAMS could replace racks of on-site storage with a single cloud store, accessible simultaneously by logging, highlights, and replay teams. Instead of multiple edit suites pulling from separate drives, you have one scalable, time-addressable store accessible from anywhere. Not only does this mean cost savings and better output, it makes more, higher quality coverage viable – simultaneous games, lower tiers, niche sports.

Integration work is already underway between early implementers and established production toolsets, ensuring TAMS can complement as well as replace existing infrastructure. At the Ravensbourne event the entire output was ingested into a single cloud-hosted TAMS store, giving editors and automation tools instant access to any moment as it happened. Clips for social media and edited segments were generated by students around the world without waiting for file transfers or transcoding.

As trials move from proof-of-concept to real-world workflows TAMS could redefine how broadcasters manage highlights, archive and rights – making time, not files, the common currency of production. What happens when real-world sports workflows meet this model? How can broadcasters, vendors and streaming platforms put it to work? Those are the questions Part Two will explore.


Comparisons have been drawn between TAMS and Quantel’s FrameMagic system, which originated in the 1990s. While both approaches focus on precise, time-based access to content, TAMS represents a modern, cloud-native evolution that is scalable and API-driven.

Feature | TAMS | Quantel FrameMagic
Core idea | Time-based access to media objects and associated metadata | Frame-accurate access within shared storage
Architecture | Cloud-native, distributed | On-prem SAN
API access | REST | Proprietary
Scaling | Limited only by hyperscaler capacity (in practice, no limit) | Tied to hardware
Typical use | Remote/cloud workflows | Studio post environments
