By Ian Wagdin, VP technology & innovation, Appear.
The sports video industry does not need more acronyms, but it is about to get two that really do matter. Dynamic Media Facility (DMF) and its Media eXchange Layer (MXL) mark the moment standards meet software. Together they promise a common fabric for data, timing and control, so synchronous video and asynchronous compute can finally share one system instead of being connected by one-off integrations.
If we treat them seriously, 2026 is not just another tech cycle; it is the point at which the sports video industry starts to behave like a software business from stadium to studio. Most venues today are islands – trucks or fly-aways roll in, everything is patched together for a few hours, then rolled back out. Even permanent IP builds are often locked into a single-vendor stack.
A DMF-era venue should look more like a platform:
- Containerised apps for contribution, protection and healing, processing and monitoring
- Shared timing and identity so every packet and every frame is addressable
- Workflows that can be spun up for an event and torn down after.
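To make "shared timing and identity" concrete, here is a minimal sketch of what an addressable frame could look like: a flow identifier paired with a timestamp against a shared PTP clock. The class and field names are illustrative assumptions, not taken from the DMF or MXL specifications.

```python
import uuid
from dataclasses import dataclass

# Hypothetical frame identity: a flow UUID plus a PTP-derived timestamp.
# Names and fields are illustrative, not part of the DMF/MXL specifications.
@dataclass(frozen=True)
class FrameId:
    flow: uuid.UUID        # which media flow this frame belongs to
    ptp_nanoseconds: int   # capture time against the shared PTP clock

    def address(self) -> str:
        """One string that makes this frame addressable across the fabric."""
        return f"{self.flow}:{self.ptp_nanoseconds}"

camera_flow = uuid.uuid4()
frame = FrameId(flow=camera_flow, ptp_nanoseconds=1_700_000_000_123_456_789)
print(frame.address())
```

The point of the sketch is that once every frame carries a fabric-wide identity plus a shared-clock timestamp, any container anywhere in the system can refer to exactly the same frame.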
Ultra-low latency has been treated as a nice-to-have for demos or an optional upgrade, but that attitude doesn’t hold once workflows are genuinely software-defined. Remote production, VAR systems, interactive fan products, and in-venue apps all depend on predictable timing across hybrid networks. If a single link slips, the entire chain falls out of sync.
DMF’s architecture model and MXL’s role as a bridge between media networks and compute are designed to make latency budgets both explicit and enforceable, rather than leaving them as a best-effort promise. Once those guarantees become real, tolerating arbitrary or unpredictable delay in any part of the system stops being merely inefficient and starts looking reckless. Add to this the work being proposed in the AMWA/EBU joint taskforce (JT-DMF) to address timing and orchestration, and we will see integrated workflows that run in software no matter how they are deployed: a system that meets the demands of the future rather than echoing those of the past.
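One way to read "explicit and enforceable latency budgets" is as per-hop allowances checked against an end-to-end target, so a violation is detectable before a link slips out of sync. This is a toy sketch; the hop names and millisecond figures are invented for illustration, and real budgets would come from the workflow design.

```python
# Toy latency-budget check: per-hop delays in milliseconds are accumulated
# in chain order and compared against an end-to-end target. Hop names and
# figures are invented for illustration only.
BUDGET_MS = 100.0

hops = {
    "camera-to-gateway": 5.0,
    "venue-processing": 20.0,
    "contribution-link": 35.0,
    "cloud-replay": 25.0,
    "distribution-encode": 10.0,
}

def over_budget(chain: dict[str, float], budget_ms: float) -> list[str]:
    """Return the hops that no longer fit once the budget is spent in order."""
    spent, violations = 0.0, []
    for name, cost in chain.items():
        spent += cost
        if spent > budget_ms:
            violations.append(name)
    return violations

total = sum(hops.values())
print(f"total {total} ms of {BUDGET_MS} ms budget; violations: {over_budget(hops, BUDGET_MS)}")
```

The design point is that the budget lives in data the orchestrator can inspect, not in an engineer's head, which is what turns "best-effort" into "enforceable".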
Hybrid is not a phase, it’s the destination
The point of the shift to IP was never to recreate SDI patching in software. It was to reach a world where everything involved, from cameras and GPUs to CPUs and the cloud, participates in one production fabric.
Some functions will stay near cameras for latency and resilience. Many others, from replay encoding to quality control (QC) and versioning, belong on elastic compute. DMF matters because it assumes this hybrid reality and treats timing and metadata as first-class concerns across the whole chain. The industry mistake would be to treat hybrid as an awkward interim state, instead of the permanent normal it has become and will continue to be.
Expect to see plenty of DMF and MXL logos on booths long before you see working products. Vendors will contribute code to the reference implementations and tools that make the standards usable – and that’s my dividing line between hype and reality.
Real openness shows up as:
- Code level participation in DMF and MXL projects
- Public, documented APIs
- Proven multi-vendor tests that can be repeated outside trade shows.
Vendor lock-in shows up as:
- Beautiful control surfaces that only speak to their own stack
- ‘Standards compliant’ products that mysteriously demand proprietary orchestrators
- Monitoring tools that refuse to share metrics with anything else.
If a supplier cannot show you real activity in the DMF ecosystem or let you run their functions under your own orchestration, you may be buying a shinier island rather than building a software-defined future.
Orchestrated services, not fixed pipelines
Previously, galleries, trucks and MAM systems were built for peak load and wired once, locking in both the available capacity and the level of operational risk. Today’s emerging model treats production as a set of micro-services: replay, graphics, encoding, editing, QC and packaging for onward distribution. These can be deployed as containers on-prem or in the cloud, orchestrated per event, and shut down when the tournament is over.
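The "deploy per event, tear down after" model can be sketched as a simple lifecycle. The orchestrator here is a hypothetical in-memory stand-in; a real deployment would drive a container platform, not a Python set.

```python
# Hypothetical per-event orchestration: services are spun up for an event
# and released afterwards. This is an in-memory sketch only; no real
# container runtime or DMF API is being invoked.
class EventWorkflow:
    def __init__(self, event: str, services: list[str]):
        self.event = event
        self.requested = services
        self.running: set[str] = set()

    def spin_up(self) -> None:
        """Start every requested service for the duration of the event."""
        self.running.update(self.requested)

    def tear_down(self) -> None:
        """Release all compute once the event, or the tournament, is over."""
        self.running.clear()

cup_final = EventWorkflow("cup-final", ["replay", "graphics", "qc", "packaging"])
cup_final.spin_up()
print(sorted(cup_final.running))
cup_final.tear_down()
print(sorted(cup_final.running))
```

The commercial consequence is the one the article argues for: capacity and cost follow the event calendar instead of being fixed at peak load.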
To make this vision deliverable, sports operators need governance that looks like modern software operations, including:
- End-to-end monitoring and observability from contribution to distribution
- Clear incident response ownership across partners
- Change control for workflows that are assembled dynamically
- Security that treats video, control and data holistically.
Again, none of this is possible if key components live in closed dashboards. Between now and 2026, rights holders, broadcasters and service providers can tilt the industry one of two ways:
Towards elastic, best of breed workflows:
- Specifying DMF concepts, not vendor product names, in tenders
- Requiring container-ready software with open APIs and documented timing and metadata models
- Making participation in DMF and MXL development a factor in procurement
- Investing in observability and security
- Designing commercial models around event-based and usage-based consumption.
Or back to polished lock-in:
- Buying single-vendor stacks that control orchestration and monitoring
- Accepting ‘DMF-ready’ as a marketing phrase instead of demanding proof
- Treating hybrid operation as an annoyance rather than the default.
The sports video industry has already spent a decade limping from SDI to IP. DMF and MXL provide an opportunity to finish that journey, with standards that match the software reality in stadiums, data centres and cloud regions.
It’s up to all of us to work together to make sure that 2026 is remembered as the year live sports video technology stopped being a collection of technical islands and became an elastic, programmable ecosystem from stadium to studio. That shift will drive deeper audience engagement and better monetisation of assets, so being part of the journey means opportunity for all.