By Alex Redfern, CTO, EVS.
Artificial intelligence has moved well beyond the experimental stage in live sports production and is now commonly used across today’s workflows. From speeding up routine tasks to enabling creative output that would have been unthinkable just a few years ago, the impact is undeniable.
The economic argument is equally compelling. Production budgets continue to tighten while audience expectations keep climbing. For many teams, AI has become the lever that lets them do more with less and maintain quality in the process.
AI already powers many of the tools trusted by sports broadcasters around the world, and plays a big part in our vision for the future of live production. But as we see its adoption grow, we’re also seeing a shift in the nature of the conversations we’re having with customers.
The question is no longer whether AI is capable of delivering high-quality results; the industry already knows that it can. Instead, the debate is shifting towards trust: trust in the provenance of content, trust in the accuracy of algorithms, and trust that ‘real time’ means what it should in environments defined by milliseconds.
The need for certainty
When is it appropriate to rely on AI? How do we ensure AI-generated imagery enhances storytelling without misleading viewers? And how much transparency is required when AI influences editorial decisions? These are the questions the industry is grappling with today.
In high-stakes moments, broadcasters expect complete confidence that any visual element informing a decision is accurate. When an algorithm synthesises motion or reconstructs missing information, concerns about misinterpretation become entirely understandable.
Nowhere is this more sensitive than in officiating. Whether determining if a ball was in or out, if a foot crossed the line, or whether contact occurred before the whistle, stakeholders require absolute certainty that the images guiding those decisions are both authentic and reliable.
This scrutiny is justified, but it’s not always applied consistently.
For example, in officiating and referee review, computer-generated graphics and animations are widely used and accepted in the decision-making process. AI-generated frames are not fundamentally different from these graphics, visual effects, or the CGI already used to explain complex plays; they are all forms of generated imagery, and can all be beneficial in the storytelling process.
Similarly, in the up-conversion from 1080p to UHD, about 75% of the pixels on screen are newly generated during the process. In other words, three-quarters of the image is synthetic, but because the transformation relies on familiar hardware or established software, the authenticity of the content is rarely questioned.
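The 75% figure follows directly from the pixel counts: a 1080p frame is 1920×1080 pixels, while a UHD frame is 3840×2160, four times as many. A minimal sketch of the arithmetic:

```python
# Pixel counts for a 1080p (HD) frame and a UHD frame.
hd_pixels = 1920 * 1080    # 2,073,600 captured pixels
uhd_pixels = 3840 * 2160   # 8,294,400 pixels, four times as many

# Only the original HD pixels carry captured data; the rest of the
# UHD frame must be synthesised by the up-conversion process.
generated = uhd_pixels - hd_pixels
fraction_generated = generated / uhd_pixels

print(f"{fraction_generated:.0%} of the UHD frame is newly generated")  # 75%
```

Whether those new pixels come from a conventional scaler or an AI model, the proportion of synthesised image data is the same.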
Introduce AI into the very same workflow, and suddenly an equivalent enhancement sparks debate.
This double standard exposes a deeper truth: acceptance of technology often depends more on perception and communication than on actual technical accuracy.
Another dimension of trust is speed. Live sports production demands extremely low latency, and the margins leave little room for error. Some AI tools already deliver near-real-time results, but in many live scenarios ‘real time’ is measured in milliseconds, and ‘near real time’ isn’t fast enough.
Too often, AI relies on closed files (complete pieces of video and audio data with a defined start and end) to process and output results. Doing this in actual real time, with milliseconds rather than seconds of latency, is a significant challenge, and the difficulty depends heavily on the complexity of the processing required.
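The difference between file-based and truly live processing comes down to a per-frame latency budget: at 50 frames per second, each frame must be fully processed in under 20 milliseconds, with no opportunity to wait for the clip to end. A minimal sketch of this constraint, assuming a hypothetical `process_frame` stand-in for a real model:

```python
import time

def process_frame(frame):
    """Hypothetical per-frame inference step (stand-in for a real model)."""
    return frame  # placeholder: a real model would enhance the frame here

def stream_process(frames, budget_ms=20.0):
    """Process frames one at a time as they arrive, rather than waiting
    for a closed file, and flag any frame that exceeds the latency budget."""
    for i, frame in enumerate(frames):
        start = time.perf_counter()
        out = process_frame(frame)
        elapsed_ms = (time.perf_counter() - start) * 1000
        if elapsed_ms > budget_ms:
            print(f"frame {i}: {elapsed_ms:.1f} ms exceeds {budget_ms} ms budget")
        yield out
```

The point of the sketch is structural: a live pipeline must emit each output frame before the next one arrives, which is a far harder constraint than batch-processing a finished clip.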
Scaling AI to meet these expectations consistently, across more scenarios and more types of content, is the next frontier. Innovation is accelerating quickly, and bringing AI as close to live as technically possible is a top priority.
We’ve moved beyond asking what AI can do; its capabilities are becoming well established. The critical question now becomes: ‘How do we deploy AI in ways that strengthen, rather than compromise, trust?’
Answering that means being thoughtful about where AI belongs, why it’s being used, and what editorial safeguards need to be in place. Clear communication and consistent standards are key: they help production teams use the tools as efficiently as possible, give broadcasters confidence in the provenance of what goes on air, and reassure audiences that what they’re seeing genuinely helps tell the story accurately.
The challenge is big, but so is the opportunity.