Apps want to break free: Lawo on setting processing apps free

By Jamie Dunn, Lawo deputy CEO.

It’s funny how history finds ways of repeating itself in increasingly shorter cycles that deliver ever more spectacular advances. The vision to run broadcast-grade processing functionality on generic compute hardware is arguably the final step in a long series of transitions from one status quo to the next: analogue to digital, and baseband to IP.

Some of us may remember the migration from analogue or digital multitrack tape recorders to software running on Atari computers, which were eventually replaced with Digital Audio Workstations (DAWs) for Mac and PC.

With the advent of field-programmable gate arrays, something similar happened to broadcast processors: proprietary hardware devices gained software-defined functionality, allowing operators to use a single unit for a variety of tasks.

Given the need for ever faster, more powerful processing and greater IP bandwidth, some vendors began looking for strategies that would let them meet these expectations while remaining price-competitive and at the cutting edge of what broadcasters demand.

Concurrently, the idea of running broadcast functionality on compute hardware designed for just about any IT application was embraced by the European Broadcasting Union (EBU), which published a whitepaper to this effect.

This was a clear indication that broadcasters themselves were already contemplating a usage model that abstracts processing functionality from the hardware it runs on. A second whitepaper, released in 2024, detailed why hardware-agnostic processing was considered the future, and how it should ideally be implemented. The reason for advocating an app-based approach was that it would free vendors from the obligation to design both cutting-edge software tools and the hardware required to run them.

Keep it generic

The latter, it was felt, would slow down development cycles, raise R&D costs, and lead to hardware iterations with ever shorter lifespans amidst breathtaking advances by the IT hardware giants in IP bandwidth (800Gbps is already on the horizon), CPU/GPU number-crunching power and pricing, and ever more sophisticated features. Nor would it reduce rack sizes, carbon footprint, energy consumption, or the number of hardware devices operators habitually purchase to cater for occasional usage peaks. In a competitive broadcast environment, this is no longer sustainable. Decoupling software functionality from the hardware has been dubbed the second wave of the migration to IP.

A generic server platform that runs all processing tasks makes a lot of sense, especially when the processing functionality resides in so-called software containers and is composed of modular microservices that provide flexible input/output capacity and various compression and transport flavours (ST 2110, NDI, SRT, Dante, etc.) that can be mixed and matched. Users can start the processing apps they need for the task at hand, switch off the ones they don't need, and leverage the remaining compute capacity for additional tasks. This has a positive effect on energy consumption, as fewer CPU/GPU cores, and hence fewer hardware servers, are required.
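The start/stop logic described above can be pictured as a shared pool of compute capacity from which apps reserve and release cores. The following is a toy sketch only; the class, app names, and core counts are invented for illustration and do not reflect any actual Lawo product or API:

```python
# Toy model of a shared compute pool hosting containerised processing apps.
# All names and figures here are hypothetical.

class ComputePool:
    def __init__(self, total_cores: int):
        self.total_cores = total_cores
        self.running = {}  # app name -> cores reserved

    def free_cores(self) -> int:
        return self.total_cores - sum(self.running.values())

    def start(self, app: str, cores: int) -> bool:
        # An app only starts if enough cores remain free.
        if cores <= self.free_cores():
            self.running[app] = cores
            return True
        return False

    def stop(self, app: str) -> None:
        # Switching an app off returns its cores to the pool.
        self.running.pop(app, None)

pool = ComputePool(total_cores=32)
pool.start("multiviewer", 8)
pool.start("audio_mixer", 12)
pool.stop("multiviewer")            # freed capacity can host another task
started = pool.start("upmix", 16)   # succeeds: 20 cores were free
```

The point of the sketch is the elasticity: stopping one app immediately frees capacity for another on the same server, with no hardware change.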

Next logical step

The next logical step for this forward-looking strategy is to decouple licensing fees from the processing functionality, providing function-agnostic payment options. Ideally, operators should be able to subscribe to a pool of existing and future apps for use on premises – or indeed anywhere in the world – at a highly competitive cost. Such a service does away with the need to take out perpetual licences for apps one may only need sporadically.

A flexible subscription model does exactly what it promises: while there are unused credits in your virtual wallet, additional processing instances with differing configuration settings (several multiviewers, say), or indeed more apps offering other processing features can be started instantly, on the exact same budget. Idle time would no longer be an issue, because it would have been virtualized. And users would no longer pay for specific processing functionality as such, but rather for access to a growing pool of applications.

Same time, same place

Running all apps on the same server platform furthermore means that the long-anticipated convergence of audio and video processing is becoming a reality. Just imagine being able to power your video and audio control rooms with the exact same compute hardware for maximum agility, in any audio/video combination. A stunningly granular, containerised architecture encourages expanding or shrinking the feature set, or substituting certain aspects such as input and output formats. Adding more instances when several production tasks need to run concurrently becomes possible without installing additional hardware or paying higher licensing fees.

Does such an approach necessarily require hardware servers in a data centre? Not if you don't like the idea. Operators are free to decide where the processing apps should run, provided the apps support seamless migration. Run them in data centres, in server farms anywhere in the world, on a GPU platform such as Nvidia's Holoscan for Media, and so on. The proposed approach turns generic hardware into a commodity that is simply "there" and requires no R&D for the hardware components.

In a way, decoupling software functionality from the compute hardware sets the processing apps free and enables the convergence of audio and video processing on a single, unified platform. The apps' specific functionality seamlessly travels to whatever platform users find most convenient. Being platform-agnostic, finally, the apps always deliver their magic at full throttle.

Imagine the convenience of doing more with less in a perfectly managed production environment!
