Audio vendors rise to IP and NGA challenges with new audio processing platforms
Expectations placed on the latest broadcast audio processors are many and varied. Not only do they have to enable flexible IP-based operation using multiple network technologies, but they also need to accommodate current and possible future next generation audio (NGA) production.
The increasing number of broadcast and OTT platforms that are typically supported in modern workflows is another ongoing preoccupation for audio R&D teams. The extent to which audio processing infrastructures – both those provided inside consoles as standard and those available as separate and/or auxiliary systems that allow overall expansion – have evolved recently was underlined throughout NAB 2019, where most of the leading vendors had significant new developments on show.
Centre of attention for Lawo was the A_UHD Core, a network-based, software-defined audio DSP engine launched in September 2018. With a processing density of 1024 DSP channels, the A_UHD Core is designed for use with the Lawo mc2 broadcast console series.
As director of marketing and communications Andreas Hilmer explained, the A_UHD Core has been designed to allow broadcasters to cope quickly and flexibly with escalating channel counts. “Utilising the IP network as an extension of the console’s core backplane, Lawo’s UHD Core can be located anywhere on the network and can be utilised by a single mc2 console for coping with even the most challenging productions, or be shared amongst up to four consoles,” he said.
The Core accords with Lawo’s existing policy of supporting the AES67, Ravenna and ST2110-30/-31 IP audio standards. Redundancy is another abiding concern for broadcast teams everywhere, and to this end “maximum flexibility is achieved by the use of redundant networks via ST2022-7 seamless protection switching (SPS) and optional full hardware redundancy by a second hot spare unit which permanently mirrors all settings,” said Hilmer.
“For mobile productions, the scalable DSP performance with temporary licences is a great way to turn CAPEX into OPEX, whilst in facility applications the possibility of resource pooling and flexible allocation of DSP resources to multiple consoles can significantly increase the utilisation of the audio infrastructure investments.”
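The ST2022-7 seamless protection switching Hilmer mentions is conceptually simple: the same RTP stream is sent over two independent network paths, and the receiver forwards each packet exactly once, whichever path delivers it first. A simplified, illustrative Python model (class and field names are invented for the sketch, and real receivers also re-order packets and bound their duplicate-tracking window):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RtpPacket:
    seq: int        # RTP sequence number
    payload: bytes

class SeamlessMerger:
    """Toy model of the receive side of ST 2022-7: identical RTP
    streams arrive over two independent paths, and each sequence
    number is forwarded exactly once."""

    def __init__(self) -> None:
        self.seen: set[int] = set()
        self.output: list[RtpPacket] = []

    def receive(self, pkt: RtpPacket) -> None:
        if pkt.seq in self.seen:
            return              # duplicate from the redundant path: drop
        self.seen.add(pkt.seq)
        self.output.append(pkt)

# Path A loses packet 2, path B loses packet 4; the merged output
# still carries every sequence number exactly once.
merger = SeamlessMerger()
path_a = [RtpPacket(s, b"A") for s in (1, 3, 4, 5)]
path_b = [RtpPacket(s, b"B") for s in (1, 2, 3, 5)]
for pkt in path_a + path_b:
    merger.receive(pkt)
print(sorted(p.seq for p in merger.output))  # -> [1, 2, 3, 4, 5]
```

Because switching happens per packet rather than per stream, a loss on either path is invisible at the output – hence “seamless”.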
Noting that immersive audio has been “on our priority list for years as the native counterpart of 4K video”, Hilmer confirmed that Lawo maintains an agnostic approach to surround sound formats. “All Lawo broadcast production consoles provide appropriate immersive sound control elements – e.g. a surround positioning stick and a dedicated 3D Z-axis controller – whilst on the monitoring and processing side our desks are designed to cope with the multi-channel demands of immersive productions,” he said, adding that the mc2 desks “provide support for any immersive format – making it a secure investment despite the current dynamics in immersive audio formats.”
Inside the ImPulse core
NAB 2019 also witnessed a significant presence by one of Lawo’s primary peers, Calrec, which showcased the new ImPulse core audio processing and routing engine with AES67 and SMPTE ST2110 connectivity. Dave Letson, vice-president of sales for Calrec, explained that ImPulse is “compatible with current Apollo and Artemis control surfaces, providing a simple upgrade path for existing Calrec customers. In addition, future scalable expansion will allow up to four DSP mix engines and control systems to run independently on a single core at the same time, providing the ability to use multiple large format mixers simultaneously in an extremely cost-effective and compact footprint.”
In line with Letson’s observation that immersive audio is now “a major consideration in the television industry [alongside] IP”, ImPulse provides “3D immersive path widths and panning for next generation audio applications, and has an integral AoIP router, which fully supports NMOS discovery and connection management, as well as mDNS/Ravenna discovery.”
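The NMOS discovery Letson refers to is defined by AMWA IS-04, under which devices register their flows and senders with a registry that controllers query over HTTP. A minimal sketch of the join a controller performs to find audio senders – the registry snapshot below is hand-written for illustration (IDs and labels are invented), standing in for responses from the IS-04 Query API:

```python
import json

# Illustrative snapshot of what an NMOS IS-04 Query API might return
# from GET /x-nmos/query/v1.3/flows and .../senders.
flows_json = json.dumps([
    {"id": "f-audio-1", "format": "urn:x-nmos:format:audio", "label": "Mix bus L/R"},
    {"id": "f-video-1", "format": "urn:x-nmos:format:video", "label": "Cam 1"},
])
senders_json = json.dumps([
    {"id": "s-1", "flow_id": "f-audio-1", "label": "Audio sender 1",
     "transport": "urn:x-nmos:transport:rtp.mcast"},
    {"id": "s-2", "flow_id": "f-video-1", "label": "Video sender 1",
     "transport": "urn:x-nmos:transport:rtp.mcast"},
])

def audio_senders(flows_json: str, senders_json: str) -> list[str]:
    """Join senders to their flows and keep only audio essences."""
    flows = {f["id"]: f for f in json.loads(flows_json)}
    return [s["label"] for s in json.loads(senders_json)
            if flows[s["flow_id"]]["format"] == "urn:x-nmos:format:audio"]

print(audio_senders(flows_json, senders_json))  # -> ['Audio sender 1']
```

In IS-04 the media format lives on the flow rather than the sender, which is why the controller must perform this flow-to-sender join before offering routable audio sources to the operator.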
The pace and nature of transition to immersive audio production is likely to vary considerably between broadcasters – hence a desire with ImPulse and other current developments to allow customers “to upgrade to this technology when the timing is right for them”. But the company is in no doubt about the potential extra burden accompanying multi-channel mixing.
“Immersive audio for delivery over platforms like MPEG-H and Dolby Atmos, along with OTT content, increases the number of channels that need to be processed for a given production, as well as increasing the complexity of that processing,” said Letson. To this end, the ImPulse core utilises Bluefin3 DSP to “ensure that plenty of processing headroom is available for the biggest of productions. It is hugely scalable and suitable for medium to very large-scale productions, with a flexible upgrade path and expansion capacity for whatever the future may demand.”
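The channel-count growth Letson describes is easy to quantify with the “bed.LFE.height” naming convention used for immersive formats. A rough illustrative calculation – the format list and the path count are assumptions for the example, not figures from Calrec:

```python
# Channels per audio path for common delivery formats, using the
# "bed.LFE.height" convention (e.g. 5.1.4 = 5 bed + 1 LFE + 4 height).
def channels(fmt: str) -> int:
    bed, lfe, *height = (int(p) for p in fmt.split("."))
    return bed + lfe + (height[0] if height else 0)

paths = 96  # hypothetical number of input paths in a large production
for fmt in ("2.0", "5.1", "5.1.4", "7.1.4"):
    print(f"{fmt}: {channels(fmt)} ch/path -> {paths * channels(fmt)} DSP legs")
```

Moving a production from 5.1 to 7.1.4 thus doubles the per-path channel count, before any parallel OTT or MPEG-H/Atmos deliverables are added – which is the headroom argument behind DSP engines such as Bluefin3.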
Outlining OCP technology
Tom Knowles, SSL broadcast product manager, also confirmed the profound influence of IP and immersive audio, noting that the company’s latest broadcast platform, System T, was developed with these technologies “in mind from the start”. At the heart of System T is the Tempest Processing Engine, which operates on multi-core CPU devices and includes a Real Time Operating System (RTOS) as well as SSL’s patented OCP (Optimal Core Processing) technology.
Knowles explained: “OCP guarantees real-time and deterministic allocation of resources across the CPU, enabling multiple 64-bit floating-point operations with high precision and ultra-low latency… a single sample per processing block. Uniquely, processing and mixing are all done inside the cores – no additional FPGA or DSP is required, reducing buffering and decreasing latency. All of this provides an extremely stable and flexible DSP-like architecture, with capacities only previously possible with FPGAs. CPUs running RTOS with SSL’s OCP combine both these technology specific attributes, with complete transparency to the console operator.”
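To put Knowles’ “single sample per processing block” in context: per-block latency is simply block size divided by sample rate. A quick illustrative calculation – the comparison block sizes are typical of general-purpose audio software and are chosen here purely for contrast, not quoted from SSL:

```python
# Back-of-envelope per-block latency: block_size / sample_rate.
SAMPLE_RATE = 48_000  # Hz, the common broadcast sample rate

def block_latency_us(block_size: int, rate: int = SAMPLE_RATE) -> float:
    """Latency contributed by one processing block, in microseconds."""
    return block_size / rate * 1_000_000

for block in (1, 32, 256):
    print(f"{block:>3}-sample block: {block_latency_us(block):8.1f} µs")
```

At 48 kHz a single-sample block contributes roughly 21 µs per pass, versus over 5 ms for a 256-sample block – the difference between imperceptible and clearly audible delay once several processing stages are chained.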
Like most vendors, SSL notes that the overall adoption of IP networking will continue to owe much to open standards that ensure “the potential widest future interoperability”. But SSL’s “primary concern” is the use of IP technology shaped by “the application, specific usage case and required functionality of a system. The market share of AoIP technology stacks is also an important factor. [Hence], at this point on the standards adoption curve for audio, the use of licensed AoIP technology stacks provides the widest guaranteed interoperability and greatest functionality.”
Specifically, System T utilises Audinate’s Dante technology stack, with the Dante API managing audio routing of SSL Network I/O and more than 1600 third-party AoIP products directly from the console GUI, including automatic discovery. The same hardware interfaces on Tempest engines and Network I/O devices simultaneously support Dante alongside the AES67 and ST2110-30 standards.
In terms of immersive audio, System T supports multiple formats for all channels and bus paths in the console. More generally, SSL’s recent participation in an EBU trial for NGA mixing to both AC-4 and MPEG-H technologies using an S300-32 compact broadcast console is indicative of how immersive production may develop over the next few years.
Knowles noted that “all the SSL equipment at this event was running released code. It was a trial in the sense of workflow and human experience, rather than a trial of technology for SSL.
“This sums up the challenges, really, in that they are as much about what the broadcasters actually want to do and how mix engineers might need to adapt as they are about the technology. Trials like the [EBU] one in Berlin are a good way to move this forward.”
It is therefore evident that the primary design motivations affecting console and processing design are – for now – relatively uniform across the professional audio community. The fact that vendors are enabling customers to adopt IP and immersive audio in a highly flexible manner is to their credit, not least as workflow requirements will surely evolve further in what is a hugely significant transitional phase in the history of broadcast audio.