NAB Reflections: Three things you may have missed in streaming, social media and AI
NAB 2017 seemed a bit lacklustre in terms of breakthrough innovation, particularly from startups, but three companies stood out for substantive innovations that represent broader trends, writes Brian Ring.
Delmondo: Live streaming vs TV viewership, apples-to-apples
As recently as last fall, online publishers live-streaming sports — even NFL games — could safely declare success using digital metrics like “views,” by which they usually meant cumulative views with no specified duration. That’s not a hugely useful metric, since it doesn’t account for the duration of attention captured. Worse, it doesn’t capture the size of simultaneous engagement, an essential ingredient for brand advertisers.
So is there a better metric? Sure: average-minute audience. Notably, this is also the metric reported as ‘TV viewership’ by Nielsen.
Streaming techies can log and report something like this number as ‘concurrent’ views. But aggregating and validating concurrent viewership falls apart in a fragmented digital universe, where data is reported platform by platform, each dashboard surfacing the metrics most flattering to that platform. Better monetisation of live streaming requires a trusted third party to aggregate across platforms and provide metrics on an apples-to-apples basis with TV viewership.
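To make the distinction concrete, here is a minimal sketch of how average-minute audience (AMA) could be derived from per-minute concurrent-viewer samples and aggregated across platforms. The platform names and the numbers are hypothetical, and real measurement vendors work from far messier raw data; this only illustrates the arithmetic.

```python
def average_minute_audience(concurrent_samples):
    """AMA = mean of concurrent-viewer counts sampled once per minute."""
    if not concurrent_samples:
        return 0.0
    return sum(concurrent_samples) / len(concurrent_samples)

# Hypothetical per-minute concurrent viewers reported by each platform
# for the same live stream.
per_platform = {
    "facebook": [120_000, 150_000, 140_000],
    "youtube":  [80_000, 90_000, 85_000],
}

# Cross-platform AMA: sum the concurrents minute by minute, then average.
minute_totals = [sum(minute) for minute in zip(*per_platform.values())]
total_ama = average_minute_audience(minute_totals)
print(f"Cross-platform AMA: {total_ama:,.0f}")  # prints "Cross-platform AMA: 221,667"
```

Note that a cumulative ‘views’ counter for the same stream could easily read in the millions, which is exactly why the two numbers cannot be compared with Nielsen-style TV viewership.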
Enter Delmondo, which offered a nicely packaged demo of its tool within the Facebook Live zone at NAB 2017. According to CEO Nick Cicero, the scrappy startup began life as a SnapChat agency and quickly jumped into the centre of one of the most important industry discussions: measurement of social streaming. Its tool ingests raw data from across platforms and converts it into a single average-minute-audience metric.
SAM Live Touch 4K: Social-video highlights with Quantel DNA
It seems like yesterday that Quantel had a re-dunk-ulous race car just outside the South Hall, coupled with an edit-suite/outdoor PCR/hospitality booth that conveyed its focus on fast-turnaround content creation, editing, and production workflows.
Since 2011, when SnappyTV (the foundation of Twitter’s Amplify program) had its NAB Show debut, I’ve been waiting for sellers of broadcast graphics, replay, or cloud media-asset management (MAM) and remote editing systems to seize the opportunity of real-time social-video creation. This year, the carpe diem award goes to Snell Advanced Media (SAM) and its Live Touch 4K replay system. And it seems that Quantel DNA, with its focus on content creation, is at the heart of the new SAM offering.
In a discussion during the show, Product Marketing Manager Matt Zajicek cited three key trends driving instant-replay innovation.
First, there’s the move to 4K UHD workflows. UHD videos are gargantuan files, and not all compute, software, and networking architectures work equally well at handling multiple feeds. Ask SAM for more detail about how its servers cluster and perform.
Second, the rise of social media in the TV ecosystem may have hit a tipping point. Second-screen behaviours on Facebook, YouTube, Twitter, Instagram, and SnapChat are here to stay. Thus, poorly framed, single-angle clips grabbed from the content-delivery network aren’t sufficient. Sports nets are adding production value to real-time social-video creation with multiple angles and zooms, super-slo-mo, and graphics, stats, and context. The result will be better engagement and tune-in.
The third trend is the changing nature of work, automation, and computer-enhanced productivity. With touchscreens and the ability to drive more-flexible editing and creation workflows, tomorrow’s replay operators will be asked to do more. Who knows, that may also include social-media–editing duties.
IBM Watson: Big Blue with a win in Cloud AI
It was certainly notable to see Google join Microsoft and AWS Elemental at NAB 2017, rounding out the big three in terms of public cloud. But it was surprising that one of the more interesting demos of ‘cognitive cloud’ was at the IBM booth. That’s where a real-time speech-to-text demo appeared to work well enough for closed-caption requirements.
(It’s odd to think that US government regulations are driving artificial-intelligence innovation, but it’s a fact.) Surely, this tech will also drive interesting social-media and in-stadium fan-engagement use cases. After all, closed captions comprise colour commentary and play-by-play from your favourite broadcasters.
Now, it should be said that Google, Microsoft, and AWS do have speech APIs of one sort or another. But it’s not clear which use cases those technologies are geared for. None of them showcased a real-time speech-to-text service at NAB 2017, and that reason alone made the IBM Watson demonstration notable. It was the first I’ve seen of this type of technology at an NAB Show, period.
Also notable is that the technology’s effectiveness was coupled with a clever interface, including a display of confidence levels representing the likely accuracy of Watson’s output. That seems a perfect example of how to drive progress in AI: let humans and computers work together to achieve better outcomes than a computer could deliver trying to do it all, perfectly, on its own.
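The human-in-the-loop idea behind that interface is simple enough to sketch. Assuming a speech-to-text engine that returns per-word confidence scores (as Watson’s does), a captioning UI might flag anything below a threshold for an editor’s attention. The threshold, data, and function names here are hypothetical illustrations, not the Watson API.

```python
# Words scoring below this (hypothetical) threshold get flagged for review.
REVIEW_THRESHOLD = 0.85

def flag_low_confidence(words):
    """Wrap low-confidence words in [?...?] so a human editor can spot them.

    `words` is a list of (text, confidence) pairs, confidence in [0, 1].
    """
    out = []
    for text, confidence in words:
        out.append(text if confidence >= REVIEW_THRESHOLD else f"[?{text}?]")
    return " ".join(out)

# A commentator's proper noun is exactly where an ASR engine tends to stumble.
transcript = [("touchdown", 0.97), ("for", 0.99), ("Jacksonville", 0.62)]
print(flag_low_confidence(transcript))  # prints "touchdown for [?Jacksonville?]"
```

The point of the design is that the machine does the bulk transcription while the human spends attention only where the machine admits uncertainty.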
Extracting audio from a video and turning it into text in real time has clear implications for targeted advertising, personalisation, and second-screen user experiences.