Olympics Q&A: NBC Olympics’ Dave Mazza on how the Peacock brings the Games to the masses
By the time the Closing Ceremony concludes on Sunday, NBC Olympics will have delivered more than 1,500 hours of coverage from Sochi — the most ever for a Winter Olympics and more than the previous two Winter Games from Vancouver (835 hours) and Torino (419 hours) combined (1,254 hours). Once again, overseeing the monumental technical and engineering effort behind the scenes is Dave Mazza, SVP/CTO, NBC Olympics. Now in his 13th Olympics with NBC, Mazza and his team supply NBC with the technology and know-how that bring the world’s highest-profile sports event to the masses.
SVG sat down with Mazza this week to discuss the Sochi Games, the future — and past — of NBC Olympics, and the technology that has allowed the Peacock Network to deliver the Games to U.S. viewers like never before.
SVG’s Ken Kerschbaumer contributed to this report. Photos by Carrie Bowden.
How do these Games compare with previous Winter Olympics in terms of challenges and obstacles overcome?
This has been our most challenging Games logistically. Not our hardest technical Games but our hardest from a logistics standpoint: getting the equipment here, getting our field shop built, finding the [right] contractors, the telecoms infrastructure, the power situation. But it has all worked out very well, and we’ve received great cooperation. Sochi came a long way in just three or four years, but it has been a long road for all of us. A lot of it has to do with the fact that they have never done huge events like this before and the people and infrastructure weren’t there. It’s there now, though. Come back in two years, and it’s going to be very different.
You have a shorter timeline — about a year and a half — between the Summer and Winter Games than between the Winter and Summer Olympics, when you have more than two years to prepare. How do you deal with that, and how much does it change your workflows as a result?
We look at it two ways. If it is a behind-the-scenes, infrastructure thing that could get only incrementally better in that short period, we generally don’t touch it. We had about 16 months between London and here, and in the middle of that we were starting up our new broadcast center in Stamford, [CT], so we didn’t really have a lot of time to do anything that we didn’t absolutely need to do. Even if a few products [were] nearing the end of their lifecycle, we would get through one more Games and [upgrade them] before Rio, because we would have more time.
If it’s visible to the viewer, though, then that’s a different story because we have to innovate along with everyone else. We worked very hard on the virtual technology this time, both with OBS and our production team. A lot of effort was spent on that. We use a combination of [Red Bee Media] Piero, Sportvision, ChyronHego; some of it comes right through [official timekeeper] Omega directly.
Then, in terms of workflow, like the MAM [media-asset management], we can’t just sit back and not advance that, so it has been advancing quite rapidly. The real boon to that is that we are now running the Stamford plant on the same system. Getting that running for Beijing and London was very difficult because it’s so complicated to stand up for just 17 days, but it’s up and running in Stamford already. In the past, we have had to build and train and integrate at home for China [in 2008] or London, but, this year, half of that equation is already up and running. Plus, a lot of our staffers didn’t need much training because they had been using it in Stamford.
That has really taken off now because the genie is out of the bottle. Guys can pull a clip from anywhere — from here, from Stamford, the system doesn’t care where it is — and send it right to one of the venues here.
Speaking of Stamford, how did your needs here in Sochi impact the design for the Stamford facility?
The whole Stamford facility was built to be able to do 50 Hz, which we have to do here. That saves us a tremendous amount of standards conversions and has allowed us to do a lot of things that we’ve never been able to do before.
For example, [for] curling, which is being primarily controlled in Stamford, there is one announce team here and one in Stamford. On the very first day of curling, the Stamford guys were calling a match we sent home to Stamford and then sent back here for the NBCSN show about an hour later. That whole round trip stays at 50 Hz; it’s J2K [JPEG2000] both ways; it takes about 450 ms. Then, the call from the curling announcers here [was being sent to Stamford] on another circuit and airing live in the curling show. So the announcers were actually in the opposite cities calling matches. That simply couldn’t happen at 30 Rock because, each time [you] sent it back and forth, you would have to convert it and the picture would look terrible.
Stamford was built so that entire control rooms and edit rooms could flip to 50 Hz. Right now, Control Rooms 2, 4, and 6 are running at 50 Hz, and Control 3 is at 60 Hz for the Premier League and NASCAR shows. So more of Stamford is actually running at 50 Hz than at 60 Hz right now.
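As a rough sketch of the math behind that round trip (the 450 ms figure and the 50 Hz rate come from the interview; the breakdown into frames is purely illustrative), the latency works out to a little over 22 frames:

```python
# Rough latency arithmetic for the Sochi-to-Stamford-and-back J2K path
# described above. Only the 450 ms round trip and the 50 Hz frame rate
# are from the interview; this per-frame view is illustrative.

FRAME_RATE_HZ = 50     # native European frame rate, kept end to end
ROUND_TRIP_S = 0.450   # quoted round-trip time, including J2K both ways

def frames_of_delay(seconds: float, rate_hz: float) -> float:
    """Convert a delay in seconds into frames at a given frame rate."""
    return seconds * rate_hz

one_way_s = ROUND_TRIP_S / 2
print(f"One-way delay: {one_way_s * 1000:.0f} ms "
      f"({frames_of_delay(one_way_s, FRAME_RATE_HZ):.1f} frames at 50 Hz)")
print(f"Round trip:    {ROUND_TRIP_S * 1000:.0f} ms "
      f"({frames_of_delay(ROUND_TRIP_S, FRAME_RATE_HZ):.1f} frames)")
```

Because the signal never leaves 50 Hz, the only cost of the round trip is this delay; there is no standards conversion degrading the picture on each pass, which is the problem Mazza describes with the 30 Rock workflow.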
The Mountain and Coastal Clusters are much closer together in Sochi than was the case in Vancouver or other recent Winter Games. Has that helped you out logistically at all?
For being the hardest Games we have done logistically, [these are] probably the most convenient in terms of location. The Coastal Cluster is unbelievable — as condensed as a Summer Games — and you can get to the mountain in 30-45 minutes; it is very different from Vancouver or Torino.
We tried some new things to connect with the mountain IBC. For the first time, we used some 1-Gb connections between the mountain IBC and [the main IBC] and used a small AT&T Media Link frame to move feeds back and forth. Two of the big mountain venues have a hardline OBS VandA [video-and-audio] feed coming down the hill. The four others were aggregated at the mountain IBC, and [the feeds] were brought down over a smaller number of circuits with J2K; that has worked well. We also have two 1-Gb [fiber lines] carrying data up to the mountain IBC, and that data is fanned out to all the mountain venues, which has enabled the MAM transfers to the venues.
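To illustrate the aggregation trade-off described above, the following sketch estimates how many compressed venue feeds could share one circuit. The per-feed J2K bitrate, circuit size, and overhead allowance are all assumptions (HD J2K contribution commonly runs on the order of 100-200 Mb/s per feed), not figures from the interview:

```python
# Back-of-the-envelope capacity check for aggregating venue feeds over
# J2K onto a shared circuit. All three constants are assumptions for
# illustration, not NBC's actual numbers.

J2K_FEED_MBPS = 150    # assumed compressed bitrate for one HD J2K feed
CIRCUIT_MBPS = 1000    # assumed 1-Gb/s circuit
OVERHEAD = 0.10        # assumed headroom reserved for transport overhead

def feeds_per_circuit(circuit_mbps: float, feed_mbps: float,
                      overhead: float) -> int:
    """How many feeds fit on one circuit after reserving overhead."""
    usable_mbps = circuit_mbps * (1 - overhead)
    return int(usable_mbps // feed_mbps)

print(feeds_per_circuit(CIRCUIT_MBPS, J2K_FEED_MBPS, OVERHEAD))  # prints 6
```

Under these assumed numbers, several venues can ride one circuit, which is why aggregating the four smaller mountain venues onto fewer J2K circuits beats running a dedicated hardline from each.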
How has your MAM system performed here by the way? And how has it evolved over the years?
It started in China and went through Vancouver, but London was where we knew we would really have to scale it up and simplify what we tried in China. In London, we were able to copy all the hi-res back home along with the lo-res proxies as they came in because the bandwidth was readily available. But, coming out of London, we knew the next three Games would be expensive-bandwidth Games: Sochi, Rio, and Korea. So we went back to our lo-res, hi-res international workflow that we used in China.
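A back-of-the-envelope comparison shows why an expensive-bandwidth Games favors the lo-res/hi-res workflow Mazza describes. The bitrates below are assumptions for illustration (none come from the interview); only the 1,500-hour coverage figure is from the article:

```python
# Illustrative transfer-volume comparison between moving full hi-res
# content home versus moving only lo-res browse proxies, as in the
# China/Sochi workflow described above. Bitrates are assumed values.

HIRES_MBPS = 100   # assumed hi-res contribution bitrate per feed
PROXY_MBPS = 2     # assumed lo-res browse-proxy bitrate
HOURS = 1500       # total Sochi coverage hours cited in the article

def transfer_gb(hours: float, mbps: float) -> float:
    """Total gigabytes required to move `hours` of content at `mbps`."""
    return hours * 3600 * mbps / 8 / 1000

print(f"Hi-res:  {transfer_gb(HOURS, HIRES_MBPS):,.0f} GB")
print(f"Proxies: {transfer_gb(HOURS, PROXY_MBPS):,.0f} GB")
```

At these assumed rates the proxies are a small fraction of the hi-res volume, which is why copying everything home worked in bandwidth-rich London but the proxy-first workflow returned for Sochi, Rio, and Korea.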
In terms of the MAM in Stamford, there is still some debugging to do, but we have it at about 75% of where we’d like it to be, and it will be that much easier to keep moving it forward in the future. The sheer number of users has grown faster than we ever thought it could, so there is a lot of debugging to do when that occurs.
Here, anyone who has Avid Interplay can easily have [content] land right in their venue system. Another new thing we have here is Interplay in a Box at five of the seven venues. We never had Interplay at the venues before, and it makes the MAM transfers a lot easier. Interplay in a Box is a stripped-down version with just one server that allows the MAM files to automatically check themselves in.
This year, we treated the venue MAM systems as an experiment. We told the production guys to bring their content on hard drives just in case the MAM doesn’t work. Thankfully, it has worked out very well so far.
As NBC relies more and more on file-based workflows and production in Stamford, has it cut down on your footprint and crew size here at the IBC?
We shrank going into Vancouver from 75,000 sq. ft. to 55,000, but the [NBC] News people were able to come into the IBC at that point and took that space, so we were right back up to 75,000. The same thing happened here: we actually shrank a bit but were able to give the News guys some more space.
So, even though we are at a similar size to Vancouver, we have the huge programming increase — with the streaming and the [NBCSN] live show 12 hours a day — but we hardly brought any more people.
How does having more live programming hours here than in Vancouver affect your operation?
We have a foot in both camps here in terms of live shows and edited shows. The [NBCSN] show and all the cable programming is live all the time, from around 3 a.m. to 3 p.m. back home until the NBC network afternoon show comes on. Then, we have the primetime show, which is much more complex. The live feed relies heavily on the world feed and fewer people, while the primetime show has all the extra cameras, replays, graphics, and other value-added content.
For example, on figure skating, we have three talent out at the venue calling the world feed, an A2, one camera operator in the mixed zone, and a producer. So there are six people working on that live feed. Then, there are probably closer to 75 people shooting and editing figure skating for primetime with a whole other set of announcers and a full-blown truck.
Will Rio have more live hours, and how will that change your operation there?
I don’t want to predict our coverage, but, with golf starting up, the 18-hole coverage will be huge. And then rugby sevens is starting up. And we have more and more programming outlets as well.
I don’t think you will see fewer cameras or virtual graphics or other accouterments that spice up the live feed with U.S. stories. But more live coverage will certainly mean shorter days and less editing, while the number of tools the producer/director in the live control room will want to use will tick up slightly. For example, you don’t need a telestrator in the booth [for delayed coverage] because you can just have one in the voiceover booth and do it in post.
I bet we will keep a similar complement to what we did in London. That means six or seven trucks, 10-12 off-tube booths in Stamford, and four off-tube booths here.
Do you foresee NBC moving more of those editing operations and other parts of the production back to Stamford?
Well, we keep moving more back there. A lot of the multi-sport, bigger shows like NBCSN or USA shows need to be in a big control room where all the feeds come in. We’ve got that working very well from home, so that will keep moving in that direction. The network shows, though, still want to have the control room and studio and anchor available on-site to interview the athletes in person. I don’t see that changing.
Since you rely so heavily on fiber connectivity now, do you see obtaining that kind of bandwidth as a challenge in Rio in 2016?
The next two Games will probably be [just as complex as Sochi]. It all depends on the distance you are traveling and the country you are coming from and going through. London was great because there is such a plethora of bandwidth across the Atlantic. That wasn’t the case in China or here, but Rio could be a bit better.
We keep learning things about long-distance connectivity and IP video. We are doing a tremendous amount of video-over-IP here, and it has been a pretty big development effort for us. But it is still a challenge to get video-over-IP back home reliably and totally glitch-free at the level we are used to with a broadcast circuit. Still, we are doing so much over IP at this point that it has become second nature, and we already could not imagine living without it.
Six years ago in Beijing, could you have ever envisioned that NBC would be delivering the thousands of hours of content that you are delivering here at the Games?
Well, let’s go all the way back to Sydney [in 2000]. I couldn’t have predicted the amount of growth we went through. All our focus at that point was on building transportable infrastructure that would last. We had three SD digital feeds coming out of Sydney, and that was a big deal. In London, we had more than 90 HD feeds. That is still an amazing trajectory to me.
Starting in Salt Lake [in 2002], because HD was just starting to get off the ground, we had one HD truck and were testing HD to a tiny audience. Broadcast devices were starting to be LAN-connected. I’m not saying you could see a clear picture at that point, but it definitely started snowballing. Then our programming demands increased from Sydney to Athens, so we invented tools to handle that. Then, from Athens to Beijing was a huge uptick because home Internet connections were fast enough to stream video and the iPhone had arrived, so that is where the digital-media side really started. That then matured in London.
If you look back and realize that the iPad didn’t even exist in Vancouver, you realize how much things have changed.
Looking forward, I think we will keep doing more and more IP-connected stuff. It still has a long way to go to be as reliable as we need it to be. There is no doubt the flexibility of all the interconnected devices is tremendously helpful, but then you have the downside of viruses and malware, which we have to account for now and make sure all outside [content] is clean before it comes into the system.
I have a complexity chart that I use during presentations. From Sydney to Athens, it went up because of the content increase; then Torino went up because of the HD [conversion]; but Beijing just skyrocketed because of streaming and [VOD] highlights and mobile phones, and it started going heavily vertical because of all the digital [content]. All those things were added with nowhere near the number of people that any broadcast show would have added. To this day, we have hundreds if not a thousand people who work on the primetime show, and then, for the 20 live streams, we have a couple dozen people. It’s just so impressive when you think about it.