SMPTE Emerging Tech leaders outline 4K, 3D roadmaps
The creation and distribution of content in the 4K format continues to fascinate both the consumer electronics industry and the professional content-creation and -distribution community, especially those involved with digital filmmaking. But when, and how, will 4K make the move to a production format for sports and entertainment programming? Industry leaders provided a road map during the SMPTE Forum on Emerging Media Technologies suggesting that, within five years at the latest, many HD workflows will be replicated for the 4K format.
Consumer-grade 4K sets will begin hitting store shelves in 2013, and at this point it is pretty clear that digital films that have been shot in 4K will be the initial content available. But in 2014, the World Cup, much as it did with HD in 2006 and 3D in 2010, could provide a great production opportunity for the format.
The advantages of 4K over today’s 1080p sets are obvious: the 4K, or Quad HD, standard for TV offers not only more than 8 million pixels in a 16:9 format but also improved bit depth and an expanded color gamut. New display technologies will allow that color gamut to be enhanced even further. So consumers will see not only more pixels, which allow larger screen sizes above 80 inches or closer viewing of smaller 4K sets without visible pixelation, but also improved performance across a wide variety of other picture specifications.
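For readers who want to check the pixel math behind the "more than 8 million pixels" claim, a quick sketch (the `pixels` helper is illustrative, not from any standard):

```python
# Back-of-the-envelope pixel counts for the formats discussed above.
# "Quad HD" here means 3840x2160, i.e. four times the 1920x1080 raster.

def pixels(width, height):
    """Total pixel count of a raster."""
    return width * height

full_hd = pixels(1920, 1080)   # 2,073,600 pixels
quad_hd = pixels(3840, 2160)   # 8,294,400 pixels -- "more than 8 million"

print(f"1080p: {full_hd:,} px, Quad HD: {quad_hd:,} px "
      f"({quad_hd // full_hd}x)")
```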
“But there will be a choice between whether we want brighter pictures or better color gamut,” says Peter Ludé, SVP of Sony Solutions Engineering and president of SMPTE. “And some people will want to do both.”
Dr. Karl D. Schubert, CTO and SVP of Grass Valley, says that TV-sports production will be one of the first places the next-generation format offers benefits. For example, a 4K camera can allow an HD broadcaster today to zoom in for a replay and extract a Full HD-quality image.
“The viewer could see the replay without the image being pixelated, and that is one of the uses that drive the need for 4K,” he adds. Many believe that those sorts of workflows could be in place by the middle of next year.
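The replay-zoom workflow Schubert describes amounts to cutting a Full HD window out of the 4K raster. A minimal sketch of the crop geometry, with hypothetical names and no vendor API assumed:

```python
# Sketch of "zoom into a 4K frame for an HD replay": choose a 1920x1080
# window centered on a point of interest inside the 3840x2160 raster,
# clamped so the crop never falls outside the frame.
# Names and dimensions here are illustrative, not any vendor's API.

SRC_W, SRC_H = 3840, 2160    # 4K (Quad HD) source raster
CROP_W, CROP_H = 1920, 1080  # Full HD output window

def hd_window(cx, cy):
    """Top-left corner of a 1080p crop centered near (cx, cy)."""
    x = min(max(cx - CROP_W // 2, 0), SRC_W - CROP_W)
    y = min(max(cy - CROP_H // 2, 0), SRC_H - CROP_H)
    return x, y

# A replay operator pans to action near the right edge at (3000, 500);
# the window is clamped so the full 1920x1080 crop still fits.
print(hd_window(3000, 500))   # -> (1920, 0)
```

Because every pixel of the output comes straight from the sensor, the extracted replay is native 1080p rather than a digitally upscaled zoom.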
4K acquisition is one thing, storage another
While companies like Sony, Canon, and Red take a closer look at 4K acquisition systems, there is still much work to be done on storage for such data-rich files. For example, Peter Jackson is shooting 2K images in 3D at 12 bits and 48 frames per second, requiring 7.5 Gbps of bandwidth and 5.6 TB of storage for 100 minutes. And if 4K 3D images were produced instead of 2K, up to 28 TB of storage would be required for 100 minutes.
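Those figures can be reproduced with simple arithmetic, assuming 12-bit RGB samples with no chroma subsampling and a 2048×1080 raster for 2K (assumptions of this sketch, not stated in the article). The 4K result works out to roughly 23 TB, so the quoted "up to 28 TB" presumably includes container or redundancy overhead:

```python
# Reproducing the storage arithmetic quoted above: stereoscopic 2K at
# 12 bits per sample, 48 frames/s, and the 4K equivalent.
# Assumes 3 color samples per pixel (RGB, no chroma subsampling).

BITS_PER_SAMPLE = 12
SAMPLES_PER_PIXEL = 3
FPS = 48
EYES = 2  # stereoscopic 3D: left + right channel

def rate_gbps(width, height):
    """Uncompressed data rate in gigabits per second."""
    bits = width * height * SAMPLES_PER_PIXEL * BITS_PER_SAMPLE * FPS * EYES
    return bits / 1e9

def storage_tb(width, height, minutes):
    """Uncompressed storage in terabytes for a given runtime."""
    return rate_gbps(width, height) * minutes * 60 / 8 / 1000

print(f"2K 3D: {rate_gbps(2048, 1080):.1f} Gbps, "
      f"{storage_tb(2048, 1080, 100):.1f} TB per 100 min")
print(f"4K 3D: {storage_tb(4096, 2160, 100):.1f} TB per 100 min")
```

The 2K numbers land at about 7.6 Gbps and 5.7 TB, close to the quoted 7.5 Gbps and 5.6 TB.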
“That [storage requirement] is not going to go down with the ability to shoot digitally, so enterprise-class storage will be required,” says Ludé. Backblaze, for example, brings together Hitachi 3 TB drives to build 135 TB RAID 6 storage systems that cost less than $7,500.
“It’s amazing to have that amount of storage for less than $8,000, but that is for data archiving, not the performance grade for motion images,” he adds. “But what will it be like in five years?”
There are also advances in memory and I/O, such as the Fusion-io board on PCI Express, which provides 256 Gbps of throughput into the motherboard and allows files to be moved at 10 times real time. “That is quite amazing for 4K,” he says.
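A rough sanity check of that "10 times real time" figure, taking uncompressed 12-bit RGB 4K at 48 fps as the real-time baseline (an assumption of this sketch; the article does not state its baseline):

```python
# Compare the quoted 256 Gbps bus throughput against uncompressed 4K
# real-time rates. 12-bit RGB and 48 fps are assumptions carried over
# from the Peter Jackson storage example, not figures from this quote.

BUS_GBPS = 256
FRAME_BITS_4K = 4096 * 2160 * 3 * 12   # 12-bit RGB, DCI 4K raster
mono_gbps = FRAME_BITS_4K * 48 / 1e9   # single-eye 4K at 48 fps
stereo_gbps = 2 * mono_gbps            # stereoscopic (3D) 4K

# The bus outruns real time by roughly 8x (stereo) to 17x (mono),
# bracketing the quoted 10x figure.
print(f"mono: {BUS_GBPS / mono_gbps:.1f}x, "
      f"stereo: {BUS_GBPS / stereo_gbps:.1f}x real time")
```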
Next steps for 3D
Ludé also says that 3D is here to stay and that optical acquisition systems from Lytro and Fraunhofer point to the potential of, eventually, 3D acquisition systems that would not require a traditional stereoscopic rig with one camera each for the left and right channels.
“The Lytro system captures light rays from multiple angles on multiple sensors and allows the capture of an image that can be refocused after acquisition,” he says. “It’s counterintuitive to refocus an image after the fact, but it is an example of doing stereoscopic from one camera.”
Leszek Izdebski, managing director, Global Media Group, Cisco, reinforces Ludé’s vision of light-field cameras.
“All of the images can come together for a 3D reconstruction of the entire field, along with texture mapping onto the 3D objects,” he explains. “So the producer can now have a virtual camera in any place they want, not just camera 1, 2, 10, or 20. So for slow-motion or replay, the producer can go anywhere needed. And in the future, that will be something we can do for ourselves in the home.”
The engineering challenge is that the sensors require a total of nearly 200 megapixels of resolution in the Bayer pattern, but, adds Ludé, advances in CMOS sensors coupled with Moore’s Law should solve those issues within five years.
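Ludé's five-year estimate can be sanity-checked with a Moore's-Law-style doubling assumption. The 24-month doubling cadence below is illustrative, not a figure from the forum:

```python
# Hedged check of the "within five years" claim above: if sensor pixel
# counts double every N months (an assumed cadence), how far does five
# years take a present-day sensor toward the ~200 megapixels required?

DOUBLING_MONTHS = 24   # assumed Moore's-Law-style cadence
YEARS = 5
TARGET_MP = 200        # total Bayer-pattern resolution quoted above

doublings = YEARS * 12 / DOUBLING_MONTHS   # 2.5 doublings in 5 years
growth = 2 ** doublings                    # ~5.7x pixel-count growth

# With that growth, any starting point of ~35 MP or more reaches the
# 200 MP target within the five-year window.
needed_start = TARGET_MP / growth
print(f"{growth:.1f}x growth; need >= {needed_start:.0f} MP today")
```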
The Fraunhofer Heinrich Hertz Institute (HHI) Muscade research project involves a real-time encoder for MVC (Multiview Video Coding), which compresses four video streams by exploiting redundancies among the four camera views. Two cameras inside a mirror box capture the two views needed for a stereoscopic display, while two additional satellite cameras allow wide-baseline applications.
Even so, the Fraunhofer researchers add, MVD4 content acquisition remains a daunting challenge. The multicamera rig must be calibrated to a high degree of accuracy to ensure that all cameras sit on a common baseline, a geometric constraint necessary for efficient recording and subsequent estimation of disparity maps.