Whitepaper: Changing the Game – 3ality Technica
Bettina Martin, 3D technical supervisor at 3ality Technica, considers the benefits, challenges and workflow of live stereoscopic 3D broadcasting and how it will bring fans closer to the world’s greatest games. [A good, basic, stereo 3D 101 – Ed]
Whether watched in a London pub crammed with cheering fans on match day, or at home with family and friends, televised sports are solid mainstream entertainment and a critical component of the broadcast business. The recent transition from analogue to digitally-enabled HDTV has contributed to the growing popularity of sports viewership worldwide. Close on its tail, we are now moving into the “next big thing” in televised live sports: HD3DTV, or S3D (stereoscopic 3D) for short.
Anyone who has seen one of the recent broadcasts from BSkyB or ESPN understands how compelling and immersive S3D can be. Moreover, there are now so many networks, consumer electronics companies, and vendors supplying the S3D business that the genie cannot be put back into the bottle. S3D TV is here to stay. When it comes to critical mass, it is no longer a question of if, but of when.
What is missing from the equation is content. While there are some events already heading to air, there is not yet enough variety of content to appeal to a mainstream audience. Interestingly enough, the current generation of technologies is now good enough to launch S3D television, but without this content there is no solid business case to convince broadcasters and rights holders to make the leap. As with the introduction of many new technologies, it’s a chicken-and-egg scenario. The business case will only work when there is a large installed base of 3D-ready televisions. Yet the growth of this installed base is reliant on a steady supply of mainstream content: shows that the audience wants to watch week after week.
The good news is that in the near future, almost every large, flat-screen television sold will be 3D-enabled. Estimates from one major manufacturer are that 80% of the large flat screens it sells next year will be 3D-enabled sets. Eventually, there will be enough of a base to make the existing revenue models of television work for 3D broadcasting. However, between now and then, the idea of immediate profitability should be shelved for a few years, and broadcasters should be satisfied with finding a way to cover the costs.
Some broadcasters, such as BSkyB, are already embracing S3D from a production standpoint. Integrating S3D systems into the existing infrastructure of a 2D OB unit is becoming less expensive and more efficient every week. Similarities in the workflow have made the changeover of a 2D OB unit into an S3D unit, even for a one-off broadcast, a streamlined and fairly quick process. These similarities have also enabled fast setup and the ability to shoot S3D on a traditional 2D schedule. However, some augmentations are still required for shooting live S3D events.
As with live 2D broadcasts, images flow in real time from the cameras, through the OB, and are immediately transmitted, usually through an uplink to a waiting satellite. All systems in the pipeline, such as shading, replay, graphics, compositing, and switching, have to work in complete synchronisation. The broadcast of S3D images is no different in this need for synchronisation, although some adjustments have to be made to the signal flow.
Stereoscopic images are created through the use of an S3D system. There is a lot of confusion in the market, where the common belief is that all you need to create S3D images is a rig that holds two cameras. While it’s true that S3D images can be captured solely through the use of a rig, these are not images that will be suitable for immediate transmission, nor is a rig on its own suited to working in real time or at a speed to match 2D shooting. A rig alone cannot be integrated into an OB unit. It takes a complete system to make live S3D broadcasting work.
A rig, however, is at the front end of the system. A rig consists of a motorised or robotic platform that holds two cameras, and rigs come in two flavours. The most common holds the two cameras at right angles, with both lenses pointed at or near the same spot on a half-silvered mirror; this mirror rig is generally referred to as a beamsplitter. The other is a side-by-side rig, which holds the two cameras in the configuration suggested by the name.
The robotics come into play for many reasons. In order to hold two cameras so that the images match pixel for pixel, the positions of these cameras need to change constantly in real time, depending on the focal length of the shots and how well the lenses match. Assuming that, unaided, no two lenses will ever match one another perfectly (although some may come close), mechanical or digital help is required to achieve the match needed for a comfortable viewing experience. To reach the level of accuracy necessary for a good S3D broadcast, there must be either positional changes to the cameras or, as a second-best option, the ability to manipulate the images in real time in a DSP environment behind the cameras. Furthermore, to create images that are comfortable to view, the positions of the two cameras with regard to the baseline distance (interaxial) and the angle of view (convergence) need to be adjusted in real time in accordance with the subject-to-camera distance, which in sports broadcasting changes rapidly and dynamically.
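The effect of these real-time interaxial and convergence adjustments can be approximated with simple geometry. The sketch below (Python; the function and parameter names are illustrative, not taken from any 3ality product) uses the common small-angle pinhole approximation for converged cameras: a point at the convergence distance lands on the screen plane, nearer points come forward, farther points recede.

```python
def sensor_parallax_mm(focal_mm, interaxial_mm, conv_dist_mm, subject_dist_mm):
    """Approximate horizontal parallax on the sensor, in millimetres,
    for a point at subject_dist_mm when the cameras are converged at
    conv_dist_mm.  Small-angle pinhole approximation:

        d ~= f * b * (1/C - 1/Z)

    d == 0 -> point appears on the screen plane
    d >  0 -> point appears behind the screen
    d <  0 -> point appears in front of the screen
    """
    return focal_mm * interaxial_mm * (1.0 / conv_dist_mm - 1.0 / subject_dist_mm)
```

With a 65 mm interaxial and convergence set at 10 m, a player at 30 m produces a small positive parallax (behind the screen) while one at 5 m produces negative parallax (in front); widening the interaxial scales both, which is why the two controls have to be adjusted together as the action moves.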
Behind the cameras, and usually in the OB truck (in some cases in a “B” unit), is another major component, usually referred to as an image processor, or stereo image processor. The device developed by 3ality is known as a SIP, for Stereo Image Processor. The most basic of the SIP’s functions are multiple display options, the ability to manipulate images on the X and Y axes, depth indicators, sync monitoring, alignment and geometry monitoring, metadata creation, and the ability to align cameras remotely, plus a few dozen other useful applications.
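As a toy illustration of one of these functions, vertical (Y-axis) misalignment between the eyes can be estimated by searching for the row shift that best matches the two images. This sketch (NumPy; the names are mine, and a real SIP works sub-pixel and in real time, which this does not) shows the basic idea:

```python
import numpy as np

def estimate_vertical_offset(left, right, max_shift=8):
    """Estimate vertical misalignment between two views by testing
    integer row shifts of `right` and keeping the one with the lowest
    mean absolute difference against `left`.  Returns the shift (in
    rows) to apply to `right` so it aligns with `left`.  The top and
    bottom max_shift rows are cropped out of the comparison so the
    wraparound rows introduced by np.roll cannot bias the result."""
    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(right, s, axis=0)
        err = np.abs(left[max_shift:-max_shift] - shifted[max_shift:-max_shift]).mean()
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift
```

In practice a SIP would then either drive the rig's motors or digitally translate one eye's image by the detected amount, since even a fraction of a degree of vertical disparity is fatiguing to view.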
Production crews for live S3D broadcasting consist of all the standard people associated with a field production, plus three other roles. The first addition is the stereographer. Whether the production uses manual or fully automatic 3D systems, the stereographer is responsible for the overall depth, and the consistency of depth between cameras, throughout the broadcast. The stereographer also works with the graphics team to ensure that the placement of all graphics on the Z axis will integrate well with the 3D picture. In essence, the stereographer is responsible for all things depth-related.
The next addition is the 3D engineer. This person is primarily responsible for the operation of the SIP during the broadcast, managing SIP functions, including camera alignment, throughout the show, as discussed earlier. Additionally, they work with the OB engineering staff to help define and implement signal flow and sync.
The final addition is the rig tech/convergence puller. This is a combined role, and there is generally one per camera. “Convergence puller” is the popular job title, but the job entails more than the name suggests. The convergence puller not only adjusts the convergence distance in real time during moving shots, but also controls the more important parameter of interaxial distance. Interaxial distance and convergence angle are the two parameters that together control the perceived depth of the picture.
One of the budgetary issues with S3D broadcasts is the number of convergence operators required on a per-show basis: if there are 15 cameras, there is a need for 15 convergence operators. However, this is about to change. New software is coming onto the market that will automatically handle convergence and interaxial distance adjustment in real time throughout the show. The software, called Intellecam from 3ality Technica, is already being successfully tested at beta sites. The role of convergence puller is a technical one, with the creative side of depth being the domain of the stereographer. So, although keeping depth and the subject consistent on a shot-to-shot basis is currently the role of the convergence puller, this task can be fulfilled by image processing, and tests have shown that this provides more consistent image matching than human operators. The other half of the convergence puller’s job, that of rig tech, can now be taken over by the operators who are accustomed to setting up their own cameras for 2D broadcasts.
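Whatever the product, the core geometry any automated convergence system must solve is straightforward: given the current subject distance (for example, read from the focus axis) and the chosen interaxial, compute the toe-in that places the subject on the screen plane. A minimal sketch follows (Python; nothing here is taken from Intellecam itself):

```python
import math

def toe_in_deg(interaxial_mm, subject_dist_m):
    """Total convergence angle, in degrees, that places a subject at
    subject_dist_m on the screen plane: each camera rotates inward by
    atan((b/2) / C) about its optical centre, so the total toe-in is
    twice that half-angle."""
    half_angle = math.atan((interaxial_mm / 2.0 / 1000.0) / subject_dist_m)
    return math.degrees(2.0 * half_angle)
```

At a 65 mm interaxial, a subject 10 m away needs a total toe-in of roughly 0.37 degrees; as the subject approaches, the angle grows. Recomputing this continuously as the focus distance changes is exactly the adjustment the convergence puller, or the software replacing that half of the job, performs.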
Most live sports productions require zoom lenses. The 3D rig should be capable of supporting these, and the 3D system should be able to correct the differences in the optical centres of the camera pair throughout the zoom range, as well as field-of-view differences between the eyes. Matching the field of view and optical centre in 3D is part of the setup procedure of the stereo system.

Surprisingly for some 2D producers, live 3D sports broadcasts require less cutting than 2D shoots. 3D encourages the viewer to explore the depth of a shot and to let the eyes wander. In sports, the depth of the shot provides another way to bring the viewer into the game, and it is certainly better at conveying the use of space in a game. The movement of a football team up or down the field, or the lie of a green on an important putt, is a story best told in S3D.
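The field-of-view matching mentioned above is, at its simplest, a question of how well the two zooms track one another. A rough sketch (Python, simple pinhole model; the function names are mine, not part of any vendor's setup procedure):

```python
import math

def horizontal_fov_deg(sensor_width_mm, focal_mm):
    """Horizontal field of view of an ideal pinhole camera, in degrees."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_mm)))

def eye_size_mismatch_pct(focal_left_mm, focal_right_mm):
    """Relative image-size mismatch between the eyes when the two zooms
    do not track perfectly; the wider eye's image can be digitally
    scaled up by roughly this percentage to restore the match."""
    return abs(focal_left_mm / focal_right_mm - 1.0) * 100.0
```

Even a one percent size mismatch between the eyes is visible in a stereo pair, which is why this matching is verified across the whole zoom range during setup rather than at a single focal length.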
Soon, the streamlining of live S3D sports broadcasts and the successful merging of these with 2D trucks will encourage more companies to address viewer demand and explore the possibilities of 3D. The negative misperception of high costs and long setup times is quickly becoming a thing of the past. Adding a new dimension to sports productions will bring fans even closer to the games they love.