Written by: Jim Tanenbaum, CAS
In August of 2007, I got a fateful call from production sound mixer William B. Kaplan. Bill had been recording a feature for James Cameron, but had left because of a previous commitment. The show had not only gone on far longer than he expected, but sound was not required every day. I had heard rumors of this project, but knew nothing about it other than it was a motion-capture CGI sci-fi film.
Cameron had come up with the idea for Avatar about 15 years ago, but he was too far ahead of the technology at that time. He had to wait a decade for motion-capture to develop to the Beowulf level before embarking on his project. He added four new features to that system: (1) real-time animation display; (2) video facial capture; (3) the “virtual camera”; and (4) the Simulcam; all to be discussed later. Cameron’s project was so advanced that he wasn’t just “pushing the edge of the envelope” — he was pulling on it from the outside.
Since I had never done a motion-capture (MoCap) show, I jumped at the chance. (I also benefitted from the equipment deal Bill had negotiated: a 4-day week with a 2-day guarantee on rental even if no shooting was done that week.) And Bill’s son, Jessie, came along to boom; he had previously worked on the show with his father, so at least he wouldn’t have a learning curve.
However, I had heard horror stories from others about working with Cameron. And I had spent one day many years ago as an unpaid timecode consultant for a friend of mine (Austin McKinney) who was recording sound on Cameron’s Terminator 3-D for a Universal Studios Florida theme park ride. Because of multitudinous problems, Cameron had not been particularly happy that day, and let people know it. (I discovered that the TC issues were not caused by Austin, thank God.) On the other hand, Austin told me that he had enjoyed working with Cameron on many of his earlier films (from Battle Beyond the Stars to Terminator), as, among other things, a designer and builder of camera/optical equipment. His good experiences convinced me to take the job. (After the Terminator 3-D shoot, however, Austin and Cameron had lost touch.)
Because of the incredible complexity of the Avatar infrastructure, Bill brought me in ahead of time to observe Art Rochester, who was currently mixing the show but also needed to leave for another job. The audio workflow was extensive — recording 8 tracks in digital (a mix and seven isos), and sending the eight channels all over the place in analog: Metacorder, Avid, some unnamed XLRs… The mix also went to the computer that was used for on-set recording of video and audio for immediate playback to the director (which, fortunately, I didn’t have to run), and several other mysterious computers.
But I found it difficult to pay complete attention to the sound setup because of all the fascinating things that were happening on the set. Cameron’s real-time animation system had large flat-panel monitors scattered around the stage that were continuously displaying ten-foot-tall blue aliens with tails running around a weird rainforest or flying around on fantastic creatures that avoided the clichéd look of a pterodactyl or a Sword-&-Sorcery fantasy “dragon”. I grudgingly tore my eyes away from them long enough to take copious notes, and Bill provided me with some workflow charts and wiring diagrams.
Then it was time to meet Cameron. So far, I hadn’t seen any of his legendary outbursts. Bill introduced me. Cameron was polite enough, but I could see that he was distracted by needing to think about five or ten other things at the same time, and I wondered if he would consider me just another plug-in module. Normally, this would be fine with me, as I prefer directors who leave me alone to do my job and use my own artistic talents and judgment, but I had been told that Cameron gets involved with every aspect of his films, and I could expect considerable interaction with him. I decided to play my ace in the hole. (Having spent 40 years in the business, I had learned a few things.)
After the introduction, I said: “By the way, Austin McKinney asked me to say hello.” Cameron’s face lit up. “You know Austin?! Tell him to come down here — he’s got to see this place.” My 40-year friendship with Austin immediately cranked me up a notch or two in Cameron’s eyes, and helped to establish a more personal connection.
On the day before I was to start recording sound, I had my cart and gear trucked down to the Playa Vista MoCap stage (actually the second-largest of the buildings at the old Hughes Aircraft facility) and proceeded to set up. Bill’s regular boom operator, Tommy Giordano, had agreed to come in and help me, as he was familiar with the rigging.
My cart has a Deva 5.8 recorder and a Cooper 106+1 mixer (with the stereo module as the seventh and eighth channels, using the L and R trim pots as faders). I have eight Zaxcom Stereoline radio mikes, and since I wasn’t using their stereo function on this show, I sent the receivers’ left outs to the Cooper, and the rights directly to the Deva line ins, thus completely bypassing the Cooper for the isos on Deva Tracks 2 through 8. This protected the Deva isos from any accidental overloads in the mix panel as well as avoiding the added circuit noise. Because of their tremendous dynamic range, once the −20 dB tone from the Zaxcom receivers was used to set the channel gains on the Deva, no further adjustment was needed on my end. Deva Track 1 was the mix from the Cooper.
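For those who like to see signal flow written out, here is a minimal sketch, in Python (the scripting language that later turned up on my laptop anyway), of the track layout just described. The names are mine, invented for illustration, not anything from the production:

```python
# A hypothetical sketch of the cart routing described above; names and
# receiver counts are illustrative only.

deva_tracks = {1: "Cooper 106+1 mix bus (fed by the receivers' left outs)"}

# The right outputs of up to seven receivers went straight to the Deva
# line inputs, bypassing the Cooper entirely: this protected the isos
# from mix-panel overloads and avoided added circuit noise.
for rx in range(1, 8):                       # receivers in use on a given day
    deva_tracks[rx + 1] = f"Zaxcom RX {rx} right out (iso)"

for track, source in sorted(deva_tracks.items()):
    print(f"Deva track {track}: {source}")
```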
The MoCap (because it is so advanced, Cameron calls his system “performance-capture”, but I will continue to use “MoCap” when referring to just the part of the system that reacts to the reflective markers) and live-action systems both used 23.98 fps timecode, which came from a house generator. I fed this TC to the Deva. I recorded 8 (or sometimes only 4) tracks at 48 kHz, 24-bit, and turned in DVD-RAMs with BWF-P, FAT32 files. This was the only thing “ordinary” about Avatar.
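As it happens, that pairing of rates is arithmetically tidy: “23.98” is really 24000/1001 frames per second, so at 48 kHz each timecode frame spans a whole number of audio samples. A throwaway check, not anything from the show’s toolchain:

```python
from fractions import Fraction

fps = Fraction(24000, 1001)      # "23.98" (23.976...) frames per second
sample_rate = 48000              # 48 kHz, 24-bit BWF files

samples_per_frame = Fraction(sample_rate, 1) / fps
print(samples_per_frame)         # 2002 exactly: no fractional-sample drift
```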
For many years, my cart has been a “cable-free” zone — it runs completely on batteries; I have my own video transmitters to feed my monitors from the video-assist system; and I use Zaxcom radio mikes for everything: booms, plants, picture cars, and 2-channel hops to video assist and/or video cameras with the stereo E.N.G. receivers. No more worries about buzz from H.M.I. power cables near my mike cables (especially on camera-car tow shots). The Stereoline transmitters are perfect for plants, because I can get two plant mikes with just one radio-mike channel. And for METHOD! actors, I can use two lavs, and set the transmitter gains low on one input and high on the other.
However, I was not able to dispense with cables on Avatar, because of the multitude of output channels I had to provide. And I wasn’t sure that the analog audio distribution system on the stage had the dynamic range of the Deva recorder — I was worried about overloading it with unattended isos. Therefore, I chose to use the Cooper’s channel PFL outs, as Art had done previously. He very generously left behind his old Sonosax mixer, which was needed to raise the unbalanced −11 dB isos from my Cooper to +4 dB to feed into the stage system.
I have three different Comtek systems. Normally I use my BST-25 transmitter and twelve PR-25 receivers for director, script, producer, producer’s mistress, writer, 2nd A.D., etc., and my M-216-P7 to transmit to a couple of PR-216s for my boom operators. I also have a BST-216 high-power transmitter for my MicroEar earwigs, or if my boom operators are at extreme ranges, or if they’re double-booming and need a second channel to hear only their own mike. (I have modified my Cooper to give me PLs to either the Monitor A or Monitor B outs.) On Avatar, however, the requirements were different, and occasionally changed, with (and without) warning. I was told that there would be times when a number of actors would require music playback over headsets. Since “a number” wasn’t specified, I elected to establish the use of the PR-216 receivers (of which I had only six) for the director, script, dialog coach, two other tech people involved with the audio, and Paul Frommer (the creator of the Na’vi language) when he showed up. The M-216 transmitter was fed an audio return from the computer that served as the on-set recorder so everyone could hear audio with playback as well as direct. (I sent just the mix to that unit.) I reserved the twelve PR-25s for playback.
When music playback wasn’t required, which was most of the time, my boom operator used one of the PR-25s; when it was, he could use one of the PR-216s if Paul or the dialog coach weren’t there, albeit without any PL from me or the ability to hear his mike during playback. Otherwise, I had a “B-unit”: a “Listen” brand transmitter and three receivers, also on the 216 MHz band. The quality wasn’t as high as the Comteks, but at least the boom operator had PL and a continuous feed from me.
It took a long time to get all the tracks to all the destinations at the correct levels without any hum or AC buzzes, but I finally integrated my cart into the system. (If I have any input for the infrastructure of “Avatar 2 & 3”, the audio will be distributed digitally.) Then I started on the MoCap suits, and Murphy’s Law made its first appearance, as was to be expected. The performance-capture actors wore snugly-fitted helmets which sported a CCD-chip video camera mounted on a short boom in front of their faces. A Sanken COS-11 had been attached to this boom so the actors would always be on-mike regardless of which way they turned their head. Since the mikes were permanently mounted on the helmets, they were wired to a male TA panel connector mounted to a metal angle bracket in back so the cable from the radio mike transmitter could be easily unplugged when the helmet was removed.
I saw all this on my earlier tour day, and had made up adapter cables for my Zaxcom transmitters. I connected them and heard audio from the mikes. Unfortunately, I also heard the buzz from the Zaxcom transmitter’s digital RF — the connector brackets were open on the back, and the unshielded wiring was picking up the hash. My attempts at improvising a shield with aluminum foil were unsuccessful, and there was no hope of making new, fully-shielded brackets overnight. I called Coffey Sound, with whom production had an account, and rented every Sanken lav they had. (I had made 16 adapter cables, because many actors needed to be wired up at once even though they wouldn’t all work in a given scene. And some, of course, for spares.) I mounted the rented mikes next to the built-in ones, and coiled up their cables at the rear of the helmets, with the plug hanging down on a short length of cable.
Murphy doesn’t give up easily. Some of the rented Sankens had their plugs wired differently, and didn’t work with my adapters. But Murphy’s Law fails just often enough that you can’t trust it, and there were enough working mikes for the first week of shooting. Another failure for Murphy: the company paid for the rental on the duplicate mikes instead of taking it out of my box rental.
The last piece of sound gear I was responsible for providing was a “Voice of God” PA system. Bill told me to bring in my Anchor speaker-amp and a hard-wired hand mike on a stand, because that was what he had used. I did, but now Cameron wanted a wireless rig. I broke out one of my backup Vega VHF units, and gave him a clip-on lapel mike with a PTT switch. He didn’t like this setup, but contrary to the stories I’d heard, he didn’t shoot the offending gear with a .357 Magnum or throw it overboard. He simply asked for a head-mounted boom mike. I provided a combination Comtek headset and boom mike for the Vega, but now the recessed power switch on the transmitter was annoying, as was the “pop” from the speaker when it was used. Still nervous, I offered to send a P.A. out to buy whatever he wanted on the spot, but he said he would use what I had — as long as I fixed things before he needed the “God Mike” again. I got a Countryman E6 the next day, and connected it to another one of my backups, an A.K.G. UHF unit whose transmitter had an easily-manipulated mute switch on the upper surface. This sufficed for the run of the show, although at some point Cameron took to hooking the E6 in the neck of his shirt instead of over his ear.
I discovered that all the downconverted NTSC video feeds on the set were spoken for, and my three cart monitors (2 permanent and 1 add-on) couldn’t display Hi-Def. Not wanting to rebuild my rig during the shoot, I planned to buy several AJA downconverters, until I found out that they were $2,000 apiece (in 2007). I settled for one. Having my own cart monitor to see the animation in 1080p up close (even with its limited rendering) was well worth the price. An added bonus was that the AJA unit has a “focus” switch that expands the center third of the image to fill the full screen. It was absolutely amazing how much detail was in the CGI, even in the crude form that was used for the real-time animation.
Although the “stage” was lined with sound absorbing material, the sheet metal walls and roof admitted more than enough noise to punch through it, helped out by multiple air conditioning ducts (flexible cloth tubes) running through open doorways and holes cut in the walls. Bill had previously ordered that the outside sections of the ducts be covered with sound battens (Insul-Quilt or something similar), so at least the higher frequencies were attenuated. Of course, the “stage” was situated in the middle of a huge construction project, complete with pile drivers and many-storey-high swing-arm cranes. We were out of LAX’s flight path, but there were more than enough light planes and helicopters to make up for it.
As if exterior noise sources weren’t enough, we had hundreds of computers and monitors on the set with individual cooling fans. Again, Bill had previously arranged for some plywood baffle walls to be built, but since his tenure the computers had proliferated and spilled out from behind them. And then there were the elevated floor modules made out of plywood. I carry a dozen 2’ x 5’ carpet strips (6 grey and 6 black) on my follow cart — sometimes Cameron would let me put them down, and sometimes not.
Initially, I tried to boom everything in addition to the helmet mikes, using a Sennheiser MKH-40 or -50. There is an amazing freedom in MoCap: the boom could be placed almost anywhere I wanted it, so long as it did not block a reference camera. No worries about shadows or the pole cutting the corner of the frame. I did have to wrap flat-black paper tape on the shiny clutch nuts where the anodizing had worn off; they were reflecting light back into the MoCap cameras. I later acquired a black cloth sleeve to slip over the fishpole to cover its entire length.
Murphy quickly took care of my booming ideas. The MoCap “stage” was simply too noisy for the boom mike to reach more than two or three feet. Trying a MKH-416 didn’t improve things. With only one boom operator, I had to rely on the helmet mikes for most dialog scenes, though I did use the boom solo during stunt shots in order to get some audio and avoid silent footage that lacks the “energy” appropriate for action scenes.
Nothing was “normal” on Avatar, not even scene numbers. No getting a simple “Scene 195A — Take 2” from the script supervisor (Luca Kouimelis for most of the performance-capture work) — the scene numbers were as complex as all the other elements. They had to contain both the “conventional” scene and take numbers, and information describing the particular CGI environment involved, and which version (with multiple parameters) of it was being used. 195A_tk_00E_002_Z1_pc001_0A01_VC_Av001_LE was typical. Initially, Luca would take the time to give me the part of these long strings of numbers and letters that I needed for my sound metadata as soon as she got them from the data wranglers, but then the IT Department was able to install a Python client on my laptop that downloaded the scene numbers in real time from Lightstorm’s Ethernet so I could enter them into the Deva directly, without having to bother her. I had an external keyboard, but I found the Deva’s touchscreen so convenient that I didn’t need to use the keyboard. I would enter the number above as “Scene” 195A_00E_002, and “Take” 2. The Deva can automatically increment the take number, but since the last digit of the scene number was also the take number, it had to be advanced manually. Fortunately, the Deva has a dedicated “INCREMENT SCENE NUMBER” touchscreen button as well as an “INCREMENT TAKE NUMBER” button.
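For the curious, here is a hypothetical sketch of the kind of string-splitting involved in getting from the production string to my Deva fields. The field layout is inferred solely from the example above, not from any real Lightstorm spec:

```python
# A hypothetical parser for the long capture IDs described above.
# Field positions are guessed from the single example given.

def parse_capture_id(capture_id: str) -> tuple[str, int]:
    """Return the (scene, take) I would enter into the Deva from a
    string like '195A_tk_00E_002_Z1_pc001_0A01_VC_Av001_LE'."""
    parts = capture_id.split("_")
    scene_num, env, version = parts[0], parts[2], parts[3]
    scene = f"{scene_num}_{env}_{version}"   # e.g. '195A_00E_002'
    take = int(version[-1])                  # last digit doubled as the take
    return scene, take

print(parse_capture_id("195A_tk_00E_002_Z1_pc001_0A01_VC_Av001_LE"))
# -> ('195A_00E_002', 2)
```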
After a few days, I got into the rhythm of the shoot, and stopped having to think about everything before I did it — but I never got over the distraction of the wonders going on all around me.
…CONTINUED FROM THE PRINTED VERSION
Every morning, Cameron would ask me “Is Austin coming in today?” It took several weeks until Austin was able to get there, and then Cameron immediately stopped shooting and gave him an extended tour of our performance-capture facility, including showing him a completed scene. (He also gave him a blanket invitation to return whenever he liked.) Cameron’s loyalty and gratitude were two more of his many characteristics that impressed me.
Here is a brief description of the performance-capture system used on Avatar for those who are not familiar with it. I believe more and more productions will be done this way every year, in part if not in whole.
The stage (usually referred to as “The Volume”) starts out completely empty. The floor is marked off into a 12 x 6 array of 6-foot squares. The ceiling and upper walls are covered with a grid of over a hundred small video cameras, each with a bore-sighted light source, and all connected to a massive computer network. The actors (performance-capture artists) wear black leotards studded with reflective dots. (Actually, they are Scotchlite-covered spheres about the size of a green pea, attached by a short stalk to a Velcro “foot”, which allows them to be positioned anywhere on the leotards.) The nature of this reflective material is that it reflects most of the light back in the direction it came from, so each camera sees only the light from its own lamp. The stalk allows the “dot” to be seen from a wider angle than would be possible if it was mounted flush on the fabric.
The process begins with all the actors standing near the edges of the volume, with their legs spread and their arms extended straight out from their sides. This is called a “T-pose”, and allows the motion-capture computer system to sort out who is who, and where their body markers are located. The MoCap people can see a raw image from the video cameras: it looks like a low-contrast black-and-white picture of the stage and actors, with brilliant points of light on each actor’s body where the markers are located. The computer then generates a wire-frame model of the actor, followed by a featureless, manikin-like figure, and displays it on another monitor screen. Finally, Cameron’s real-time animation system replaces the manikin with the Na’vi character’s body, and places it in the CGI environment.
Prior to Avatar, the actors had to wear hundreds of BB-sized reflective dots on their faces as well as the larger ones on their bodies. Not only did this fail to capture the effects of most emotional reactions on the face (e.g., position and movements of eyes, teeth and tongue), but the actors found it difficult to relate to one another when they both had a bad case of “Scotchlite acne”. (I worked on pre-production tests for “A Christmas Carol”, and they did it this way.) Cameron solved these problems by using a tiny CCD-chip video “facial-capture” camera to continuously capture an image of the actor’s face, which is then recorded by a small video recorder worn by the actor. In post, this image will be used to help animate the character’s features. Using “edge detection”, it can sense pupil size and position, mouth shape, tongue and teeth position, and any skin wrinkles that form. (When Bill Kaplan went on to record “A Christmas Carol” during production, they had already adopted a version of Cameron’s system.)
Avatar originally used a microwave transmitter worn by the actor that sent the live face image to the animation system, where the actor’s moving human mouth and eyes were “painted” on the animated Na’vi figure’s face. However, this feature proved to be not that useful and was eliminated when the production returned to L.A. in 2008, and the aliens’ faces remained frozen in the remaining real-time animation.
During the performance-capture process, we had up to a dozen hi-def video “reference cameras” that covered all the actors’ actions, from wide shots to closeups. This was necessary for two reasons: (1) the face-capture camera has an extremely wide angle lens that distorted the facial features and made it difficult to judge the emotional content that would be shown on the CGI face when the scene was rendered later; and (2) occasionally the MoCap computer would get confused when actors were close together, and in the real-time animation, an arm or leg would suddenly sprout from some character’s head. The live-action video footage enabled the editors to sort out and properly re-attach the various body parts in addition to fine-tuning the CGI characters’ facial expressions.
Cameron’s “virtual camera” is built around a small, hand-held flat-panel video monitor, with reflective markers so that the MoCap system can know where the monitor is located and in which direction it is pointing. It receives an image from the real-time animation system, which then renders the scene from the camera’s point of view. Cameron can move the monitor during a shot, producing the effect of a “hand-held” camera. (The computer can provide varying degrees of motion smoothing if desired.) There is also a “proportionality control”, which, when increased from 1:1 to 10:1, changes the monitor into a “virtual crane”. Raising it two feet creates a 20-foot crane shot. With the control set to 50:1, it is possible to make aerial flyovers. Cameron can also change the physical relationship between the real and virtual worlds, so he can point his “camera” away from the performers but still see them on the screen.
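The proportionality control is easier to grasp with the arithmetic written out. This toy sketch is my own illustration of the scaling idea, not Cameron’s actual control software:

```python
# Illustrative only: physical motion of the hand-held monitor is scaled
# by the proportionality ratio into virtual-camera motion.

def virtual_move(physical_delta_ft: float, ratio: float) -> float:
    """Scale a physical monitor move into a virtual camera move."""
    return physical_delta_ft * ratio

print(virtual_move(2.0, 1.0))    # 1:1  -> a 2-foot move is a 2-foot move
print(virtual_move(2.0, 10.0))   # 10:1 -> raising it 2 feet = a 20-foot crane
print(virtual_move(2.0, 50.0))   # 50:1 -> enough travel for aerial flyovers
```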
After the MoCap system has accepted the T-pose, the actors take their places in the volume, and begin the scene. Cameron can either just stand and watch, or use his virtual camera to record a shot. In any case, the MoCap system will record the position and movement, in 3-dimensional space, of every performer (and every prop that has MoCap dots) during the scene. Then, when Cameron is satisfied with the actors’ performances, he will let them take a break while he walks around the empty stage with the virtual camera as the MoCap data is replayed, and sees the Na’vi characters acting out the scene on Pandora. After checking a few of the angles, he will move on to capturing the next scene. After several days (or weeks) of capture, he will schedule some “camera” days, sans actors, to “shoot” all the coverage for those captured scenes.
The “empty” capture volume is not completely empty. To match the contour of the CGI ground in the current scene (if it is not flat), there are 6’ x 6’ hollow plywood risers of various heights and shapes, which can be placed on the stage floor grid to match the elevation and slope of the virtual world’s surface. Objects in the virtual world that the characters interact with were also represented in the volume: tree trunks were wire mesh cylinders of the appropriate diameter; vehicles and building structures were mere outlines, frameworks with doors and windows delineated in thin-walled metal tubing so the actors would walk or look through the correct area. Props like bows and arrows were adorned with reflective dots so the MoCap system could track them.
The reason that there were no solid surfaces used on the set was to allow the MoCap cameras to observe all of the actors’ bodies all the time. If an actor was behind a tree from the P.O.V. of the virtual camera, the CGI character would be hidden by an opaque trunk in the real-time animation. However, if the MoCap data was played back with the camera now positioned to the side, the system had to be able to show what the character had been doing “behind” the tree.
To ensure that the real and virtual worlds meshed properly, they were adjusted with a “gnomon”. In the real world, this consisted of three short metal rods welded at right angles to each other, and tipped with reflective points. The MoCap system created a virtual gnomon which represented the corresponding location in the CGI world. If the real gnomon was set on the corner of a prop table and the virtual gnomon was not exactly on the corner of the CGI table, either the real table could be physically moved into the correct position, or the entire virtual world could be slid into alignment with a few keystrokes.
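In effect, that alignment is a simple vector translation. A hypothetical sketch, with invented names, of the arithmetic involved:

```python
# Illustrative only: measure the offset between the tracked real gnomon
# and its virtual counterpart, then slide the whole virtual world by it.

def align_worlds(real_gnomon, virtual_gnomon, virtual_world_origin):
    """All arguments are (x, y, z) tuples in the same units."""
    offset = tuple(r - v for r, v in zip(real_gnomon, virtual_gnomon))
    # Translate the entire CGI world so the two gnomons coincide.
    return tuple(o + d for o, d in zip(virtual_world_origin, offset))

# e.g. the virtual table corner sits 0.1 ft off in x:
print(align_worlds((3.0, 2.5, 0.0), (2.9, 2.5, 0.0), (0.0, 0.0, 0.0)))
# -> (0.1, 0.0, 0.0): the virtual world shifts 0.1 ft in x
```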
Pandoran animals were represented in various ways: a human in a MoCap suit mimicking their actions, or a “life-size” cloth doll festooned with reflective dots and manipulated by human performers (without dots, so they would be “invisible”). This allowed the actor to struggle with something that was alive, and could fight back. Other techniques used a mockup of the banshee’s neck and shoulders mounted on gimbals and springs to permit an actor to “fly”, and be subject to the inertial forces of banking and turning.
To further enhance the actors’ connection to the reality of their Na’vi bodies, the performance-capture suits were equipped with hanging tails and the helmets had nerve-laden hair queues dangling from them. The physical motions of these appendages constantly reminded the actors that their “real” bodies possessed them.
This was just one of the many reasons for Avatar’s success: live-action humans were not simply “pasted” into the CGI environment, but interacted with it according to the laws of physics. For a scene with Quaritch in his ampsuit, the actor (Stephen Lang) sat in a full-size ampsuit torso on the green-screen set. (The suit’s arms and legs were added later with CGI.) The torso was mounted on a motion-control base, and the camera was mounted on a motion-control dolly. In the shot where he turns and walks away from Jake, the torso swivels around and then rocks from side to side, while the camera dollies back. The actor is tossed about in the ampsuit exactly as he would be if riding in a real one. This accurate mingling of the real and CGI worlds helped to produce a subconscious acceptance of the reality of Pandora in the minds of the audience.
While some of the shots of humans seen in Avatar are CGI, most are live-action. But even then, many of the live-action sets were entirely or mostly CGI. The scenes were shot against green screen, with only the parts of the set immediately adjacent to the actor(s) being practical. The troopers in the jungle waded through real plants up to their knees (at most) — almost everything else in the rainforest was CGI. The interior of the ISV’s cryo vault (and the huge crematorium that didn’t make it into the film) was CGI except for the foreground modules. Since the live-action was shot on HD digital video, there was no film “sprocket jitter” to make the seams stand out.
Another use of CGI solved a problem that has plagued filmmakers from the very beginning: reflections of lights, cameras, and crew in shiny surfaces. Bubble faceplates on spacesuits were a particular problem. (We had to build a quarter-million-watt artificial sun for the TV mini-series “From the Earth to the Moon” in part because of the astronauts’ mirrored visors.) For Avatar, most of the exopack masks were only open frames, with red fiducial (computer-tracking) dots around the edge. CGI faceplates were added in post, complete with the appropriate reflections of trees, sky, other characters, etc. Many of the windows in vehicles were done in the same manner. This provided a rare benefit to the sound department: the ability to shoot through a “closed” window or mask with a boom mike.
The show moved to New Zealand in late 2007 for the main live-action shoot, where Tony Johnson and his crew took over, and recorded most of the production sound that made it to the release print. He was by far the most deserving of the single production mixer Oscar nomination, though the rest of us contributed a great deal to the film as well. We are all honored to have participated in the making of Avatar.
Cameron’s Simulcam came into use in NZ. It allowed previously-captured CGI characters and/or environments to be inserted into the live-action scenes as they were being shot, and be visible in the camera and set monitors. Jake’s avatar waking up for the first time in the lab is a typical example. First, the scene was motion-captured on the Playa Vista stage. Then, on the live-action set in NZ, the Simulcam provided the previously-captured CGI image of Jake’s avatar sitting up, getting off the gurney and moving around the room, and superimposed it on the view of the practical set with the human actors. Cameron shot the scene exactly as though the 10-foot alien was in the room with him. To cue the human actors, a thin rod with a red ball at the tip was positioned to give them an eye line as the action proceeded.
When Avatar returned to the Playa Vista stage in mid-2008 to continue the performance capture, I found that major changes had been made in the audio infrastructure: the Metacorder was gone, but so was Art’s Sonosax. I had to find another way to boost the isos from my Cooper. I bought two ATI Audio UB400B converters. These 4-channel units took the Cooper’s unbalanced −11 dB and converted it to +4 dB balanced on XLRs. The two amps were physically ganged together with stacking plates, and an ATI 12-volt-to-24-volt DC switching power supply bolted on the bottom so I could power them from my cart batteries.
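The amount of gain those amps had to make up is easy to put in plain numbers (standard decibel arithmetic, nothing specific to the ATI units):

```python
# Going from -11 dB unbalanced to +4 dB balanced is a 15 dB lift,
# i.e. roughly a 5.6x step-up in voltage.
delta_db = 4 - (-11)                      # 15 dB
voltage_ratio = 10 ** (delta_db / 20)     # convert dB to a voltage ratio
print(delta_db, round(voltage_ratio, 1))  # 15, 5.6
```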
There was yet another sea change early in 2009, when we moved across the street to the cavernous “Spruce Goose” building for those green-screen live-action scenes that were shot there. (Because of wartime steel shortages, the structure’s massive support beams were made entirely of formed and laminated wood planks.) The sound department was given a room to store our extra stuff, and to lock up the cart on days when we weren’t working. A small segment of the San Andreas Fault ran through the room. An earthquake (or maybe just settling) had offset the floor vertically by half an inch. The crack ran out the middle of the doorway. Someone had glued a piece of Styrofoam to half of the bottom edge of the door to close the gap — for climate control, I hoped, rather than trying to keep the rats out.
Unfortunately, there was no way to keep the birds out of the stage. Many holes had been cut in the walls for pipes and electrical conduits, and no one had sealed the gaps. I complained on the first day, and repeatedly thereafter, but the Playa Vista stage management never “managed” to close off the entry points. They did come around occasionally to scare off the birds with canned-air horns, but that fix lasted less than an hour. At the time, I was surprised that Cameron didn’t insist on the repairs, but he obviously knew how to deal with the chirps, because they’re not audible in the ampsuit or aircraft interiors, and were lost among the many creature sounds in the exterior scenes on Pandora.
The building was immense, and everything connected with it was, too, even the men’s room: one long wall lined with countless urinals; the opposite wall with toilet stalls; and down the middle, a row of circular wash basins with foot-treadle water valves. I opened a large heavy door in a corner and discovered the Mother-Of-All-Water-Heaters: 8 feet in diameter and at least 12 feet tall, and fed with a 2-inch gas main. Hughes’s factory must really have been something when it was filled with thousands and thousands of workers. I found portions of wooden flooring that appeared to be made of metal — they were carpeted with embedded metal shavings, screws, broken drill bits, rivets, bits of wire, lockwashers and other small hardware, all worn smooth by the tramp of many, many boots and steel-toed workshoes.
Because the live-action footage had to be integrated with the CGI world, it was necessary to keep track of the positions and movements of the 3D cameras and certain set elements (such as the ampsuit torso). But now “conventional” lighting had to be used for the actors and sets, and this, combined with the many reflective surfaces that were present, made the MoCap Scotchlite ball system unusable. Instead, active markers were used: 1-inch cubes with a matrix of infra-red LEDs on five faces. They emitted non-visible coded signals in sync with the frames of the MoCap video cameras, now located on towers positioned around the set. The Simulcam was used here, too.
Audio was handled differently as well. Now eight digital audio feeds were needed for the 3D hi-def video recorders. They were located well over a hundred feet away from my position, and of course, no one told me about this in advance. I had to scramble on my load-in day. I had the four AES/EBU stereo digital line outs from the Deva, but they were 110Ω XLRs, not 75Ω BNCs. I also didn’t have that much spare BNC cable. While my boom operator was running my hard-wired red light and bell system all over the huge building, I dug out my long-unused duplex boom cables and made two runs of that, giving me four XLR paths. I also had two sets of male and female Neutrik 75Ω-to-110Ω transformers that I use for long runs of timecode over XLR cables. I put those (with sex-changers on two of them) at the video-recorder end of the cables. It worked, but I wasn’t happy. I brought in more BNC cable the next day, and swapped out the XLR cables, moving the impedance-matching transformers back to the Deva end.
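Why bother with matching transformers at all? A quick, generic transmission-line check (nothing Avatar-specific) shows how much signal a bare 110Ω-to-75Ω mismatch throws back at you:

```python
# Standard reflection-coefficient math for a mismatched digital line.

def reflection_coefficient(z_load: float, z_line: float) -> float:
    """Fraction of the incident signal voltage reflected at a mismatch."""
    return (z_load - z_line) / (z_load + z_line)

# Driving a 110-ohm AES/EBU output into 75-ohm coax without a transformer:
print(round(reflection_coefficient(110.0, 75.0), 2))  # ~0.19, i.e. ~19%
```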
Several days later, Jessie went on to another project, and Ken Beauchene came in to finish out the last months of the show. His fishpole was allowed to appear in the green-screen frame, and didn’t need to be wrapped with green tape. (There are green and blue cloth sleeves available for shows that require the pole to be “invisible”, along with an acoustically-transparent colored cloth cover for the mike and mount.)
We had several green-screen scenes involving aircraft interiors, such as Quaritch leading the attacks on the Hometree and the Well of Souls. With the windows removed, we could use a boom mike (as well as planted mikes on the window frames). Stephen was bare-headed, but the other aircraft crew all wore helmets. Cameron wanted them to hear Stephen’s dialog wherever they were in the vehicle. I opened up the helmet cable plug and located the headphone terminals. It was a simple matter to connect them to some Comtek PR-25s — problem solved. Then Cameron started giving them lines. A radio mike with a Countryman B6 wedged in the top of the visor opening took care of that, except for an occasional squeak from the Styrofoam liner.
When we had a different scene a week later, I planned to use the same technique. But Murphy showed up at call time: the helmets for this scene had no built-in headphones. I modified my single-earphone units to fit inside. Then Cameron wanted to be able to talk to the actors during the shot. Now I was in trouble. On an ordinary movie shoot, I would have plenty of free buses and outputs to immediately create an appropriate mix and send it to another RF link. But Avatar was anything but ordinary. Every connector on my cart had a cable plugged into it — in fact, it resembled Grace’s body draped with the tendrils of the Mother Tree.
I pulled my cart’s monitor speakers off the AUX bus, and the Cooper’s Channel 8 off the main mix bus. I patched Cameron’s radio mike into 8, blended it with the mix on the AUX bus and fed that to my B-unit Listen brand audio monitor transmitter. I checked out the Channel 8 program on two Listen receivers with a spare headphone, and then connected them to the helmets. We wired up the actors and they climbed into their cockpits. They could hear themselves, but not Cameron.
I quickly checked the panel to make sure I hadn’t bumped a switch — nothing wrong there. I grabbed the last receiver. It worked properly, so I had Ken swap it out for one of the “defective” ones. But as soon as it was on the actor, it refused to pick up the director’s voice, though I could still hear him at my cart.
Fortunately, Cameron didn’t immediately fire me. Instead, he just had walkies put in the cockpits — problem solved.
It took me several hours to figure out that mystery, and why I was able to buy all four brand new Listen units on sale for $200 total. Their front end is broad enough to fly the ISV Venture Star through. When the receivers were at my cart, next to the Listen transmitter, they captured its signal satisfactorily. But on the set, the Comtek transmitter’s slightly stronger signal, only 400 kHz away, was captured instead. (I had set the Listens to the far end of the band, but the Comteks happened to be set in the middle, and I didn’t have time to reset all of them to the other end.)
One of the fascinating things about Avatar was doing “impossible” shots on a regular basis — shots that no one else would even think of trying because they know it’s impossible. One day, we had a virtual CGI character physically pick up a human actor on the set. It’s the scene near the end where Neytiri picks up the unconscious Jake and places an oxygen mask on his face.
An “ordinary” director would have done the scene entirely in CGI, and then cut in a live-action closeup of Sam for the mask being put on. But Cameron had other ideas — including not intercutting anything. The scene was shot at three different locations, at three different times, with three different techniques:
1. The complete scene was shot at Playa Vista with MoCap, using a child actor to play the human Jake in order to keep the correct size relationship with Zoe as a nine-foot-tall Na’vi.
2. A motion-control camera was set up on the live-action stage in New Zealand, in the “Remote Link Module” set. With no actors in the set, the Simulcam was used to feed the camera viewfinder and play back the CGI image of Neytiri jumping in the broken window, picking up Jake, and putting on his mask. Now the camera operator moved the camera around as though shooting the scene with a real alien moving around the set and picking up Jake. This created a moving background plate of the Link Module to match the CGI action.
3. Back at Playa Vista, a green-screen duplicate of the Link Module set was built, and equipped with a motion-control camera positioned to match the location of the camera on the live-action set in New Zealand. Data from the NZ shoot was now used to duplicate the camera moves. Again the Simulcam was used, this time to initially place Sam on the floor of the green-screen set in a matching position to the CGI Jake. Next, the scene was played back and the camera tracked the CGI Neytiri as she jumped in. At the point where she picks up Jake, two crew people in green Ninja suits reached out and lifted up Sam, watching a monitor with a 50/50 mix of live and playback images to match Jake’s position during the shot. A third green Ninja reached in with the mask and placed it on Sam’s face. The green-clad Ninjas were invisible in the shot, except for their eyes floating around the set. (The eyes were later removed digitally — from the shot, not the Ninjas.)
All of this work resulted in Jake’s entire body properly reacting to the force of being lifted by Neytiri’s arms, the draping of his clothes hanging down, etc., again seamlessly tying his real body into the CGI world.
Of the 92 days I worked on Avatar, not one of them was boring.
Going back to 2007, after the first few weeks of performance capture, Cameron still had not killed anybody. I asked some of the crew who had worked with him before, and they all said he was far calmer than they had ever seen. Up to this point, I had only been yelled at once: an A.D. had stopped me from radio miking a bunch of stunt performers who were making occasional grunts and other sounds — of course, it turned out that Cameron wanted them recorded individually, not just with the boom I used.
I decided to take a chance. Cameron had announced that he wanted Avatar to be as scientifically accurate as possible. I had noticed that in one of the background plates, the disc of Polyphemus (the Jupiter-like planet that Pandora orbits) in the night sky had an upper quadrant illuminated. This could only happen if the sun was well above the horizon (i.e., in the daytime). Since I have an academic background in physics and astronomy, I “casually” mentioned this discrepancy to him. He thought for a moment, then thanked me and asked me to tell him about any other discrepancies I noticed. (Unfortunately, in the rush to get Avatar released, this and a couple of other errors were not corrected.)
The next day, I placed two pages of comments on the physics of a lower-gravity world with a higher-density atmosphere on the table he uses for his workspace. He picked them up and read through them as soon as he saw them, and then came over to discuss the issues I had raised. The thing that impressed me the most about Cameron was the breadth of his knowledge. He knows so much about so many things. While I know somewhat more about physics and astronomy than he does, the total amount of his knowledge completely dwarfs mine. A week later, I had five pages for him, and my notes became a continuing element during the production.
Sometime in 2008, Cameron asked me if I would like to move up from being the set’s “unofficial science consultant” to an official one for the spin-off books that would be published later. Of course, I immediately agreed. (It never occurred to me to discuss being paid to do this, but his generosity resulted in my receiving a sizeable remuneration.) However, I wanted more. I wrote up a 26-page “A (not so) Modest Proposal” to demonstrate my literary skills. He told me he had never encountered a scientist who could write like that, and I was promoted on the spot to a contributing writer for the Pandorapedia and other books.
Cameron took the science behind Avatar seriously from the start. He put a great deal of thought into the mechanical design of spacecraft and other vehicles, weapons, and laboratories, as well as the anatomical and physiological design of the lifeforms. The logic of the story (and the backstory) was also worked out in detail. Cameron and producer Jon Landau held a 2-day symposium with most of their human resources: 3 writers; 2 editors; a xeno-anthropologist (Na’vi music and other cultural aspects); a botanist (Pandoran flora); and an astrophysicist, i.e., me. There was a lot of cross-fertilization between the experts during the two long days, which will ultimately lead to a “bible” for guiding the creation of “Avatar 2” and “Avatar 3”, the expanded video game, spin-off novels, etc.
My job is to keep the “real” science as accurate as possible, and provide believable “technobabble” for the unreal stuff: faster-than-light communications, unobtanium, floating mountains, etc. I worked out orbits for the planets in the Alpha Centauri system that (hopefully) make them currently undetectable from Earth. And then how they got that way in the first place, incidentally creating unobtanium in the process. I also took the image of the interstellar vehicle that Cameron had created and wrote a description of its physical structure and inner workings.
To date, my writing has been featured in the Survival Guide to Pandora, the online “Pandorapedia” and the DVD version, with more to come. Perhaps I’ll take up writing professionally after I get too old and deaf to mix.