How We Shot in PAL DV,
Cut in Final Cut Pro and
Transferred to Film
by Andy Somers
Whenever possible, I try to spend my free time helping nonprofit filmmakers realize their projects. Not only do I feel that it's important to assist up-and-coming filmmakers, but it often affords me the opportunity to experiment with new tools and techniques.
One such recent project was entitled 'Sweet', a short directed by Elyse Couvillion. The piece is 3 minutes in length and is set to the music 'Coffee' from Tchaikovsky's 'The Nutcracker Suite'. The film was shot and edited completely in the digital domain, using Final Cut Pro, and then transferred to 35mm film as the final step in the process.
Elyse was fortunate enough to attract Allen Daviau, ASC as Director of Photography for the project. His creativity and expertise gave us the opportunity to push the limits of Digital Video technology as an origination medium. We had a full crew and lighting package and it was amusing to see so much gear and so many people around a big Chapman dolly with a tiny Sony VX1000 DV camera mounted on it.
Whenever I lecture film students, I typically begin with the statement: "Good post-production starts before you begin shooting!" For Sweet, we took this quite literally. Because it was set to music, Elyse wanted to pre-visualize the production, so we opted to create an "animatic" much like those used in the commercial industry.
Elyse spent some time with William Mitchell, her storyboard artist, developing the design of the individual shots. We then scanned the storyboard frames into Adobe Photoshop and imported the resulting stills into the edit system. Here, we were able to cut the storyboards to the music. Final Cut Pro has some easy-to-use keyframe animation features that allowed us to simulate possible camera moves for speed and pacing.
One drawback of using FCP is that all stills (not to mention motion effects) must be rendered before they can be played. A future realtime version of the program should address this issue.
Preparing To Shoot
Because the intention was to transfer the completed project to 35mm film, we decided early on to shoot this project on MiniDV in PAL (as opposed to NTSC). PAL is the European television standard, and it runs at 25 frames per second. By shooting in PAL, we could transfer one full PAL frame to one full film frame. Of course, we'd be playing back the footage 4% slower to achieve this (shooting at 25 and playing back at 24), but a speed change this small is not generally noticeable in terms of action. (It can result in a noticeable audio pitch change, particularly in music, but more on that later.)
Shooting in PAL eliminated the motion artifacts caused by converting 30 fps NTSC video to 24 fps film. We also shot in 16x9 mode, where the video data was digitally interpolated and squeezed in a manner similar to shooting with an anamorphic lens. When unsqueezed, the resultant image has an aspect ratio of 1.78:1, which is much closer to 1.85:1 than the 1.33:1 of normal video.
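As a back-of-the-envelope sketch (purely illustrative, not anything used in production), the frame-rate and aspect-ratio arithmetic works out like this:

```python
# One PAL frame maps to one film frame, so playback slows from 25 to 24 fps.
PAL_FPS = 25
FILM_FPS = 24

slowdown = FILM_FPS / PAL_FPS               # 0.96: footage plays 4% slower
speed_change_pct = (1 - slowdown) * 100     # the 4% speed change

# 16x9 anamorphic-style squeeze: the unsqueezed aspect ratio
aspect = 16 / 9                             # ~1.78:1, close to film's 1.85:1

print(f"slowdown factor: {slowdown}")
print(f"speed change: {speed_change_pct:.1f}%")
print(f"unsqueezed aspect: {aspect:.2f}:1")
```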
Cutting It Together
After shooting we loaded the footage into Elyse's Final Cut Pro system using FireWire, so the video never left the digital domain. My next task was to cut the footage to the music.
Because we were shooting in PAL, which is 4% faster than film speed, I sped up the music by 4% so that when we transferred to film, it would be playing back at its correct speed and pitch. I used Adobe After Effects to perform the speed change on the music because it allows for very accurate time manipulation.
I found Final Cut Pro to be intuitive, complete and user-friendly. Having cut on many other systems, including Avid, Media 100, Premiere, and so forth, I have to say that as a basic cutting tool, FCP has become my favorite.
The approach to editing in FCP is quite different from that in the Avid. The system is much more focused on drag-and-drop mouse moves than the Media Composer. It does have a Trim Tool, somewhat similar to the Avid's, but I found that it is actually easier to trim directly in the timeline.
All is not roses in the world of Final Cut Pro, of course. FCP still has a serious lack of functionality in terms of media management: you must handle it all manually at the Finder level. That is, if you want to delete a master clip, you need to find it in its folder on the hard drive where it lives. When you have to deal with a large number of clips, this can be quite cumbersome.
FCP also seems to have some issues with frame accuracy, both when digitizing and when using the "edit to tape" function. There seem to be some random inaccuracies when digitizing, where the timecode on the digitized footage is off by a frame. Also, you have to test the timecode offset for each type of deck you are going to use. For instance, on one BetaSP deck we have, the offset is 5 frames; on another type of deck it is 6 frames.
The film had a few simple effects, such as a composite shot (a train reflection in a car windshield), 2 time-lapse shots (which were shot with the Sony VX1000 camera) and some fades and dissolves.
While Final Cut Pro is capable of doing these effects to a reasonable level of quality, we chose to do the final versions in Adobe After Effects. Not only is After Effects a more capable (and complex!) piece of software, but it has a decidedly better rendering engine than Final Cut Pro. For example, we were unhappy with the look of some of the fades that FCP rendered; After Effects was able to create fades that looked more like film.
We contracted with Stu Maschwitz at The Orphanage to handle the digital effects we needed. The Orphanage is a group of ex-ILM engineers and effects artists located in the Bay Area.
A Magic Bullet With Your Name On It
The Orphanage also contributed a process they call "Magic Bullet". Magic Bullet is software that will handle color correction and, more important, remove the inter-field motion of a video frame while retaining its full resolution. It takes the 50 fields of PAL video and turns it into 25 progressively scanned frames. These frames do not have the "field flicker" that a full frame of PAL (or NTSC) video will display.
The Magic Bullet process gave us a series of numbered Targa images (one per frame) that could then be printed to 35mm film. These Targa files can also be loaded into After Effects and converted to NTSC video using After Effects' pull-down process (similar to the pull-down used when telecining film). The result is an NTSC version that looks amazingly like film and far better than the so-called "film look" processes.
Dealing With Sound
The sound requirements for this project were fairly simple, as our soundtrack was just the music, so we used FCP to produce the mix. But because I was cutting precisely to music and we were working in PAL, I had to take special care with sync.
First, we needed to sync up our video soundtrack to the film we were going to create. So I digitally created a countdown leader and 2-pop in Final Cut Pro. Because it was PAL at 25 fps and each PAL frame was going to become a single film frame at 24 fps, I counted out just 24 frames for each number in the countdown. Most important, I set the 2-pop not at 2 seconds before the first frame of action (50 frames), but 48 frames before (i.e., the 2-pop frame, then 47 frames of black, then the FFOA).
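The leader arithmetic above can be sketched in a few lines (a minimal illustration of the frame counts, not the actual leader-building process):

```python
# Each PAL frame becomes one film frame, so the countdown leader is
# counted in film frames (24 per second) even inside the 25 fps project.
FILM_FPS = 24

frames_per_count = FILM_FPS        # 24 frames for each countdown number
two_pop_offset = 2 * FILM_FPS      # the 2-pop lands 48 frames before FFOA

# layout: the 2-pop frame itself, then black, then first frame of action
black_frames = two_pop_offset - 1  # 47 frames of black after the pop

print(frames_per_count, two_pop_offset, black_frames)
```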
I mentioned earlier that I sped up the audio 4% for the PAL project so that it would play properly when run at film speed. To be more precise, I sped it up by 4.1% (actually 4.1% is an approximation, but that is beyond the scope of this article). I thought it would be easier to make a transfer from an NTSC master and do a standard .1% pull-up during the transfer to film than to try to find a transfer facility that could do a 4% pull-down. More important, I wouldn't have to deal with pitch changes and other quality issues when making a big change in speed.
So after completing the PAL version of the project, I made an NTSC project for just the audio. I loaded the normal-speed music and a sync pop (in this project the sync pop was 60 NTSC video frames ahead of FFOA).
Video was cut at 25 fps and would eventually be slowed to 24. Audio would come from the NTSC project and be sped up by .1% during the transfer to film. So the total compensation in the PAL project had to be 4.1%. But because my music was handled in NTSC, it only needed to be adjusted by .1% and quality was preserved.
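The compensation above can be checked with a quick sketch. The exact factor is 25/24, roughly +4.1667%; the 4.1% in the text is a rounding of it (this is illustrative arithmetic only):

```python
# Audio sped up by 25/24 in the PAL project returns to true speed and
# pitch once the picture is slowed from 25 to 24 fps for the film-out.
PAL_FPS, FILM_FPS = 25, 24

exact_speedup = PAL_FPS / FILM_FPS   # 1.041666..., i.e. ~+4.1667%
ntsc_pull_up = 1.001                 # the standard 0.1% pull-up at transfer

# sanity check: speed up by 25/24, then play back at 24/25 -> unity
round_trip = exact_speedup * (FILM_FPS / PAL_FPS)

print(f"exact speed-up: +{(exact_speedup - 1):.4%}")
print(f"round trip factor: {round_trip}")
```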
If we had sync sound from PAL dailies, the procedure would be a bit different. Here, the easiest way to handle the situation would be to output the PAL version of the sequence, picture and track, to a QuickTime movie, and then use After Effects to add 3:2 pulldown and slow it down to NTSC speed (approximately 4.1%). You would then have an NTSC QuickTime movie that could be imported into Pro Tools for further work. If you are concerned about quality, you could import just the audio from the PAL QuickTime movie into Pro Tools and use Pro Tools to do the speed conversion. The advantage here is that Pro Tools can alter speed without affecting pitch, and its audio algorithms are better than After Effects'.
In addition, I was very concerned about potential drift problems playing "wild" out of FCP to a DAT. So I first output the FCP NTSC audio sequence to an NTSC MiniDV camera using FireWire. This produced a digital clone of the audio. Next, I took the MiniDV tape (and camera) to Film Leaders in Burbank, where it was transferred to DA88. MiniDV cameras carry timecode over FireWire but don't provide traditional LTC, so having the DA88 chase timecode was out. The solution? We simply took the NTSC composite video output of the MiniDV camera and used it as a sync reference for the DA88. This locked the DA88 to the camera and ensured they were running at exactly the same speed. The DA88 was sent to DJ Audio, where it was pulled up .1% to film speed and transferred to optical.
The sound for this project lived only in Final Cut Pro; we never went to a dubbing stage for a real mix. Final Cut Pro does not have any form of level metering, and during the transfer to DA88, it was clear that the output levels from Final Cut were a bit low, so we boosted the level when shooting the optical track.
Gluing It All Together
The Targa images from The Orphanage were sent via Exabyte to Film Output Express in Glendale where they were printed to 35mm film. This was sent to FotoKem, along with the optical track, and we all marched over the next day, eager to see the results.
I was quite surprised to see images that did not look at all like they originated on video. There were a couple of shots where I saw minor digital artifacts, but overall it looked like it might have originated on film, perhaps 16mm. The look was softer than something shot on 35, but certainly did not have any of the artifacts that we traditionally expect to see when NTSC video is transferred to film.
If you are interested in seeing the show, it is playing at ResFest, a touring digital film festival. ResFest will be in LA, November 1-5, at the Egyptian, Writers Guild and DGA Theaters. For more, see
Andrew Somers is a Guild member, a picture editor, sound editor, and mixer. Contact him via General Titles & Visual Effects.
The Motion Picture Editors Guild Magazine
Vol. 21, No. 5 - September/October 2000