
Developments in Post Production 1946 - 1991

In 1991 I was asked to give a Masterclass for the British Kinematograph, Sound and Television Society at the Museum of the Moving Image (MOMI) in London. The following notes formed the basis of this talk, delivered on 19th June 1991. They offer a brief overview of the developments in post production techniques since the advent of television, focusing especially on the BBC.

Three themes run through this talk; three broad trends which have dominated, and will continue to dominate, television post production:

a) The move from film to tape

b) The move from linear to nonlinear recording

c) The move from analogue to digital

What is post production? It’s mainly editing, both picture and sound. Some other necessary processes are also included, mainly concerned with transferring material from one medium to another (an increasing nightmare as we shall see).

The film industry was originally hostile to TV. Television used a little film for links initially but apart from that, features, newsreels, and live shows predominated.

Later on, film cameras were often put beside outside broadcast (OB) cameras, and the film processed and shown later that day. There were no internal sound dubbing facilities; commentary was often given live as the film was transmitted.

The post-war 1946 BBC film unit had one cameraman, Alan Lawson, and one editor, Don Smith, maker of “London to Brighton in 4 minutes”. He was known as Film Assistant (Editorial) until 1947, because non-journalists were not allowed to be called editors! Neg cutting was done by the film unit.

Early films were factual; assurances were given to the film industry that there was no intention of selling them or of entering the cinema newsreel business.

In 1947 plans were announced for the first weekly newsreel, and the film unit was increased in strength. Also in 1947 ‘off tube’ filming was introduced. Telerecording, as it became known, was to become a powerful tool, because for the first time it was possible to record live television for possible resale or for further post production.

It was first used publicly in November of that year, for the Cenotaph service and for the wedding of Prince Philip and Princess Elizabeth. Television cameras were not allowed inside the Abbey, but film cameras were. On the evening of the wedding viewers saw the film from inside the Abbey cut together with telerecordings of the procession that had been shown live earlier in the day. This was a first in television post production.

Until the mid fifties there were few all-film TV programmes. Rather, film was used to provide inserts into OB or studio based shows. As the sixties approached, this began to change and more directors wanted to make all-film programmes— still mainly on 35mm (and B&W, of course).

16mm was viewed with great suspicion by nearly everybody: engineers didn’t like the equipment, sound was poor, it was hard to edit, telecines didn’t cope well with it, and so on. The introduction of the 16mm Arriflex camera by 1958 marked the start of a new era in television film making.

Editing was also more difficult on the smaller gauge. Not only was it harder to handle, but the intermittent motion type editing machines, exemplified by the 35mm Moviola, were much more difficult to use with the more fragile 16mm. Although 16mm Moviolas were introduced, the continuous motion Acmade machine and, later, the ubiquitous Steenbeck were to dominate television film editing.

In the sixties film sound moved from optical to magnetic. I worked with editors who still moaned that on mag they could not see the sound! Thirty years later, with the advent of digital disk audio editing systems such as Screen Sound and Dawn, it has again become possible to ‘see the sound’ and to use the shape of the displayed waveform to determine the exact editing point.

Telerecording was not ideal, because of the time required to process it. In the early fifties a number of teams were working on the possibility of magnetic recording of pictures, using the same principles as those used in audio recording. One such development was VERA (Vision Electronic Recording Apparatus) produced by the BBC. VERA was a linear tape recorder, with three heads (two vision, one audio) and a tape speed of 200 inches per second. The reels were huge (20 1/2” in diameter) and could record 15 minutes of programme. The tape was 1/2” wide, the same size as the BBC’s new VT standard, D3!

VERA worked, but the size and speed of the spools made it impracticable as a post production tool, and it was the quadruplex development by Ampex which was adopted as a standard both here and throughout the world. In the USA the development of videotape was driven by the need to record and replay shows at the same local time across different time zones. The quadruplex video tape recorder using 2” wide tape was invented by Ampex in the US in 1956. As the name implied it had four recording heads, recording across the moving tape, giving an effective tape speed of 1500 inches per second.

Quad tape was composite, with two audio tracks, one of which was used as a cue track and later for longitudinal time code (LTC). There was no picture search, still frame, or slo-mo. By the end of 1959 the BBC had four VT machines in regular use, although there was still twice as much telerecording being made as videotape recording. Recording was increasing because of the need to get the best use out of studios, and because of foreign sales.

Videotape was originally devised as a recording and storage medium. There was no thought of editing it. But soon this became a requirement. At first the only way to edit videotape was to physically splice it, just like audio tape. These edits were first made during black events, between fade-outs and fade-ins, for instance. In order to get an accurate picture-to-picture edit, without frame roll, it was necessary to find the exact point of the frame start. There were no sprocket holes, of course, so iron filings were used to reveal the pattern of magnetic fields on the tape.

The fact that the sound was laid down 15 frames ahead of the corresponding picture meant that simple level-sync cuts were impossible, and sound had to be copied off to 1/4” tape and then laid back onto the edit tape. However, paradoxically, as VT machines got better and engineering tolerances finer, the physical splice became increasingly unsatisfactory and likely to misplay. The advent of colour in the late sixties simply exacerbated these difficulties. Despite this, cut editing was still occasionally being used well into the seventies on programmes such as “Grandstand” which sometimes needed a very fast edit and could not wait for material to be dubbed from player to recorder.

The invention of the “Electronic Editor” (Ampex) and “Electronic Splicer” (RCA) made it possible to ‘drop in’ a new shot onto an existing picture without any visual disturbance such as frame roll or loss of colour. With this, the potential for true videotape editing was opened up.

Cueing up the machines to enable accurate copying from player was initially a rather hit and miss affair, requiring copious amounts of luck, skill, patience, and chinagraph. The first electronic editing at the BBC used the “Editsure” to trigger the editing process more accurately. An improvement was “Editec” from the Ampex Corporation, which used the cue track on quad tape to store the edit cues. But even this did not provide true frame accuracy—that had to wait for the invention of time code.

While tape editing was struggling to emerge from its chrysalis, film editing was also undergoing some significant developments. It received two major technological advances round about 1965: the tape joiner and the PicSync. These permitted a much faster and more flexible approach to editing, the tape joiner in particular having a major impact on technique. Before its advent all film joins were cement splices, made by overlapping and then chemically welding the two pieces of film.

Splices were time-consuming to make and difficult to undo; a common technique was for the editor to mark the cuts, break off the film a few frames away from the cut point, clip it to the previous shot with a small brass clip, wind it into the assembly roll and then pass it to the assistant to splice together on a lethal instrument called a foot joiner. Subsequent edits could be made, but if a shot needed to be extended it was necessary to add some blank spacing film to build it back up. Tape joiners allowed the editor much more freedom. An edit could be quickly tried and then re-made without any real fuss.

These two inventions crystallized film editing technique and technology. There have been no significant developments in film editing practice or equipment since that time. Even though improvements have been made, any film editor capable of cutting film in 1965 could be transported 25 years into the future, walk into any cutting room today, and start editing within five minutes. Such uniformity of practice throughout both space and time has never applied to videotape editing—and there is still no indication that it will in the future.

The mid sixties saw the first use of slow motion, initially for sports events such as the Tokyo Olympics and the 1966 World Cup. This was achieved by modifying a quad machine to, say, play each frame three times if third speed was required. The machine would play the frame, whizz back, then play the frame again. In order to bridge the gaps, a magnetic disk was used as a frame store to cover the whizz back.
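
To make the idea concrete, here is a minimal sketch (in Python, purely illustrative and not the Ampex implementation) of the frame-repetition principle: playing each frame three times yields one-third speed.

```python
# Illustrative only: third-speed slow motion by repeating each frame,
# as the modified quad machines did by replaying each frame three times.

def slow_motion(frames, repeats=3):
    """Return a frame sequence in which every frame is played `repeats` times."""
    return [frame for frame in frames for _ in range(repeats)]

print(slow_motion(["f1", "f2", "f3"]))
# ['f1', 'f1', 'f1', 'f2', 'f2', 'f2', 'f3', 'f3', 'f3'] -- one-third speed
```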

This was not the first use of disk to record video signals. In fact, the first disk recorder dates back to Baird’s experiments in 1927, when he successfully recorded a 30 line per picture video signal (30 fps, 15 pixels per line). But this was a physical recording on the disk, like an audio recording (and similar to a laser disk today) and so was not re-usable. For full functionality a magnetic disk is required.

The first fully functional magnetic disk recorder was the Ampex HS 100 videodisk, also introduced in the mid sixties, which was able to provide instant replay, together with variable speed and freeze frame. The system used two double-sided 16 inch disks with four record/replay heads and was capable of storing 1800 fields (36 seconds at 50 fields per second).

This is the first intimation of the second major theme in this survey, the move from linear to nonlinear recording media. Both film and tape are linear media. To access a frame 30 minutes further down a roll you have to run every frame of those 30 minutes before reaching the frame of your choice. A nonlinear medium offers the chance of accessing one frame as quickly as any other, no matter how ‘far away’ from it you are. To put it another way, linear recording entails sequential access, nonlinear recording enables random access.
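
As a rough illustration of the difference, consider this toy model (hypothetical figures, not measurements of any real machine): on a linear medium the access time grows with the distance to the wanted frame, whereas on a random-access medium it is effectively constant.

```python
# Toy model of access times (hypothetical figures, for illustration only).

FRAMES_PER_SECOND = 25
SHUTTLE_SPEED = 10          # assume the tape can shuttle at 10x play speed
DISK_SEEK_SECONDS = 0.5     # assume a roughly constant seek time on disk

def tape_access_seconds(current_frame, target_frame):
    """Linear medium: every intervening frame must pass the heads."""
    distance = abs(target_frame - current_frame)
    return distance / (FRAMES_PER_SECOND * SHUTTLE_SPEED)

def disk_access_seconds(current_frame, target_frame):
    """Random-access medium: any frame costs about the same to reach."""
    return DISK_SEEK_SECONDS

# A frame 30 minutes further down the roll:
target = 30 * 60 * FRAMES_PER_SECOND
print(tape_access_seconds(0, target))   # 180.0 seconds of shuttling
print(disk_access_seconds(0, target))   # 0.5 seconds
```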

The above statement may surprise some people since it is often claimed that film is a nonlinear medium. This is quite untrue; although film can be broken down into small chunks to make access relatively quick, this is a result of the physical ‘cut and paste’ nature of the medium rather than the way the image is recorded or stored.

If each take (that is, each attempt at a shot, as in “scene 5, take 2”) is broken down and wrapped up separately and stored in film cans, as is normal practice, then we could speak of the storage of takes as nonlinear, with random access to any given film can or take within that can. But within each take access would still be sequential.

There is no theoretical reason why this approach should not be adopted with videotape, especially with cassette systems: each take could be copied onto a separate cassette, but the sheer complexity and scale of the undertaking would ensure that it was unlikely to be efficient on any but the largest and most heavily edited of projects.

True nonlinear systems offer the potential for real ease and speed of editing. The first nonlinear video editing was devised by CMX (a joint venture between CBS and the Memorex corporation). Sound and vision were recorded normally on a VTR and then transferred to computer disk packs of the kind used then for mainframe computers. The video bandwidth was restricted to 2MHz, so the picture was monochrome only. There were two sound channels.

Each disk pack had twenty recording surfaces and was capable of storing two and a half minutes, or five minutes if only one field per picture was recorded. Six disk packs could be used to give thirty minutes capacity. Slow motion and freeze frames could be displayed. An edit list was recorded on the computer and was subsequently used for conforming the original tapes. All communication with the CMX was via light pen—a system still used in Ediflex.

The videodisk also made its appearance around this time. While it is true to say that the videodisk is a nonlinear recording medium, it could hardly be used as an editing system, since its capacity was only 36 seconds. Nevertheless, it did prove useful in giving slow motion capability to quad edit suites, something which the videotape machines themselves could not achieve until the advent of helical scan and C-format.

Colour started in 1967 (and so did I), but most postproduction was still in monochrome. We had 10% of our rushes printed in colour. This led to some problems (red fogging) and misapprehensions—I remember one of the great BBC editors, Alan Tyrer, telling of a pretentious young production assistant seeing a rough cut in a dubbing theatre and waxing lyrical about the aesthetics of the choice of colour shots in the predominantly monochrome cut. The choice was, of course, completely random.

Improvements in telecine, with inventions such as Vertical Aperture Corrector, were instrumental in obtaining a high standard of colour transmission. According to Kodak, the standards achieved on a day to day basis by the BBC were the decisive factor in the world wide trend towards using negative and positive, rather than reversal, for television.

Time code first made its way into the BBC in 1968, and a system called “On Time” was used to make accurate edits with respect to time code. Yet even this was not 100% accurate, because although it could start the two machines at exactly the right frame, it could not guarantee that they would run up to speed at exactly the same rate.

The manufacturers of “On Time”, the Electronic Engineering Company (EECo) developed an editing system which provided accurate editing by controlling the run up speed of the replay machine to ensure that it reached the ‘in’ point at exactly the same time as the record machine.

The EECo system could also control a second player and vision mixer to produce mixes and fades, and could be used to trigger the remote start of other devices such as an audio tape machine.
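
The principle can be sketched as follows (a simplified illustration, not the On Time or EECo logic): time code is converted to an absolute frame count, and both machines are parked the same pre-roll ahead of their respective ‘in’ points so that, started together and run up together, they reach those points at the same instant.

```python
# Simplified illustration of time-code-based cueing (not the EECo system).

FPS = 25  # EBU/PAL frame rate

def tc_to_frames(tc):
    """Convert 'HH:MM:SS:FF' time code to an absolute frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * FPS + ff

def frames_to_tc(frames):
    """Convert an absolute frame count back to 'HH:MM:SS:FF'."""
    ff = frames % FPS
    ss = (frames // FPS) % 60
    mm = (frames // (FPS * 60)) % 60
    hh = frames // (FPS * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

PREROLL = 5 * FPS  # park both machines five seconds before their in points

player_in = tc_to_frames("10:02:13:08")     # in point on the replay machine
recorder_in = tc_to_frames("00:15:00:00")   # in point on the record machine

print(frames_to_tc(player_in - PREROLL))    # 10:02:08:08
print(frames_to_tc(recorder_in - PREROLL))  # 00:14:55:00
```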

Sound handling was still a major problem in video editing, and complex sequences were still being film recorded, track-laid and dubbed on film, and then laid back to tape. The first ‘video dubbing theatre’ was introduced in 1973; it had a multi-track audio recorder synchronized to a low band U-Matic video recorder. This became known by the acronym SYPHER (SYnchronised Post dub, Helical scan and Eight track Recorder). The BBC now has four SYPHER suites, and the original one has just been refurbished.

During the 1970s the demand for film increased, with many major productions being made on 16mm and occasionally on 35mm. But already the relationship between film and tape was being scrutinized. In an address to the IEE in 1972 the Director General, Charles Curran, said:

“At the moment film has all the advantages of flexibility of editing. It is more expensive than electronic methods in the studios but it is incomparably more mobile outside. It is also easier to duplicate and more universal as a medium from which to originate transmissions in a world where systems differ…But the day is clearly coming when it will be possible to take an electronically based picture with as much flexibility as can now be done with a film unit. What will happen then to all the union expectations about the handling of picture making gear in a mobile setting, I don’t know. But it is impossible to believe that the 1980s will not see the wholesale development of practical systems of electronic recording in the field which could replace the film camera.”

By the start of the seventies there was a multiplicity of helical scan tape systems available, mainly using ½” or 1” tape, with one, the Akai VT100, using ¼” tape. In 1972 Low Band U-Matic was introduced in the US as a medium for ENG. This was also a helical scan system, and was, I think, the first colour videocassette system. The picture quality was nowhere near as good as film or 2” tape, but already many could see the writing on the wall.

The early seventies also saw the beginnings of ‘offline’ editing, although it was viewed in a very different light from now. In his major review of videotape editing in 1973 Geoff Higgs wrote that offline is “...perhaps more suited to short, intensively edited sequences, such as titles, special effects, and commercials.” Curiously, this is almost exactly the opposite of the perspective which we would now adopt.

In 1977 BBC TV News did a one-year ‘Electronic News Gathering’ (ENG) experiment. Although it was accounted a success, permanent adoption was delayed until the end of 1980 because of negotiations with the unions.

1977 also saw the beginnings of another revolution, and the third of our themes: the move from analogue to digital. New England Digital brought out the Synclavier, the world’s first digital synthesiser. But the significance of digital recording is not immediately obvious. Indeed, at first sight it might seem like a retrograde step. However, consider what happens when a sound is recorded using analogue methods:

What we call sound is actually the result of continuously changing air pressure interacting with structures in our ears. Analogue recording works by analogy, and so as the air pressure increases, so will the electrical voltage or magnetic field strength on the recording. As an example let us say that we wish to represent a certain sound by a voltage of 10v. The first point to note is that no recording can be perfect; we might actually end up with a voltage of 10.05v. Furthermore, every time we make a copy from one medium to another there will be more errors. So if we make a copy of our sound we will get a slightly different value: 10.09 volts perhaps. What is worse, each copy we make will tend to get further from the original, since in the long run errors always add up rather than cancel each other out.

So, even if you had a perfect original analogue recording, you will always lose quality when you make copies. Digital recording does not suffer from that problem.

When we record digitally, we only store two values: 0 and 1. With these we build up more complex numbers. Let us suppose that we represent a one by a voltage of 10 volts and a zero by a voltage of 5 volts. Then our value of ‘10’ might be represented by a string of ones and zeroes as follows: 1010. (This is a way of representing the number ten in what is called binary arithmetic; machines are very good at understanding it, people are not). So on the tape we would have a pattern of voltages which ought to be 10v, 5v, 10v, 5v.

Once more, the actual values will probably be rather different from the ideal: 9.88v, 4.95v, 10.07v, 5.04v, perhaps. Although all the values are different from what they ought to be, there is no chance of confusing which was intended to be 5v and which was 10v, so we can easily reconstruct exactly what was meant. Not only that, it may be possible to rewrite these voltages when a copy is made so that errors do not accumulate no matter how many times the copy is made.
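
A small sketch of that reconstruction, using the 10v/5v example above (Python used purely for illustration):

```python
# Decoding noisy replay voltages back to the intended bits by thresholding.

IDEAL_ONE, IDEAL_ZERO = 10.0, 5.0
THRESHOLD = (IDEAL_ONE + IDEAL_ZERO) / 2     # anything above 7.5v is a one

replayed = [9.88, 4.95, 10.07, 5.04]         # what actually came off the tape

bits = [1 if volts > THRESHOLD else 0 for volts in replayed]
value = int("".join(str(bit) for bit in bits), 2)

print(bits)    # [1, 0, 1, 0]
print(value)   # 10 -- the intended number, recovered exactly
```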

So, although digital recording is much more complicated and difficult to achieve than analogue recording, it is well worth the effort if you can do it.
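
The copying argument can also be simulated. In this hypothetical comparison each copy generation adds the same small random error; the analogue value simply drifts, while the digital value is re-decided against the threshold at every generation and so never accumulates the error.

```python
# Hypothetical generation-loss comparison: analogue drift vs digital regeneration.

import random

def analogue_copy(voltage):
    return voltage + random.uniform(-0.1, 0.1)    # small error on every copy

def digital_copy(voltage):
    noisy = voltage + random.uniform(-0.1, 0.1)   # same physical error...
    return 10.0 if noisy > 7.5 else 5.0           # ...but the level is regenerated

analogue = digital = 10.0
for generation in range(100):
    analogue = analogue_copy(analogue)
    digital = digital_copy(digital)

print(round(analogue, 2))   # noticeably away from 10.0 after 100 generations
print(digital)              # still exactly 10.0
```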

Experiments with digital video were also taking place, but it was clear that it would take some time before it would be possible to deal with the vast amounts of information required to record and replay digital pictures. Meanwhile, the introduction of C-Format in 1978 was a major advance in postproduction tape flexibility. (The B format was used in Germany.) C-format recorded a composite signal on 1” tape using broadcast quality helical scanning, with four audio tracks and LTC on track 3.

Film stock prices continued to increase, giving rise to sentiments such as, “Film makers record on silver, video makers record on rust”.

Digital started to become ever more important as a concept, and was beginning to be linked with nonlinear media. In 1980 New England Digital's Synclavier II included the world’s first commercial audio disk recorder, offering 16-bit sound sampled at 50 kHz.

The following year Quantel produced the DLS 6000 digital frame store, offering digital pictures in a nonlinear medium (computer memory and disk), although the editing potential was as limited as the original video disk.

Experiments with digital videotape had been in progress since the early 70s. By 1982 a PAL version had been demonstrated, but the CCIR (Comité Consultatif International des Radiocommunications) Recommendation 601 pointed the way to component digital as the standard of the future. (Its recommended way of recording digital is often known as the 4:2:2 standard.) This required a bit transfer rate of 216 Mbit/s—quite a tall order.
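
For the record, the 216 figure follows directly from the 601 sampling structure: luminance is sampled at 13.5 MHz and each of the two colour-difference signals at 6.75 MHz (hence 4:2:2), all at 8 bits per sample.

```python
# Where 216 Mbit/s comes from under CCIR Recommendation 601 (8-bit 4:2:2).

luminance_rate   = 13.5e6   # luminance samples per second
colour_diff_rate = 6.75e6   # samples per second, per colour-difference signal
bits_per_sample  = 8

bit_rate = (luminance_rate + 2 * colour_diff_rate) * bits_per_sample
print(bit_rate / 1e6)       # 216.0 Mbit/s
```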

High Band U-Matic started in the BBC Film Department in Jan 1983. Betacam, originally demonstrated in NTSC format in 1982, followed in 1985, and Betacam SP, introduced in 1986, was gradually phased in throughout 1989 and 1990.

Film was losing out to tape as an editing medium in another way as well. In 1984 Montage and Editdroid were demonstrated at NAB. Montage used 17 identical copies of a set of film rushes on domestic Betamax cassettes running under time code and computer control to provide a simulation of random access editing. Editdroid used analogue videodisks. Like all nonlinear editing systems, all that was edited was the ‘play list’—the set of instructions telling the equipment how to replay the picture and sound.

The theory was that with so many copies of the rushes, there could always be one machine cued up to replay the next shot in real time. Changing the play list could be done easily, and the results seen immediately. The practice wasn’t quite so simple and the first Montage was not a great success, although Stanley Kubrick used it for “Full Metal Jacket”.
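
A play list can be pictured as something like the following (a purely illustrative data structure, not the actual Montage or Editdroid format): a sequence of instructions naming a source and an in and out point. Editing means rearranging or rewriting these instructions; the rushes themselves are never touched.

```python
# Purely illustrative play list: each entry says which source to replay,
# and from where to where. Changing the cut only changes this list.

play_list = [
    {"source": "A", "in": "01:03:10:00", "out": "01:03:14:12"},  # wide shot
    {"source": "C", "in": "03:11:02:05", "out": "03:11:05:00"},  # close-up
    {"source": "A", "in": "01:04:00:00", "out": "01:04:02:20"},  # reaction
]

def trim_out(event, new_out):
    """Shorten or extend a shot simply by rewriting its instruction."""
    event["out"] = new_out

trim_out(play_list[1], "03:11:04:00")   # the change is seen on the next replay
for event in play_list:
    print(event)
```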

Montage was reincarnated as Montage II in 1987, and Montage III has just appeared at NAB this year, using digital disk technology, which should prove to be considerably less cumbersome than the Betamax system.

Although Montage had some success with feature films, it was “Ediflex”, using a similar principle but with multiple VHS machines, which captured most of the television market in the US (Dallas, Dynasty, Falcon Crest etc.). In 1989 they introduced a PAL version and Yorkshire TV became the first British television company to use nonlinear methods in a routine way.

By the mid 80s digital audio was in full swing; the AudioFile was introduced in 1985, and digital tape recorders were also beginning to appear for specialist applications.

The first commercial digital videotape format was D1: ¾” component digital. Development of this format took place from 1982 onwards and the first working prototypes were demonstrated in 1986. Very expensive, lots of information to turn into digits. Used mainly for top level graphics work, where quality is everything. Can copy down a hundred or more generations without noticeable degradation.

Component digital was not only expensive, but when it was introduced it lacked component digital vision mixers to permit a complete component digital post production chain. It also lacked a number of features which broadcasters had come to expect; variable speed broadcast replay only went from -¼ to +¼, for instance. So the less technically demanding composite digital formats were developed.

The first of these was D2, a ¾” composite digital format, using the same cassette shells as D1. This was developed between 1985 and 1986, and the first production PAL versions arrived in the UK in 1989. Not quite as good as D1, but perfectly adequate for everyday work. Can copy down twenty or more generations without noticeable degradation.

The latest of the composite digital formats is D3 which uses ½” tape. It went into production earlier this year. The quality is similar to D2. Like D1 and D2 it has four editable digital audio tracks. Picture search at 100x, -1 to +3 variable broadcast speed. Chosen by BBC, partly at least because it takes up less room than D2.

So far that is the end of the digital videotape story but there have been persistent rumours of a ‘D4’ which will be ½” component. Indeed, some surprise was shown when no such product surfaced at NAB this year. If it ever does appear it will probably be the most technically desirable of all the formats, but not necessarily the most commercially successful. It may also represent the end of the line for videotape development, with the big money chasing after mass nonlinear storage as the next big development in digital broadcast video.

However, digital disk for non-broadcast video is already with us. In 1989 the Editing Machine Corporation introduced the EMC2 and the Avid Corporation introduced the Avid Media Composer. Both were based on personal computers (IBM compatible and Mac II, respectively) with graphics boards, digitizers, big hard disks, and some pretty smart editing software. Neither was really up to the rigours of professional editing, but both excited the editors who saw them. For the first time both film and videotape editors could see a tool which each could use and each could see would be an improvement on their existing methods.

Appendix

Selective Time Line of Post Production Development from the end of World War II

1946

7th June; TV started again, 28 hrs programmes/week

Film unit set up

1947

Off-tube filming invented; Nov: 1st use of telerecording for wedding of Prince Philip & Princess Elizabeth

1948

5th Jan; First Television Newsreel

March; Film department and OBs split up to become independent departments

1950

30th Jan; 1st Children's Newsreel

1954

"War in the Air"—first major TV documentary series—15 episodes; a breadth of treatment which cinema or sponsored documentary could never match

1955

22nd Sep; ITV started

BBC has 10 camera crews, 14 cutting rooms

JVC starts video recorder development

1956

27th Jan; BBC buys Ealing Film Studios

Quadruplex tape invented by Ampex

1958

VERA

BBC gets quad VTs

16mm Arriflex + EMI 1/4" tape recorder

1959

JVC develops two-head helical scan system

1960

June; Television Centre opens

1964

20th Apr; BBC2 starts

1965

Adoption of Incollatrici tape joiner

PicSync introduced

1966

"Cathy Come Home"—1st all-film TV play

Video disk

1967

July; colour launched on BBC2

A-D videoconverters developed(?)

1968

Time code introduced in BBC

1969

Low band U-Matic

1970

Ampex VR 3000 portable recorder and Fernseh camera to give first 'PSC' (Portable Single Camera) at Mexico World Cup

CMX computer-based edit controller

1972

Charles Curran's address to IEE

First 'ENG' from Munich massacre with VR 3000 & Fernseh

1973

First SYPHER suite introduced

1976

VHS (Video Home System) introduced

First fully digital time base corrector from Ampex

1977

High Band U-Matic (Europe only)

BBC TV News did one year ENG experiment

Synclavier digital synthesiser introduced by New England Digital

1978

C-format introduced

1980

BBC News adopts ENG

Synclavier II includes first commercial audio disk recorder offering 16 bit 50kHz

Quantel 5001

1981

M format (Panasonic)

Quantel launch DLS 6000 digital still stores

1982

Betacam

1st PAL digital VT demonstrated

1983

SP U-Matic

Film Department started using High Band U-Matic

Dave Bargen given Emmy Citation for the development of 409 and Trace EDL software

1984

Montage and Editdroid shown at NAB

1985

Ediflex in operation

AudioFile introduced

BBC Film Dept introduces Betacam

First digital production switcher from Thomson

1986

M-II

Betacam SP

First prototype D1 machines from Sony

1987

Montage II introduced

1988

NTSC D2 machines introduced by Ampex

S-VHS

1989

First production PAL D2 machines

EMC2 introduced

AVID introduced

Jemani Flix nonlinear editing system under development

1990

March; NTSC D3 shown at NAB

November; PAL D3 shown to EBU

Eidos demonstrated

Sony Offline Work Station demonstrated

Transputer-based AudioFile PLUS

1991

D3 delivered to BBC

Lightworks demonstrated

Montage IV demonstrated at NAB