Video Stratigraphy: Working with Multigenerational Video

Sunday, January 7th, 1990

(delivered to the Society of Motion Picture and Television Engineers Post-Production Seminar, Orlando, Florida, January 1990)

I got married in December–to an archaeologist who doesn’t even own a television set. Well, I guess she does now. And I learned a great new word from her, a piece of archaeological jargon that you might find of some use. Here it is: stratigraphy. What it means is: the study of layers. You see, that’s what archaeologists do…they uncover layer after layer of ground, of sediment, of pot sherds and human remains and charcoal and stone and all of that…stuff carefully set down by civilizations long gone and forgotten.

After listening to her explanation of stratigraphy I realized that she and I indeed had a lot in common, because these days, video graphic design is the art and science of creating delicate layers of moving video, one atop another, in perfect synchronization. Video stratigraphy.

It seems as if we’ve been messing with the idea of layers of video for as long as the medium has been around–certainly for as long as I’ve been around the medium. I remember my first experiments with layering–really just playing around with an ACR cart with a friend–a fellow master control switcher in the studio at WTCG. We went back and forth between the two decks of the ACR, recording on one, playing back on the other, and then back the other way, each time adding another layer of this guy, and the result, which at 2 in the morning seemed pretty cosmic, was also, unfortunately, a great example of the big drawback of quad videotape–in fact, of all analog videotape: what you get out is less than what you put in. And, as a free bonus, you get noise, dropouts, banding (back then)…all kinds of artifacts that mess up your stratigraphy.

It’s interesting–despite years of experiments on MTV, I think it can be concluded: noise in video doesn’t look arty, or attractive, or neat, unlike grain in film. It just looks…noisy.

So we’ve been frustrated with the idea of decay, and much of what we’ve done over the years to achieve what we idealize as a `first-generation image’ is to use as many discrete sources as possible, combining them only at the final `mastering’ point to create a finished composite that was as `clean’ as we could make it. Lots of tape machines rolling in sync, and a switcher with lots of keyers to get it all together, on one piece of tape, in one pass.

So much of the kind of television I do–graphic design for television–is created by many, many layers of material, one atop the other. The reason for this has a lot to do with good design. Design with subtle colors, textures, shadings. And the one quality that design on television has that print can’t quite duplicate–movement. A great piece of television graphic design has, to me, the qualities of a well-choreographed ballet. It’s subtle. It’s complex. It has small things you don’t notice until the second or third time you see it.

I’m going to assume that you’re here today–thank you for coming here today, by the way–because you’re involved in creating this kind of graphic material, too. It may be you’re working to get together a facility that can do what people call, somewhat magically, “computer graphics,” and you don’t want to get the wrong equipment to do the job. It may be that you’ve got a piece of design you want done and are somewhat fuzzed out on all the buzzwords these days that delineate approaches to getting it done. Gee, do I want to do it on the Wavefront or on the Harry or on the Mirage or on the Abekas? D1 or D2 or Beta or M2? 3D or 2D? It’s easy, given this array of blurry options, to throw up one’s hands and say…”uh…whatever.”

So let’s sort out approaches today. And let’s start right off by saying that there isn’t one approach that works for everything, and as a subset of that, there certainly isn’t one approach that’s cost-effective for everything. (I’m more conscious of that these days in my role as a freelance bum.) It’s important to look at what you want to do–whether we’re talking here about just one project or a place to do a whole range of projects–and see exactly what it takes to get the job done…or those jobs done.

As we go along, I also want to examine our options in terms of developing technology. Television and graphic equipment now isn’t what it was ten–or five–years ago, and this incredible upward spiral will certainly make a lot of the particulars of this discussion obsolete in a few years. What won’t become obsolete, however, is the overall trend toward simplification, universality, and cost-effectiveness. Like any cutting-edge thing, as time goes by, the cutting edge gets further out there and what was the edge becomes easy, available, affordable, and understandable. I’ve only been doing this television stuff for about twelve years now, and back when I started, a videotape machine was a device to be operated by wizards, amazing people with arcane knowledge and nifty pen-protectors on their pockets. If you were just a producer or a director…or, hey, a graphic artist …you kept a respectful distance from these guys. Now, of course, it seems as if everyone knows the basics of videotape, and the operating controls of a home VCR aren’t all that different from a Sony D2 machine. Well, not too different.

And what’s interesting to me is how these trends of technological development are bringing a number of formerly diverse fields together. You may or may not be aware of a parallel revolution in how print graphic designers are creating their work. Like their broadcast counterparts, they used to produce print artwork with crude tools, paper, and pencil for the most part….also Letraset and border tape and lots of stats and film and chemicals…and like the videotape wizards, their craft had an air of mystery about it that kept the fundamentals away from a wider audience.

Now, they’re going through the same revolution that television designers did when the first paint systems and character generators appeared. They’re sitting in front of screens–in front of desktop computers–and manipulating type and color and texture in the same way for print. You may be asking why that’s important to you, a television person. We’ll get back to that a while later…right now, it’s just nice to know that television and print people…and motion picture people, for that matter, are going down converging technological paths. Everyone benefits from that kind of synergy.

But back for a moment to the old days, back to…uh…the late seventies, back when the personal computer was just something for engineers to tinker with back in the shop when they could be putting new tubes in the film chain.

So you had designers who were not TV people. And TV people who very definitely were not designers. And since the first pieces of television graphic equipment were cranky, cumbersome, and designed to be operated by technicians, they could put letters on the screen or move pictures around in a very basic, low-res kind of way, but the results weren’t all that aesthetic. The people operating them didn’t know a serif from a sans-serif, and it didn’t make much of a difference to them if they typed a name super in flashing purple all-caps letters–at least it was up there without having to shoot a camera card, right?

Lucky for me, I came at this revolution in graphics technology from a couple of unorthodox directions. I was a journalism major in college, and always expected to be working at a newspaper someday. And I worked, just for fun, at my school’s Public Television Station, in operations, switching, loading slides, running camera. And my first job out of school was–hey, I took what I could get–as a master control operator at Ted Turner’s cable superstation in Atlanta, then called WTCG.

I always had an interest in graphics and design–especially typography–but I never took any formal instruction in that field. Instead, I was lucky enough to have a TV station to play with in the middle of the night, and I was able to put the results of my experiments on the air, where a lot of people saw them, without my getting fired. A great place to learn about what worked in television and what didn’t–and right from the start–and this is why I’m giving you way too much of my life’s story up front–I was sure that the rules and the subtleties of good print design also applied to broadcast.

That’s what led me down the path of trying to coax clean, complex, high-resolution images out of equipment that engineers said `wasn’t designed to do that, and why do you want to do that anyway?’ These days, things are much easier, and lo and behold, engineers are beginning to appreciate the subtleties in a graphic image in the same way that a perfectly shaded camera brings a smile to their face. A nice, big, clean anti-aliased word, letters tucked together perfectly, with subtle shading and light sources. Nothing like it. Clean video, no matter what the source.

Now, we look back on those pre-digital days as “back when we made graphics with rocks.”

Archaeology and stratigraphy again.

But a lot of what I learned from those early days about keeping an image clean through the food chain–excuse me–through the chain of old cameras, switchers, and tape machines–still applies in this luxurious world of the future where I can sit down and create perfect digital layers until the cows come home.

Which is why we took that particular left turn before we got to where we are now. Which is: you’ve got this graphic work to do. You want to get it done in a spectacular up-to-the-moment state-of-the-art groovy way that will impress your client or boss or creditor or whomever.

And you want to use digital…uh, something, right? You’ve heard that staying digital–that is, keeping material in a digital form throughout the production process–is the key to keeping things clean as long as possible…at least until it gets broadcast or cablecast and gets watched on an old 1967 RCA color TV with rabbit ears.

OK. Great. Maybe we’re talking about an open for a show, or a design for an entire program. I want to make the point here that it’s important to think of what you’re creating in context–that is, it doesn’t make sense to me to create the fanciest, trendiest open in the world and then plop it on the front of a show that has a set, still graphics, name supers, and credits in a totally different style from that open. Seems to me these days there’s a lot of this going on, where someone has the budget and goes out and gets this one thing–which doesn’t relate at all to the rest of the show.

When people who aren’t in television ask what I do, I usually offer the explanation that graphic design for television is a lot like wallpaper–when it’s all just right, you may not notice, but if it’s wrong, or if one element stands out like a sore thumb, then it’s just like having your living room–or the viewer’s living room–ruined by this ugly piece of graphic art.

Conversely, Rembrandts don’t look too good in house trailers next to paintings of Elvis on black velvet.

So much for My Philosophy of Television Graphics.

One of the big questions you should be asking yourself at the early stages of a design project is: to 3D or not to 3D. Actually, with apologies to Hamlet, it’s not an either-or question these days. Although the use of 3D animation has been on a steady upward curve since its first tentative steps early in the eighties, it still remains too expensive and too complex a technology to use indiscriminately. That doesn’t mean that it isn’t used indiscriminately sometimes, just that it shouldn’t be.

This is as good a point as any to admit that I’m a bit of a stick in the mud about 3D. I’m a big fan of 3D animation, but I use it in my own work very, very sparingly. Part of the reason is budgetary, of course, but part of it is just plain design. It seems to me that there’s way too much of this “let’s fly around a really big logo” just for the sake of flying around a really big logo. That is, I always like to get somewhere in an open for some reason. I know that sounds a lot like “what’s my motivation in this scene?” but c’mon, is flying around the huge words “Home Shopping Spree” as if in a helicopter for 15 seconds really an open for that show? Does it really tell you something useful about the show? Does it really set the scene? Does it, in short, get the job done?

Well, sometimes you find yourself working on a show open that defies any attempts to depict it graphically, but I always try to give it my best shot. If nothing else, I like to include enough layers of visual information to communicate a general impression, a feeling, a mood. In a five or six second open, you may not be able to communicate much more than that, but I prefer that approach to “look, here are the letters that spell out the name of the show. They’re really big. They’re really shiny. Let’s fly around them in a helicopter for a while, shall we?”

The added plus to including these subtle elements is that, for the most part, opens run a lot. Week after week, or day after day, or, in the worst-case scenario of a project I did last year, forty-eight times a day. If all there is to the open is “look, here are these letters,” then the viewer gets burned out on it real fast.

But when you’ve got a limited budget, and for some reason you’re determined to use 3D to get the job done, sometimes all you can afford is one simple move around one simple element. If it’s well-designed, if it makes sense visually, that can be fine. And as the cutting edge in 3D technology moves on down the road, I can definitely detect a downward trend in the amount you have to pay for high-resolution 3D animation–if you know where to look, and if you know what shortcuts you can take without losing quality in the finished product. But that doesn’t mean that an attitude that says “I don’t care what it does, as long as it’s 3D” is a good idea.

Instead, for many of these kinds of projects I would advocate using 2D techniques in 3D ways to achieve animation that has depth, complexity, subtlety–at a more reasonable cost. That doesn’t mean there isn’t a time and a place for 3D animation–when I need it, I figure out exactly what I need, I budget for it, and I plan it so it can be smoothly and seamlessly integrated into backgrounds and the rest of the graphics in a package. But more about that later…let’s look a little more closely at the tools for doing 2D graphics well.

I mentioned “doing graphics with rocks” earlier, the era where we used press-type, things shot on camera, switcher wipes, holes punched in a card, monitor feedback, crosshatch from test generators, anything we could get our hands on to give our graphics a sophisticated look. That era ended for me personally between 1982 and 1983, when I finally got my hands on a couple of devices that were introduced just about at the start of the decade, and, I’m here to tell you, seem to remain the industry standard to this day.

I’m talking about the Ampex ADO and the Quantel Paintbox. The end of “graphics with rocks.” Now, before this starts sounding like too much of a commercial for Ampex and Quantel (two fine, fine companies), let me acknowledge that a number of paint systems and DVEs–that is, digital video effects units–have come on–and gone off–the market since. And the ADO and the Paintbox commanded huge price tags when they were introduced–prices that have dropped only slightly in the succeeding years, when technological progress has surely made their cost of manufacture now a fraction of what it was then. But these two have endured. Why?

Well, the ADO was the first to take a live picture and move it around in true three-dimensional perspective (or darn close to it) fairly transparently–that is, the picture you got out of the ADO was only slightly worse than the one that went in. There were other units on the market at the time that compressed and positioned a live video picture, but this one let you think of the picture as existing in a huge three-dimensional world of its own, one that you, the camera, could move around in as you looked at this rectangle of video–and, for that matter, one that this rectangle of video could move around in on its own. It established a coordinate system–numbers–that described where you were and where the object was in a way that set a standard–and prepared us for the very similar three-dimensional coordinate systems used by high-end 3d systems like the Wavefront and the Symbolics. It was a cool toy, and then some.
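The coordinate idea is easy to sketch in code. Here’s a minimal Python toy–my own illustration, not anything from Ampex–that treats the video picture as a rectangle of four corner points in a three-dimensional world, rotates it, and perspective-projects the corners back onto the screen. The numbers (a 4:3 rectangle, an arbitrary camera distance and focal length) are all assumptions for the sake of the sketch, but the math is the same kind any of these coordinate-based systems is doing under the panel.

```python
import math

def project_rectangle(yaw_degrees, distance=4.0, focal=2.0):
    """Rotate a unit 4:3 video rectangle about its vertical axis and
    perspective-project its corners onto the screen plane."""
    corners = [(-1, -0.75, 0), (1, -0.75, 0), (1, 0.75, 0), (-1, 0.75, 0)]
    a = math.radians(yaw_degrees)
    projected = []
    for x, y, z in corners:
        # rotate about the y (vertical) axis
        xr = x * math.cos(a) + z * math.sin(a)
        zr = -x * math.sin(a) + z * math.cos(a)
        # push the rectangle away from the camera, then divide by depth:
        # the farther a corner is, the smaller it draws
        depth = zr + distance
        projected.append((focal * xr / depth, focal * y / depth))
    return projected

flat = project_rectangle(0)     # picture flat-on: a plain rectangle
turned = project_rectangle(60)  # turned away: the far edge shrinks
```

The perspective divide is the whole trick: at 60 degrees of yaw, the near edge of the picture projects taller than the far edge, which is exactly the “rectangle of video floating in space” look the ADO made famous.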

And so was the Paintbox. Its lasting contribution was the ability to capture a real-world video image, again, pretty darn transparently, and then use it as a canvas for your painting. You could pick up colors from it, subtly airbrush it. Cut a part of it out and put it somewhere else. Quickly, without having to go out for coffee while the machine crunched numbers. Oh, and the other unique thing about the Paintbox–a very smart design move–was to create a way for video illustrators to do what print airbrush illustrators can do–precisely mask off and work with a very small part of an image. The ability to take a portion of a station logo–say, just the edge–and apply a smooth color gradation to just that portion as if the rest were covered with masking tape was a clever innovation that Quantel still tenaciously holds patents on to this day.

Since the early eighties, the ADO has picked up some options to keep up with the competition–most notably the ability to control multiple channels and what it calls the `Infinity’ package, which is an additional framestore that lets you do all kinds of goofy trails and sparkles and delays off the edge of an ADO image that you usually see on used car spots–or “Star Search.”

The Paintbox has seen a couple of software improvements over the years–but just this past year, the Quantel folk have released a version called the `V Series’ that is basically a redo of the whole box, with a lot of custom chips in a much smaller and faster package–and hey, it’s only about two-thirds of the hundred and fifty or sixty grand we paid for one back in 1985. Progress.

It’s important to understand just what a paint system is. It is not an `automatic converter’ of video, creating rendered type, airbrushed people and cities at the touch of a button. It is definitely an illustration tool that creates work only as good as the operator behind the bitpad. It’s also not instantaneous–I’ve run into a number of producers who seem to think that crisp, clean Paintbox illustration is a matter of five minutes, maybe ten. There are some very fast Paintbox artists out there, capable of cranking on deadline pressure–but if your project is a graphic that should withstand the test of time–if it’s part of an open, for example–you should be willing to budget for the time it takes to do the job right. One consequence of rushing the paint work is something I like to call `blurry paintbox’–you’ve all seen it. Merely taking a frame of video and hitting it with the airbrush, a little color here, a little scribbly stuff there–that doesn’t usually yield an image that’s better than the one you started with. Like most computers, the quality of the image out is no better than the image in…and that’s why it’s important to capture images from crisp, clean originals. Not sloppy 3/4″ dubs…not fuzzy Xeroxes of logos. This is one place where a little extra effort in pre-production pays off–not only in a cleaner rendered image, but in less time on the paint system cleaning it up.

There are other paint systems out there, although it’s been my experience that the majority of post-production facilities use the Quantel Paintbox. Most of them–the Ampex AVA3, the Aurora, the paint portion of the Symbolics software, the Artstar, which a lot of TV stations picked because hey, they bought weather computers from the same people…most of them have a subset of most of the Paintbox’s features, but usually with a speed or interface penalty. By `interface penalty’ I mean that it’s a big pain to do certain things, like work with cutouts, interact with stencils, or do type. It is safe to say that none of these has as smooth and subtle an airbrush as the Paintbox–especially the new V Series model. A number of the lower-end models–and I definitely include the weather-computer-based systems here–have, in my opinion, no business at all being in a television station or production environment. Their sluggishness, miniature storage space, and clunky bitpads add up to something like the Fisher-Price version of a Paintbox–in other words, they’re toys. I’ve seen artists at stations–usually in smaller markets–stuck with the lower-end systems, trying to crank out graphics for a news broadcast on deadline, and believe me, it’s not a pretty sight.

It does seem that the ADO doesn’t have the field as much to itself these days as the Paintbox does. A number of contenders in recent years have held Ampex’s feet to the fire, and the technology seems ripe for another quantum leap (or Quantel leap?) in features. I’ve seen a growing number of facilities with the Abekas A53-D, a DVE system with most of the features and feel of the ADO–and it offers a `warp option package’ that does nifty page turns and curls, and has the advantage of a very sensible `live control room’ interface. Disadvantages? The picture quality, especially in an enlarged picture, is, to my eye, not as good as an ADO’s…but close, very close. The DVEs at CNN and Headline News are all A53-Ds.

Then there is the Kaleidoscope from Grass Valley–in theory, everything a Digital Video Effects device should be. It’s a big mama in the racks, and it integrates seamlessly into a Grass Valley 300 production switcher or runs out of its own box, which looks like a small Grass Valley switcher. The positioning and coordinate system is very ADO-like, which is good, in my view. It’s a very, very clean (and expensive) system, and has a lot of flexibility in terms of component or digital inputs and outputs. It also has a built-in feature that you see on a lot of LA-produced stuff these days, putting a `glow’ or `highlight’ across the picture as it turns…but, as you’ll see in a minute, you can do the same thing with a switcher wipe half-dissolved out. The Kaleidoscope seems to have the architecture to grow into a remarkable machine, especially in light of developments and the possible synergy between Grass Valley and Sony.

What synergy? Well, one device that shows what in Southeastern Ohio we would call `po-tential’ is the Sony System G, a high-end picture manipulator positioned to compete with–and surpass–the Quantel Mirage, which is, you may know, a very high-end system for wrapping pictures into spheres, Coke bottles, and all kinds of other goofy shapes. The Mirage is, in my experience, a clever machine that is cranky, difficult to program even with the newer software, and often frustrating in that it will give you almost what you want–an almost perfect, but noisy, sphere. If you handle the Mirage with tender loving care, which is what a number of large post houses have done, it is possible to get some great stuff out of it. On the air, it’s most often seen folding and sphere-ing and ripping on Entertainment Tonight. But back to the Sony product, which I understand will be at this year’s NAB in a more fully functional form. It uses parallel processing–that is, a symphony of tiny chip-computers all pumping numbers together–to achieve real-time texture mapping, creation of these strange shapes, mutating one shape to another, all under mouse control. For four hundred grand or so, the System G could be a box that makes it easier to do a lot more 3D-esque things without going to the high-end rendering equipment.

We pause here for a warning from the graphics police: This new machine, like so much of this stuff, is an example of what graphic designer Harry Marks likes to call “dangerous in the wrong hands.” Just because you can wrap Peter Jennings’ head into the shape of a Coke bottle doesn’t mean you should do it. All too often with a new piece of equipment there is a natural tendency among tech types to play with it. All well and good–I feel that playing with TV stuff is the best way to learn how to run it–but then some idiot says “gee, that purple and green modulated switcher wipe looks great–let’s put it on the air.” Just say, uh-uh, please. It’s not enough to do something “just because we can.” End of warning.

Somewhere back in those last few paragraphs I mentioned `type’, and in some ways it’s surprising that I’ve waited as long as I have to talk about my favorite subject. I’ve always been fascinated with letterforms and typefaces. Elegant, curved metal type in fine magazines and newspapers, precisely spaced…huge, perfectly formed letters on billboards. Type in and of itself is not only everywhere we look in modern life these days, it is art in its own right. And on television in the early seventies, the state of that art was…the Vidifont, from what was then CBS Labs. Now the Vidifont was a remarkable technological achievement in its time…but the resulting squared-off letters on the screen were distinctly low-res. They were, it seems amazing to consider this now, digital letters created from the intersections of tiny copper wires in the machine’s core memory. Two fonts, big and small, and the small was always all caps. It’s no wonder that when the Chyron–eventually, the Chyron IV–made it onto the scene, we all breathed a sigh of relief. Here was a machine that could reproduce different typefaces–well, kind of, sort of `typefaces’, well…actually camera captures of letters from type catalogs and God knows what other unlicensed sources. And the letters, well, they weren’t that bad…a bit jaggy, actually, pretty darn jaggy, but hey, easier than stats and camera cards.

And then somewhere in there in the mid-eighties, this synergy I’ve been talking about came into play. The phototypesetting business, land of those print people, was undergoing its own quiet revolution to digital systems that used digitized typefaces based on the outlines–what they called curve descriptions of the thousands of typefaces that print folk use. These machines used the curve descriptions–also called vectors–of a font to create type proportionally of any size by very quickly mathematically scaling the curves, and then converting the outline to a rasterized font–that is, a bitmap of ons and offs–at the thousands of dots per inch resolution that print requires.
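To make that two-step process concrete, here’s a little Python sketch–my own toy, not any real typesetter’s code–of the idea: take a curve description (here, a single quadratic Bezier segment standing in for one stroke of a hypothetical letterform), scale its control points to whatever size you want, then sample the scaled curve into a bitmap of ons and offs.

```python
def bezier_point(p0, p1, p2, t):
    """Evaluate a quadratic Bezier curve, a typical outline segment."""
    u = 1 - t
    return (u*u*p0[0] + 2*u*t*p1[0] + t*t*p2[0],
            u*u*p0[1] + 2*u*t*p1[1] + t*t*p2[1])

def rasterize_outline(segments, size):
    """Scale the outline's control points up to `size` pixels, then mark
    every grid cell the curve passes through -- a bitmap of ons and offs."""
    bitmap = [[0] * size for _ in range(size)]
    for p0, p1, p2 in segments:
        scaled = [(x * size, y * size) for x, y in (p0, p1, p2)]
        for i in range(200):
            x, y = bezier_point(scaled[0], scaled[1], scaled[2], i / 199)
            bitmap[min(int(y), size - 1)][min(int(x), size - 1)] = 1
    return bitmap

# one arc from a made-up letterform, stored in 0..1 "em" units
arc = [((0.1, 0.1), (0.5, 0.9), (0.9, 0.1))]
small = rasterize_outline(arc, 8)   # the same curves, any size you like
```

The point of the exercise: the stored outline is size-independent, so one set of curves can feed a phototypesetter at thousands of dots per inch or a television frame at a few hundred scan lines–which is exactly why those digitized fonts turned out to be so portable.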

It turns out that once these outlines have been digitized, they’re very portable between systems. And that means portable to television systems as well as print. I think Quantel was the first to realize this, and their initial offerings of text on the Paintbox were conversions from these phototypesetting outlines. All well and good, except that on a Paintbox, you couldn’t make type bigger than 72 scan lines without blowing–and blurring–it up. A ridiculous restriction. But then Chyron came along and realized that although the Chyron IV was the industry standard, the cutting edge was passing them by. They released the Chyron Scribe–which uses smooth, anti-aliased fonts created from digital outlines supplied by a firm in Boston called Bitstream–who has thousands of them. Suddenly, a television character generator was available that could produce clean, anti-aliased representations of real typefaces…that is, fonts designed by type designers dating back several centuries–and, most importantly to me, it could make letters on the screen as big as the whole screen. Me, I like big letters. In fact, a lot of my design is based on looking at big, big type. So I became a big fan of the Scribe, even back when Chyron was almost keeping it a secret, for fear it would hurt the sales of the Chyron IV, which they were still trying to hustle.

And at this moment, the competition for these high-resolution character generators is, to say the least, heating up. Chyron is continuing to soup up the Scribe’s processing and manipulating power, and they promise that the new Infinit! system–which has to be the goofiest name I’ve heard since `Harry’ for a piece of equipment–will do all sorts of neat character-display stuff fast, fast, fast. As with the Kaleidoscope, I detect here the architecture for a machine that could end up doing all kinds of things beyond just throwing up letters onto the screen. Fine, as long as it keeps doing that well.

Meanwhile, Abekas has released, finally, the A72, which, according to my secret Abekas decoder ring, stands for `neat character generator.’ It’s a flexible machine that uses huge hi-res bitmaps from a well-known type supplier called Compugraphic to create type on the screen that it scales up and down in size nearly instantaneously. It also deals with character transparency and animation in fairly intuitive ways. The A72, like the A53-D, does a lot of things right and gives Chyron a real challenge in the market.

Quantel, meanwhile, has never had much luck with character generators. It released the Cypher several years ago, and although its processing system is virtual overkill, it had then the clunkiest interface for a character generator I had ever seen. Getting one line of type on the screen was a huge ordeal, and getting the letters squished together right was yet another one. But it got a new lease on life when, for the 1988 Summer Olympics, NBC said `we’ll use it if you completely redesign the interface.’ It may well be better now, and its ability to use cutouts created on the Quantel Paintbox is a plus. Still on the minus side, though, is Quantel’s now comparatively tiny library of typefaces, and the heavy-duty charge they place on obtaining new ones. The machine itself ain’t cheap, either.

And Ampex has a character generator almost out there called the `Alex’–another goofy name. I know very little about it at this point, but I expect it will become a more `real’ product at this year’s NAB.

That concludes the Consumer Reports portion of our program. Well, not quite, because although I’ve rambled on at some length here about some of the tools that make clean graphics, I’ve neglected the ways and the means to get all these neat things layered together on one piece of videotape.

Yep, we’re back to layers again. And a pop quiz: what was that archaeological term? Stratigraphy. That’s right. Now try and spell it.

While you’re trying, let’s discuss two basic paths, two basic roads toward `first-generation’ layered graphics. What exactly do we mean by `first-generation’, anyway? Well, of course, the term came into use when talking about working with videotape, because, again back in those early days, a recording, on quad or that newfangled one-inch, looked pretty darn good in its first generation–that is, when it went from the camera to that piece of tape and that was it. But then, as part of the post-production process, you had to play back that piece of tape and record on another one–you know, to add dissolves and graphics and stuff? And every time you went through that playback-on-one-machine, record-on-another cycle, that was a generation. The picture quality degraded, some. Every time. And if you were talking about a sitcom recorded live on tape before a studio audience, you might be talking six or seven generations before it actually came into America’s living rooms. And doing that to camera video was one thing–but the crisp edges and sharp transitions of graphics showed the errors and degradation even more. (After all, what is a test pattern but…a graphic?) These errors–stop me if I’m telling you the obvious–come from the analog process–and that is, indeed, the great promise of digital: once you get it across that analog-to-digital doorway–once the picture becomes binary numbers–well, then you can do all kinds of stuff with it and nothing will degrade it until it crosses that doorway again, back out into the cold cruel analog world.

So that’s the challenge…keeping things in the digital world as long as possible, and minimizing the need to cross that threshold, from analog to digital–because the very process of crossing introduces some noise and error into the picture.

That’s why Abekas introduced the A62. And why Quantel introduced the Harry. I don’t know what’s worse, code numbers or goofy names. Two approaches to digital layering, each with their own advantages, and each found (sometimes side-by-side) at many major post-production houses.

The Abekas is, basically, a digital keyer placed between two hard-disk drives that simulate two videotape recorders. You record something on one `side’ of the A62, and then play back that `side’ and record on the other, while adding something new from the outside world. Once safely inside the A62, the video does not degrade, no matter how many hundreds of times it is passed back and forth between the two sides of the A62. The hard disks have a capacity of 50 seconds of composite NTSC video on each side, long enough for most chunks of animation, I’ve found.
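The back-and-forth between the two sides can be sketched in a few lines of Python–a toy model of a digital keyer, with pixels as numbers between 0 and 1 (the real A62 works on composite NTSC samples, of course, but the arithmetic idea is the same):

```python
def key_over(background, foreground, key):
    """A digital keyer: key = 1.0 shows the new layer, 0.0 keeps the old."""
    return [k * f + (1.0 - k) * b
            for b, f, k in zip(background, foreground, key)]

# Side A holds everything layered so far; side B records the next pass.
side_a    = [0.2, 0.2, 0.2, 0.2]   # existing layers (a dark background)
new_layer = [0.9, 0.9, 0.9, 0.9]   # a bright element from the outside world
key       = [0.0, 1.0, 1.0, 0.0]   # the key signal: where the element goes

side_b = key_over(side_a, new_layer, key)
print(side_b)   # [0.2, 0.9, 0.9, 0.2] -- and side B becomes the next side A
```

Each pass, the result becomes the background for the next layer–and since it’s all arithmetic on stored numbers, the hundredth pass is as clean as the first.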

Meanwhile, at Quantel, they knew they had a good thing with the Paintbox, so they designed a layering machine–but–and this is a big but–using the Paintbox as the keyer, as the focal point between the layers stored off on those huge hard disk drives. This approach has certain advantages–it certainly is easy to do subtle keying, pasting, and retouching on the Paintbox–but the Paintbox (and thus the Harry) composites elements using the whole stencil thing I mentioned much earlier–that is, it needs to have everything on its hard disk drives–including the white key signal to define what layer goes where–before you can actually say take strip `A’ here with matte `B’ there and put it over background `C’ here and store the whole thing off onto strip `D.’ To make a really strange analogy, you have to have all the ingredients in the refrigerator before you can begin to make sandwiches.

I have to say that I am more used to the Abekas approach, where it just catches whatever you toss in `on the fly’ from the outside analog world and lays it down as one more digital layer. And from a hardware standpoint, the smart guys at Abekas included software that made it interface to the CMX editing system as if it were two plain old tape machines–very clever, and a good way to think about it. Two tape machines and a switcher, and you go back and forth, but the video doesn’t degrade into mush. Until it leaves the A62.

Or until it leaves the Harry. And in fact, for the first year or so of these products’ existence, they both had the same drawback, sure to drive compulsive types like me up the wall. Once you got it into that perfect digital world, you never wanted it to leave, because if it did, you could never bring it back without losing some quality. Both manufacturers experimented with computer tape-drive backup systems, you know, but they were expensive, cumbersome, and had very little capacity. Both Quantel and Abekas seemed to be waiting for a digital tape format that they could interface with and pass binary numbers to–and get those numbers back from–perfectly.

Leave it to Sony, right? Well, Harry, as a component system–that is, one where the pictures were stored internally as R,G,B signals…got its D1 format first. But it wasn’t long before Sony’s composite NTSC format, D2, made a perfect match for the A62.

(I should explain that there is a component version of the A62 from Abekas, called the A64, but that begins to complicate things. For now, let’s stick with the A62.) Mostly because that’s what’s installed at the facility I use, and I’ve found that the trend seems to be this: if you’re fitting digital into an existing post-production suite, it’s best to stay composite, but if you want to build the ultimate component room from scratch, then component digital–D1–is really a cleaner way to go. And with D1, one new alternative I’m doing some studying on is the new Abekas A84 component digital switcher–a high-end unit that offers remarkable super-subtle keying, color-correction, and multiple layering capabilities.

So now we’ve got a viable system, right? And now, it may relieve some of you to see, we have some visual aids, too. I want to show you a more-or-less block diagram of the setup I often work with. It’s a plain old composite post-production suite, controlled by a CMX editing system, that has entered a digital world.

[slide 1] Start with source material. This place has a Quantel Paintbox, a Chyron Scribe, Betacam SP and one-inch machines, an Ampex Century switcher, and two Sony D2 VTRs. And except for the initial pass, when you’re creating layered animation, you want everything to move. So, everything goes to and through the ADO in analog form. That’s an important point from an engineering standpoint. Everything is only as clean and as transparent and as tweaked as the ADO–which, at this facility, has its good and bad days.

[slide 2] In the case of the Paintbox and Scribe, or any irregularly-shaped object that you want to fly through the ADO, the key signal goes into the ADO, too, and…

[slide 3] Analog NTSC video and a key signal go out of the ADO and into the Abekas A62. Here’s the doorway into the digital world. And once in the A62, we can add more video from these sources, through the ADO, over and over.

[slide 4]…and when we want to get that video out of the A62, we can, of course, record it–master it–on plain old one-inch or beta, or, preferably, transfer it digitally to the Sony D2 videotape machine–by the way, Abekas sells an add-on `black box’ option that makes this possible. And the neat thing is, if on another day you want to go back, or reload an intermediate layer, you can transfer it digitally back into the A62 and pick right up where you left off. Very powerful ability.

[slide 5] So this means we have, in this hybrid system, an analog pathway, subject to loss, noise, hum, and general signal degradation until the last stage–but it works, because it’s digital where it counts.

[slide 6] I guess in compact disk terms, this is an `A A D system.’ We could do better, though. There’s talk of a `black box’ interface between the ADO and D2 format that would make the link here between the ADO and the A62–or the D2s–a digital one…and that would make a perceptible difference.

[slide 7] And compare this to the Quantel Harry setup, an even more idealized system, because once you get source material into the Harry, either directly or through one of the Quantel picture manipulators, like the Mirage, or the Encore (which is, by the way, their ADO equivalent), the pathway is digital for both the video and the key signal into the Harry…

[slide 8] …and out of the Harry and to and from the Sony D1 component videotape machine, we’re again talking a no-loss digital pathway, using the component digital standard, known by the in crowd as `601’.

[slide 9] So the Harry/Encore/D1 combination is an `A D D’ pathway, except for stuff that starts right in the Paintbox from scratch, which is then `D D D’, totally digital.

…but enough block diagrams. If you don’t have the basic idea of layering down now, I don’t know what would help, except maybe a videotape that goes through a handful of layers using the A62 setup I just showed you to give you an idea of how layers combine to make an animation, and, importantly, how the idea of a global move–an overall identical ADO camera move that is repeated for every pass of the individual layers–leads to a very dimensional feel from 2D animation. Fortunately, I have just such a videotape. Let’s take a look.
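The reason a repeated global move reads as dimensional is plain perspective: the same camera displacement slides near layers across the screen farther than distant ones. Here’s a toy Python sketch–hypothetical layer names and numbers, just to show the parallax:

```python
def project(x, z, camera_x, focal=1.0):
    """Simple perspective projection: screen position of a point at depth z."""
    return focal * (x - camera_x) / z

# Three layers at the same world position but different depths.
layers = {"title": 1.0, "midground": 2.0, "backdrop": 4.0}

# The identical global camera move, repeated for each layer's pass:
for camera_x in (0.0, 0.5, 1.0):
    shifts = {name: round(project(0.0, depth, camera_x), 3)
              for name, depth in layers.items()}
    print(camera_x, shifts)
```

By the end of the move the near title layer has shifted four times as far across the screen as the backdrop–and that differential slide is exactly what makes flat 2D layers feel like a dimensional scene.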

I hope you enjoyed at least some of that, and I hope it gave you some idea of how this kind of video stratigraphy works. It is a flexible and cost-effective approach to animation that, with the right kind of pre-production and the right kind of attention to detail, can be completed quickly and on budget.

And it’s my belief that knowing something about working in 2D animation gives you a big head start when you decide your project needs 3D graphics. First, it gives you a sense of whether you need or want to create an entire world in 3D–or whether you can take the more economical step of designing an element or a series of elements that can be blended in with other 2D animation.

Unlike my discussion of the Paintbox and the ADO and character generators, I’m not about to launch into a `Consumer Reports’ listing of equipment, advantages, and disadvantages. To an extent that’s because a lot of 3D work still lies out there on the cutting edge, where producers have developed their own custom software and/or hardware, or they’ve adapted commercial or scientific rendering packages for their own needs.

My axiom applies even more here: just because a place has a Wavefront system doesn’t mean that it can do great 3D work. And just because an operator knows how to run the software, it doesn’t mean he or she is an artist on the machine. Because 3D work takes a command of illustration, certainly a command of geometry and mathematics, and it doesn’t hurt to understand photography, lighting, and maybe furniture-making too.

Here’s the one-paragraph description of how most 3D animation works. Hang in there. It’s all based on creating mathematical representations of objects. This process, called digitizing, is basically the same thing as I described earlier when we were talking digital typography: a path around an object is described using curves and straight segments. Usually, this means a stat or xerox of the shape is taped down on a bitpad, and it’s traced by a pointer device with a crosshair, plotting points on a curve. Then, the shape is taken into three dimensions, either by extruding it–which is just what it sounds like–creating a thick triangular solid, for example, from three points. Thick letters from thin ones. Cubes from squares. Or in some cases, the shape is rotated about a point, like a piece of wood on a lathe. Even curved shapes and spheres can be created, although they’re really not spheres–all objects are composed of flat surfaces, and in the case of a sphere, we’re talking about many, many tiny flat surfaces.
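Extrusion really is that mechanical. Here’s a minimal Python sketch–illustrative only, not any particular system’s internals–that turns a traced outline into vertices and flat faces:

```python
def extrude(outline, depth):
    """Extrude a 2-D outline (a list of (x, y) points) into a 3-D solid:
    a front cap, a back cap, and one flat wall per edge."""
    n = len(outline)
    front = [(x, y, 0.0) for x, y in outline]
    back  = [(x, y, depth) for x, y in outline]
    vertices = front + back
    faces = [list(range(n)),            # front cap
             list(range(n, 2 * n))]     # back cap
    for i in range(n):                  # side walls, one per outline edge
        j = (i + 1) % n
        faces.append([i, j, n + j, n + i])
    return vertices, faces

# Three points in, one thick triangular solid out:
verts, faces = extrude([(0, 0), (1, 0), (0, 1)], depth=0.25)
print(len(verts), len(faces))   # 6 vertices, 5 flat faces
```

Three points in, a thick triangular solid out: two caps plus three walls, all flat surfaces–just like the talk says.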

In any case, objects are created, and just like our ADO planes earlier, they’re placed into three-dimensional coordinate space. That is, they’re in a place where we can look at them with our global viewpoint–our `camera’–and they can move about on their own coordinate paths. And in many animation projects, we’re talking about a lot of these individual objects. A representation of a building could have a triangular solid for the roof, vertical rectangular solids for pillars and windows, and so on. The ground is an object, too.

These objects are assigned colors–in fact, each facet, or flat surface of each of these objects can be assigned colors, or gradations of colors, or, and this is where it gets real interesting, they can be assigned the image of a two-dimensional painting–like a Paintbox image. The image–a texture, say–can be mapped to the specified tiny facets making up a surface, and a flat Paintbox painting can seem to have been seamlessly wrapped around a sphere, or other dimensional shape. If the texture is meant to change (like `moving video’ in an ADO), the frames of texture must be stored and each frame in sequence must be programmed to be mapped onto the same shape for each frame in the animation.
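Texture mapping boils down to a coordinate lookup: every point on the shape’s surface gets an address into the flat painting. A toy Python sketch using latitude and longitude on a sphere–the mapping functions here are illustrative, not any vendor’s:

```python
import math

def sphere_uv(theta, phi):
    """Map a point on a sphere (longitude theta, latitude phi, in radians)
    to (u, v) coordinates in the flat painting, each running 0..1."""
    u = theta / (2.0 * math.pi)
    v = (phi + math.pi / 2.0) / math.pi
    return u, v

def sample_texture(texture, u, v):
    """Look up the painting's pixel nearest to (u, v)."""
    rows, cols = len(texture), len(texture[0])
    col = min(int(u * cols), cols - 1)
    row = min(int(v * rows), rows - 1)
    return texture[row][col]

# A tiny 2x4 'painting' wrapped around the sphere:
painting = [["a", "b", "c", "d"],
            ["e", "f", "g", "h"]]
print(sample_texture(painting, *sphere_uv(0.0, 0.0)))   # -> 'e'
```

Run the lookup for every facet and the flat painting appears wrapped seamlessly around the sphere; for moving-video textures, you repeat it with a new source frame for each frame of the animation.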

Now let’s talk lighting. Because we’re talking a simulation of a three-dimensional world–that is, we want the computer to display what a certain object of a certain color would look like viewed from a certain angle–we also have to tell it what kind of light is hitting it, how bright, from what angle. Most systems allow you to assign any combination of point lights or ambient lights–which are pretty much what they sound like. They can be any color, too.
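The textbook version of that calculation–and I’m simplifying to the classic Lambert model, one of several shading recipes these systems use–says a facet’s brightness from a point light falls off with the cosine of the angle between the facet’s surface normal and the direction to the light, plus a constant ambient term:

```python
import math

def shade(normal, surface_color, light_dir, light_color, ambient):
    """Lambert shading: point-light brightness falls off with the cosine
    of the angle between the surface normal and the light direction."""
    def unit(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)
    n, l = unit(normal), unit(light_dir)
    diffuse = max(0.0, sum(a * b for a, b in zip(n, l)))
    return tuple(min(1.0, c * (ambient + diffuse * lc))
                 for c, lc in zip(surface_color, light_color))

# A red facet, lit head-on versus edge-on by a white point light:
head_on = shade((0, 0, 1), (1.0, 0.2, 0.2), (0, 0, 1), (1, 1, 1), ambient=0.1)
edge_on = shade((0, 0, 1), (1.0, 0.2, 0.2), (1, 0, 0), (1, 1, 1), ambient=0.1)
print(head_on)   # bright red
print(edge_on)   # only the dim ambient term survives
```

Head-on, the facet glows at full brightness; edge-on, only the ambient term survives. Do that for every facet, under every light, and you’ve told the computer what the scene looks like.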

So hey, it’s easy. Now all the computer has to do is calculate the moves of the objects you’ve specified in relation to the global camera move you’ve specified, and of course then calculate how bright, how dark, what color each pixel–each picture element of a displayed object is, based on the lighting and the mapped textures. Phew.

Here’s the catch. At this point, that’s too much number crunching to happen in real time. But you knew that. That’s why these workstations–Wavefronts, Alias, Silicon Graphics, Pixar, whatever–display these objects at first in wireframe form, in low-res simulation, while you work at positioning things and locking down the move. Then and only then do you hit the big button and go out for coffee as each frame–excuse me, each field of animation–60 of them for each second of finished product–is calculated, and rendered–that means displayed, pixel by pixel, on a framebuffer (just like a stillstore) which is then usually recorded to a non-keying version of the Abekas A62 (the Abekas A60), field by field.

In fact, for complex animation, you go out for a lot of coffee. Maybe you go out and get a good night’s sleep, or go out and have a good weekend while it cooks. That’s the incredibly frustrating thing about 3D–you have to wait for it to render–wait to see if you did it right or whether that red light should have been a little higher, or whether you should have made that logo slow down a little more as it came to rest. It’s a big pain, and 3D animators look to technology, the cutting edge, with bated breath to find faster machines, more memory, more storage, new tricks–anything to speed the process along.

Is it worth it? Unfortunately, yes, it is. When 3D is done well, the results can be refreshing–and amazing. There’s a lot of good stuff out there, but I called two companies I’ve done work with to get a couple of things to show you. One of them, Pacific Data Images, is a pioneer in the field, a bunch of mellow Californians who work with proprietary software on Ridge minicomputers–last I checked–people who have a great sense of design, color, and movement and, as of late, have done a lot of work pushing the edge of 3D into more and more real simulations of reality. You’ll also see some stuff here from Crawford Design/Effects in Atlanta, where they’ve got their Wavefront system cooking around the clock for a variety of commercial clients and broadcast stations. They’re newer to the game, and I’d say a bit hungrier. They’re also growing very, very fast. So here’s 3 or 4 PDI pieces, followed by 3 or 4 Crawfords.


There is a growing trend for systems–for equipment–to try and become `everything in a box’–after all, it’s all digital stuff, all chips tap dancing with numbers, right? An aggressive supplier of 3D systems, Symbolics, is making their paint software–which is quite Quantel Paintbox-esque–a major selling point. You get one system which is your 2D and your 3D workstation. And Digital F/X of California is offering an all-in-one digital switcher/paint/DVE/layering combination called the `Composium’–which I’ve seen the literature on, but I’d like to see more. The advantage of these combination systems is obvious–but the disadvantages–at least at this early stage of the game–are that the individual parts may not each be the best at the job at hand. For example, is the character generator portion the most flexible, with the most fonts available? I’d want to know before I make a major purchase like that.

Beyond combinations, what I’d like to see more of is a trend toward establishing standards where equipment from different manufacturers can talk to each other transparently. It’s fun–and productive–to have an Abekas A62 and a Sony D2 working together under CMX control. Here’s hoping that standardized digital file formats, machine control, and coordinate systems will make things easier as the cutting edge rolls on.

In the 3D world, that standardization is likely to come–slowly–with the implementation of the RenderMan interface, a description language developed by Pixar that would allow 3D systems to share objects, numbers, moves, lighting, and more. It means I could work out the rough details of a 3D move on my Macintosh at home–and then take it to a service bureau–that is, a graphics house, for fine-tuning and final rendering. I really will be doing this stuff from my home office in a couple of years–it’s an exciting thought.

I want to leave you with just a smidge more than optimism for the future, though. As you sit down to think about design projects, I’d like you to keep in mind that the best technology can still be used to complete absolute garbage–nicely lit, 3D absolute garbage, sometimes. So remember the importance of good design, and think about this list–my top ten graphic suggestions for projects big and small.

  • Start with the cleanest possible sources.
  • Always use gradations instead of solid surfaces.
  • Make your drop shadows go the same direction.
  • Keep type white or high-luminance. Add color in lines, edges, rules.
  • Keep your background in the background.
  • Use big elements: big type, bold images.
  • Save intermediate layers and components so you can go back.
  • Use textures from the real world.
  • Type kerning (squish): either very loose or very tight, not wimpy in-between.
  • Use smooth movements that start and end smoothly. Don’t rush the moves.
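That last suggestion–moves that start and end smoothly–has a standard recipe: ease-in/ease-out curves. Here’s a sketch in Python of one common easing function, often called smoothstep (an illustration of the idea, not any motion-control system’s actual code):

```python
def smoothstep(t):
    """Ease-in/ease-out: position along a move that starts and ends smoothly.
    t runs from 0.0 to 1.0 over the length of the move."""
    return t * t * (3.0 - 2.0 * t)

# A logo moving from x = 0 to x = 100 over an 11-frame move:
frames = [round(100 * smoothstep(i / 10.0), 1) for i in range(11)]
print(frames)   # small steps at the start and end, big steps in the middle
```

The steps are tiny at the start, biggest in the middle, and tiny again at the end–the logo eases to rest instead of slamming into place.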

That’s the list, and I’ll be glad to talk with you about them in more detail in the time we have remaining.