Convergence of video and print
Friday, June 11th, 1993
Remarks to the NBC Advertising and Promotion Conference / June 11, 1993 (presented in slightly edited form to PBS Conference attendees in late June 1993)
I do remember my first Macintosh, purchased a little over nine years ago. A small machine with one 400k floppy disk drive and a small black-and-white screen, and yet there on my dining room table was the first personal computer I had encountered that worked with graphics. You could even paint–in black-and-white–on those early Macs, and in 1984 that was pretty amazing.
My old company bought its Quantel Paintbox in 1985–a couple of years after the box was first introduced at the NAB. It was–and still is–the benchmark for television paint systems. We–and the bank–paid something over $160,000 for the Quantel, and that didn’t include the monitor, hard disks, and the extra cost of beefed-up air conditioning and the construction of a sound-isolated equipment rack room to handle the thing.
A few years after the Quantel became the established standard for video paint systems, they came up with a version that would handle the high resolutions of a print picture–for just a few hundred thousand dollars more.
I told the Society of Motion Picture and Television Engineers, meeting here in Orlando in January of 1990, that the worlds of print and video were indeed coming together, and that the Macintosh was the common denominator–the point where computing power could be used to create graphics at either print or video resolutions. And I made the semi-bold prediction then that there would soon be software for the Mac that would create images as clean and as attractive as those created on the Paintbox.
Well, four months later, Adobe released Photoshop. And since then, things for me and for many designers have not been the same. There sits now on my desk in my home office a machine that will do everything the Paintbox can do. And it doesn’t come with a hefty bank loan attached.
[VIDEO CUT 1]
A Macintosh, running Photoshop software, can draw a background of graduated colors.
It can place smooth antialiased type on the screen.
It can paint, with a pressure-sensitive airbrush…or with brushes of all sizes, from a palette of millions of colors.
It can assemble an image on deadline from component cutouts.
It can capture, retouch and color-correct a video image.
It can resize, rotate, and distort a cutout.
It can create an electronic mask which controls where paint goes on the picture.
And the Mac, with Photoshop, can perform tasks well beyond the reach of the Paintbox. Designed to be comfortable working with huge print resolution images as well as the 72 dots per inch that video requires, Photoshop has a host of controls that allow you to precisely examine and color-correct the image…and a unique set of plug-in filters–both their own and those created by third-party vendors–that allow you to alter all or part of the image…blur it…sharpen it…distort it…invert it…crystallize it…or do this, this, this, and…well, you get the idea.
And when you’re done with the image, it can be saved to disk–or other images can be brought in–using a ridiculous number of formats that pretty much run the gamut of the ways you can keep a digital image–on any kind of computer. This makes Photoshop a kind of Grand Central Station for digital images. If it’s a picture, chances are Photoshop will open it.
[END VIDEO CUT 1]
Photoshop was released at a time–the spring of 1990–when the Macintosh had evolved into a machine capable of handling large amounts of memory and disk storage, and again, the Macintosh operating system gave the Photoshop programmers a head start in working with large color images. Only in the past couple of months has a Windows version of Photoshop been released–and one of the Photoshop programmers told me that although the performance is comparable, the Mac interface makes file management, configuring input and output devices, and dealing with typefaces much, much easier.
But…I can hear you all asking…does it do it as fast as a Quantel Paintbox? As fast as the Aurora?
On a fast Macintosh, like the top-of-the-line Quadra 950 or an accelerated older model, with enough memory and disk storage, I’d put the Mac and Photoshop up against the AVA, the Aurora, and the Classic Paintbox. Is it as fast as the Quantel V-Series? Nope. Does it cost anywhere near as much? Nope. Is the image quality as good? As good or better than most dedicated broadcast paint systems–and don’t forget, this system works for print as well.
Actually, the idea of using the Mac for video is just part of an overall trend towards increasing generalization in the computer and television industries. A Quantel Paintbox or a Chyron Infinit is, at its core, a computer–a device with a microprocessor, disk drives, screen, and input and output connections. So’s the Macintosh. What you get for the extra tens of thousands of dollars you pay for broadcast equipment is often specialized add-on hardware that accelerates certain graphic functions and provides for output that is synchronous with the rest of a television station, as well as software that is, for the most part, built-in and dedicated to one function. The Quantel is a computer which runs one program all the time–a well-refined, terrific paint program. It doesn’t, however, do spreadsheets.
It’s getting easier these days to buy a general purpose computer and add the specific hardware for video input and output that you need, plus accelerators, a bitpad perhaps, and, of course, paint software, in order to end up with a system that does what the dedicated systems did, at much lower cost. And because it’s not a dedicated machine, you have a device that is a Paintbox in the morning and a print typesetting and layout system in the afternoon. And because a Mac installation is cheaper by as much as a factor of ten, facilities with multiple artists can use multiple Macs as workstations–eliminating the waiting line of designers who all want to use the Paintbox at the same time.
But before we go too far with this, let’s be clear about where the Macintosh stands in the world of, for want of a better term, general-purpose computers. It stands somewhat apart, with a different microprocessor and operating system from the IBM standard that many corporate buyers of PCs are accustomed to. There are indeed many more IBM-standard PCs than Macintoshes. And throw into this mix the third standard of Unix–the operating system used by those high-end, high-pricetag machines that render 3D graphics and animation fast, fast, fast. So what distinguishes the Mac in this field?
Well, I would contend that the unique advantage of the Macintosh is one it’s had for almost ten years now–a consistent user interface from program to program, and an operating system that allows you to use these programs together in unique and productive ways. Sure, PCs have Windows and Unix systems have their graphic interfaces, but any objective evaluation still has to give the Mac the edge in this category. The result is, I believe, superior ease-of-use, and less of a learning curve for designers suddenly thrust into a digital world.
I don’t want to subject you to too much technojargon, but let me just say that the Mac’s graphic interface–unlike the others–is an integral part of the operating system. With Windows on a PC, it’s more like a facade, imposed on top of the ugly world of DOS.
As more and more people use the Mac to process graphics as large as video images, another key advantage comes into play. The Mac’s operating system has special support built in–it’s called QuickTime–that makes dealing with these big ol’ pictures as easy and fast as possible.
The Macintosh is a machine that is the unquestioned favorite of graphic designers, mostly because of its ease of use–but also because most graphic design software that you’ll find on these other systems began life on the Mac, and the service bureaus–the places you take your files for high resolution paper or film output for print production–are, for the most part, Mac based–although sure, they can handle the PC stuff these days as well.
Look. This whole IBM versus Mac thing is a big bone of contention across the spectrum of computing, and people who know a lot more than we do about these machines can’t agree about it either. All I’m saying here–for what it’s worth–is that the Macintosh is where most of the technology that makes this possible started. It’s the environment in which most creative people–especially artists and designers–choose to work.
Think about how designers use a Paintbox for television graphics–for news graphics, for example. It might be better to think of a Paintbox as more of a Cut and Paste machine.
"Quick, give me a graphic of Clinton and a state department seal, with a headline `Investigation’ in our standard box format!"
The artist doesn’t start with a blank screen and begin to draw the President from memory–are you kidding? We’re on deadline! So the designer works in layers, just like building a sandwich. First, call up the standard news graphic background–it too is not painted from scratch. Add the state department seal. Then, capture a video image of Clinton…trace around the edges to define what part of the video you want to use, and then place the finished cutout on top of all of this…and then type up the word `Investigation’ in the text portion of the paintbox–and place that with a dropshadow on top of the whole mess. It’s done, and it’s a miracle of cut and paste.
Oh, and one thing about the sandwich we just made–once it’s done, you can’t expect to pick the President up from the finished image without causing some damage. All the components are smushed together into a finished bitmap image. They’re no longer individual objects.
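The sandwich-building and flattening just described can be sketched in a few lines of Python. This is a toy model, not code from any real paint system: layers are flat lists of pixels, `None` marks transparency, and every name here (`composite`, `seal`, `cutout`) is invented for illustration.

```python
# A toy sketch of "sandwich" compositing: each layer is pasted over
# the last, and the result is a single flat bitmap.
# Pixels are (r, g, b) tuples; None in a layer means "transparent here".

def composite(layers, width, height):
    """Flatten a stack of layers (bottom first) into one bitmap."""
    flat = [(0, 0, 0)] * (width * height)   # start with black
    for layer in layers:
        for i, px in enumerate(layer):
            if px is not None:              # opaque pixel wins
                flat[i] = px
    return flat

W = H = 4
background = [(0, 0, 128)] * (W * H)                     # standard news blue
seal = [None] * (W * H); seal[5] = (255, 215, 0)         # one gold "seal" pixel
cutout = [None] * (W * H); cutout[10] = (200, 160, 120)  # one "cutout" pixel

frame = composite([background, seal, cutout], W, H)
# Once flattened, there is no way to ask which layer a pixel came from.
```

That last comment is the whole point: the finished frame is just pixels, which is why you can’t pick the President back out of it.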
Whether you’re working with a Quantel, a Chyron, a Mac, or a Silicon Graphics box, the end result, a single frame of television, is a lot of data–just about a megabyte–about one million bytes–of raw uncompressed binary numbers–just for one frame of video–one 30th of a second. Considering the average word processing document is maybe one-twelfth that size, you can see that it helps to have a computer that can process all of those bits as quickly as possible. It also helps if you have a way of dealing with all that information in a more compact, flexible, portable form…and that’s what PostScript is all about.
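That "about a megabyte" figure checks out with quick arithmetic. The sketch below assumes one common set of numbers–a 720 by 486 active picture for 525-line digital video, stored as raw 24-bit RGB; other assumptions give slightly different but similar totals.

```python
# Back-of-the-envelope check on "about one million bytes per frame".
# Assumes a 720 x 486 active picture stored as raw RGB,
# one byte each for red, green, and blue.
width, height = 720, 486
bytes_per_pixel = 3
frame_bytes = width * height * bytes_per_pixel
print(frame_bytes)  # 1049760 -- just over one megabyte

# At roughly 30 frames per second, one second of uncompressed video:
bytes_per_second = frame_bytes * 30
print(bytes_per_second)  # about 31.5 million bytes
```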
PostScript was designed as a `page description language`–a compact way of representing what would end up on the printed page while it’s still inside the computer. Some clever folks at Adobe Systems came up with PostScript when 300 dot per inch laserprinters were first introduced. A printed page full of tiny black and white dots contains even more information than a color television image–so sending those dots from place to place–and storing them inside your machine–was, early on, deemed impractical. It made more sense to store the information as a series of points which mathematically describe curves.
[VIDEO CLIP 2]
To describe, for example, the letter `A’, you’d only have to use about 19 points–and–we’re getting just a bit technical here–because these are just points and curves plotted relative to each other, they can represent a letter `A’ either as small as this…or a beautiful Times Bold `A’ as large as a full printed page. Just in case you’re curious, here’s the PostScript code for that letter `A’. Just a peek. Think of this as computer shorthand that describes the shape of that letter `A.’ At the final step in the process, the `A’ is rasterized–translated from those 19 points to hundreds of thousands of filled pixels–dots, if you will–inside the laserprinter, or, in the case of our area of concern, they’re translated to smooth, clean looking letters on the screen.
Why am I telling you about this? I think it’s important. You see, the ability to work with PostScript images–typefaces as well as drawings of any kind– is a key advantage of using the Mac as a paint system for television.
With PostScript images you can…
Work with objects that can be scaled, recolored, or reshaped while they’re still objects. Take a look. Here’s that state department seal again as a Quantel Paintbox cutout. Want to make it smaller? No problem. The Quantel, or any paint system smoothly discards the pixels that aren’t needed any more when the seal shrinks. Want to make it larger? Big problem. Enlarging a bitmapped cutout results in a blurry, mushy finished product, because the system must create new pixels–interpolating them from a best guess.
But over on a Macintosh, take a look at that same seal. It’s a PostScript image–that same kind of collection of points and curves that made up the letter `A`–but in this case, much more complex. So if we want to create an image that’s just the eye of the eagle in the middle of the seal, the computer rescales the points in just a moment–and then rasterizes the seal–turns it into the finished bitmap. Big difference in quality. Big advantage in flexibility.
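The contrast between rescaling objects and rescaling bitmaps boils down to a toy Python sketch. The shapes and pixel values here are invented for illustration and have nothing to do with PostScript’s actual operators.

```python
# Vector scaling: just multiply the control points. Exact at any size,
# because rasterization happens afterward, at the final size.
def scale_points(points, factor):
    return [(x * factor, y * factor) for x, y in points]

triangle = [(0, 0), (10, 0), (5, 8)]     # a crude "A"-like outline
page_size = scale_points(triangle, 100)  # still exact, just bigger numbers

# Bitmap scaling: the system must invent pixels. Nearest-neighbor
# enlargement, the crudest method, simply repeats each one.
def enlarge_row(row, factor):
    return [px for px in row for _ in range(factor)]

print(enlarge_row([10, 200], 3))  # [10, 10, 10, 200, 200, 200] -- blocky
```

Scaling the points loses nothing because nothing has been rasterized yet; scaling the bitmap can only repeat or guess at pixels, which is where the mush comes from.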
Because it’s so much easier to work with graphics in their component, object form at the design stage–when you want to move things around a lot, re-color them, change and shuffle layers…I do all my initial design–especially the logos and logotypes that form the foundation for a station package, on the Mac, using one or both of the two principal PostScript object-oriented drawing packages–Aldus Freehand or Adobe Illustrator.
I can manipulate the size and shape of the type elements and blocks of color, making fine adjustments to the spacing of the type, tweaking even the shape of the letters or numbers themselves–and then once a logo is done, a designer can take the shapes and render them in black and white or color in video, again, using Photoshop–or take them to a service bureau for even higher-resolution output in print. The important point: The same shape description forms the logo in every step of the process. It’s consistent, flexible, and easy to work with.
Every station logo design I do now starts in PostScript–making it easier for that station to distribute their logo whether they have a Mac or not. And I’m pleased to see that some syndicated shows–like Designing Women–supply a disk with their promo kits that gives you their intricate logo in video or print in all its detail. That’s the wave of the future.
Of course, the most complicated elements–like this one, are no easy task to create–even with the flexibility of a drawing program like Freehand or Illustrator. Did I spend a lot of time creating this PostScript image of the State Department seal? Did I labor over this Mexican flag?
Nope. I bought them. There are indeed companies who sell libraries of PostScript images, which you can, of course, resize, color, manipulate, and split apart–over and over again. Some of them are the kind of, well, goofy `clip art’ images which have shown up in bad newsletters for years–but some of them are collections of world flags, state seals, maps, corporate logos, and road signs that represent a lot of precise work that someone had to sit down to do–so why reinvent the wheel? The State Department seal and these others were created by a company called One Mile Up– they have several packages of what they call `Federal Clip Art’–from governmental seals to flags to incredibly detailed military hardware–just perfect for parking over an anchor’s shoulder.
Here’s another way to use PostScript to your advantage on a frequently requested graphic item–maps. By starting work in Illustrator or Freehand, you can draw roads quickly and precisely, repositioning them as necessary, and then assembling a map can be more of that sandwich-making–layering the PostScript elements in Photoshop. Start with a background…import the roads…and at this point, you can zoom in to any part of the highways with, again, no loss of resolution…adding freeway signs and route markers, each their own PostScript file–I’ve got a folder with all the major roads in Atlanta. This makes assembling a custom map a much speedier process.
But if you really want to talk about maps, there’s a dedicated application that creates PostScript maps detailed on a global scale. It’s called Azimuth…and it enables you to create detailed maps from already pre-digitized data, from any angle or perspective, even from the ever-popular `point on a globe’ view. It’s software used by everybody from CBS News to The Washington Post to make maps a-plenty.
Again, an advantage of working with a non-dedicated machine–you can use other programs–running at the same time as Photoshop–to create data that can be–thanks to the Mac–handed off from one program to the next. You can create a graph by typing numbers in Illustrator…transform it into 3D with Adobe Dimensions…and finish it up in Photoshop.
[END VIDEO CUT 2]
Hey, but with all this talk about `multimedia’, you’re probably thinking beyond using a Mac system for just the 2D paint functions like the Quantel or the Aurora. What about 3D animation? What about digitizing whole promos and cutting them together offline? Well, I’ve got good news and bad news. The good news is that it can be done. The bad news is, we’re still at the point where it’s darn slow. The calculations involved in creating 3D images can be done on any computer–and on the fastest Macs, they zip right along, but not nearly as fast as on a Unix workstation with dedicated graphics processing cards. Most Mac 3D software gives you rendering times of 20 to 30 minutes per frame–or more, and unless you can set it up to render a piece overnight or over the weekend, that’s too slow for my deadlines. I’ve seen work done in StrataVision 3D for stations that was of excellent quality–but they had to wait for it. There’s one high end package for Mac 3D animation called Electric Image–it does a lot of things well–it did the Dateline NBC animation, I’ve been told–but at almost $8000, it’s as expensive as the hardware itself, or more, so I still go elsewhere to use high-end systems for 3D. And the good news there is that there are programs available to convert, for example, PostScript outlines to Wavefront 3D objects–so you can bring in elaborate logos, type, and shapes pre-digitized to a Wavefront session. It’s a great timesaver.
And I’ve just begun to experiment with a new program that brings many of the functions of the Harry to the Macintosh–or at least the Harriet. It’s called `COSA After Effects’, and it will allow you to create layered, moving pictures, type, and other graphic elements. It’s especially impressive because it renders the finished frames at field resolution–which means that the finished moves are as smooth as any you’d get from a Kaleidoscope or ADO. It’s impressive too because you can work with the painted frames, type, and other items as repositionable objects. At this point it’s no speed demon, but the COSA folks say just wait, they plan to upgrade the software so it will use graphics accelerator boards to speed things up. So as of now it’s impressive to me for its potential more than for what it can do today.
Earlier, I said that a Mac system in a television design department could be a paint system one moment and a desktop publishing system the next. It’s true. In fact, as you probably know, the Macintosh was the pioneer in what folks insist on calling "desktop publishing". And now with the price of 600 dot per inch printers coming down to very respectable levels, the idea of having a system that would output completely camera-ready materials for most purposes–at four times the resolution of standard 300 dot per inch laserprinters–is attractive indeed. And, yes, the synergy that comes from using the same PostScript illustration programs like Freehand or Illustrator…the same typefaces…carries over directly from video to print.
On the print side, the Grand Central Station of all the bits and pieces that make up a brochure, or one-sheet, or poster is called Quark Xpress. It could also be called Aldus PageMaker, because just as Freehand and Illustrator are locked in heated battles for feature supremacy, so too are Xpress and Pagemaker. My money’s on Xpress. It, again, seems to be the choice of `real designers’…it’s flexible, precise, and fast, and whether you’re doing a newsletter or a compact disk cover, Xpress gets the job done.
Even with a fancy 600 dpi laser printer as your typesetting machine, there are still times you’ll want to send out work to be imaged at higher resolution for quality color separations, Canon color copies of incredible quality, or slides at up to 4000 dots per inch. You can still use Xpress, Illustrator, Freehand, even Photoshop, thanks again to the universality of PostScript as a page description language. I’ve sent out PostScript files to signmakers for client stations…to the folks who make mike flags…to companies who can image PostScript on huge 5 foot by 3 foot sheets of paper, in full color…for a price.
The side benefit for stations, of course, lies in a more consistent application of your corporate identity. No longer does your on-air look like one thing, and your print look like something completely different.
Well, assuming what I’ve been saying sounds good to you, what should you consider when planning to purchase a Mac system?
1) Buy as powerful a Macintosh as you can. This seems obvious, perhaps, but the speed of the microprocessor makes all the difference when you’re pushing big images around.
If you get a more basic system, consider an accelerator board, which basically bypasses the microprocessor, replacing it with a faster, newer one.
2) Go out and buy tons of additional memory. Most larger Macs can hold 32 or even 64 megabytes of memory–that’s RAM, not hard disk space. The more you have, the more programs you can run concurrently, and the more you can do in each one. For Photoshop, the absolute minimum is 8 megabytes of RAM, and memory from mail order dealers is dirt cheap–as low as $30 for 1 megabyte SIMMs to $110 for 4 meg SIMMs.
3) Buy as much storage as you can afford. Oh, it’s amazing how fast even the biggest hard disk can fill up with programs, fonts, and these images you’ll be creating.
Video Stratigraphy: Working with Multigenerational Video
Sunday, January 7th, 1990
(delivered to the Society of Motion Picture and Television Engineers Post-Production Seminar, Orlando, Florida, January 1990)
I got married in December–to an archaeologist who doesn’t even own a television set. Well, I guess she does now. And I learned a great new word from her, a piece of archaeological jargon that you might find of some use. Here it is: stratigraphy. What it means is: the study of layers. You see, that’s what archaeologists do…they uncover layer after layer of ground, of sediment, of pot sherds and human remains and charcoal and stone and all of that…stuff carefully set down by civilizations long gone and forgotten.
After listening to her explanation of stratigraphy I realized that she and I indeed had a lot in common, because these days, video graphic design is the art and science of creating delicate layers of moving video, one atop another, in perfect synchronization. Video stratigraphy.
It seems as if we’ve been messing with the idea of layers of video for as long as the medium has been around–certainly for as long as I’ve been around the medium. I remember my first experiments with layering–really just playing around with an ACR cart with a friend–a fellow master control switcher in the studio at WTCG. We went back and forth between the two decks of the ACR, recording on one, playing back on the other, and then back the other way, each time adding another layer of this guy, and the result, which at 2 in the morning seemed pretty cosmic, was also, unfortunately, a great example of the big drawback of quad videotape–in fact, all of analog videotape: what you get out is less than what you put in. And, as a free bonus, you get noise, dropouts, banding (back then)…all kinds of artifacts that mess up your stratigraphy.
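Generation loss can be put in rough numbers. The decibel figures in this little Python sketch are illustrative placeholders, not measured quad-tape specs; the point is only that analog degradation compounds with every pass.

```python
# Analog generation loss, crudely modeled: each dub knocks a fixed
# number of dB off the signal-to-noise ratio. Numbers are invented.
def snr_after(generations, start_db=46.0, loss_per_dub_db=3.0):
    """SNR remaining after a given number of analog dubs."""
    return start_db - loss_per_dub_db * generations

for g in range(6):
    print(g, snr_after(g), "dB")
# After a handful of passes the noise is plainly visible:
# what you get out is less than what you put in.
```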
It’s interesting–despite years of experiments on MTV, I think it can be concluded: noise in video doesn’t look arty, or attractive, or neat, unlike grain in film. It just looks…noisy.
So we’ve been frustrated with the idea of decay, and much of what we’ve done over the years to achieve what we idealize as a `first-generation image’ is to use as many discrete sources as possible, combining them only at the final `mastering’ point to create a finished composite that was as `clean’ as we could make it. Lots of tape machines rolling in sync, and a switcher with lots of keyers to get it all together, on one piece of tape, in one pass.
So much of the kind of television I do–graphic design for television–is created by many, many layers of material, one atop the other. The reason for this has a lot to do with good design. Design with subtle colors, textures, shadings. And the one quality that design on television has that print can’t quite duplicate–movement. A great piece of television graphic design has, to me, the qualities of a well-choreographed ballet. It’s subtle. It’s complex. It has small things you don’t notice until the second or third time you see it.
I’m going to assume that you’re here today–thank you for coming here today, by the way–because you’re involved in creating this kind of graphic material, too. It may be you’re working to get together a facility that can do what people call, somewhat magically, “computer graphics,” and you don’t want to get the wrong equipment to do the job. It may be that you’ve got a piece of design you want done and are somewhat fuzzed out on all the buzzwords these days that delineate approaches to getting it done. Gee, do I want to do it on the Wavefront or on the Harry or on the Mirage or on the Abekas? D1 or D2 or Beta or M2? 3D or 2D? It’s easy, given this array of blurry options, to throw up one’s hands and say…”uh…whatever.”
So let’s sort out approaches today. And let’s start right off by saying that there isn’t one approach that works for everything, and as a subset of that, there certainly isn’t one approach that’s cost-effective for everything. (I’m more conscious of that these days in my role as a freelance bum.) It’s important to look at what you want to do–whether we’re talking here about just one project or a place to do a whole range of projects–and see exactly what it takes to get the job done…or those jobs done.
As we go along, I also want to examine our options in terms of developing technology. Television and graphic equipment now isn’t what it was ten–or five–years ago, and this incredible upward spiral will certainly make a lot of the particulars of this discussion obsolete in a few years. What won’t become obsolete, however, is the overall trend toward simplification, universality, and cost-effectiveness. Like any cutting-edge thing, as time goes by, the cutting edge gets further out there and what was the edge becomes easy, available, affordable, and understandable. I’ve only been doing this television stuff for about twelve years now, and back when I started, a videotape machine was a device to be operated by wizards, amazing people with arcane knowledge and nifty pen-protectors on their pockets. If you were just a producer or a director…or, hey, a graphic artist …you kept a respectful distance from these guys. Now, of course, it seems as if everyone knows the basics of videotape, and the operating controls of a home VCR aren’t all that different from a Sony D2 machine. Well, not too different.
And what’s interesting to me is how these trends of technological development are bringing a number of formerly diverse fields together. You may or may not be aware of a parallel revolution in how print graphic designers are creating their work. Like their broadcast counterparts, they used to produce print artwork with crude tools, paper, and pencil for the most part….also Letraset and border tape and lots of stats and film and chemicals…and like the videotape wizards, their craft had an air of mystery about it that kept the fundamentals away from a wider audience.
Now, they’re going through the same revolution that television designers did when the first paint systems and character generators appeared. They’re sitting in front of screens–in front of desktop computers–and manipulating type and color and texture in the same way for print. You may be asking why that’s important to you, a television person. We’ll get back to that a while later…right now, it’s just nice to know that television and print people…and motion picture people, for that matter, are going down converging technological paths. Everyone benefits from that kind of synergy.
But back for a moment to the old days, back to..uh..the late seventies, back when the personal computer was just something for engineers to tinker with back in the shop when they could be putting new tubes in the film chain.
So you had designers who were not TV people. And TV people who very definitely were not designers. The first pieces of television graphic equipment were cranky, cumbersome, and designed to be operated by technicians. They could put letters on the screen or move pictures around in a very basic, low-res kind of way, but the results weren’t all that aesthetic…and the people operating them didn’t know a serif from a sans-serif, and it didn’t make much of a difference to them if they typed a name super in flashing purple all caps letters–at least it was up there without having to shoot a camera card, right?
Lucky for me, I came at this revolution in graphics technology from a couple of unorthodox directions. I was a journalism major in college, and always expected to be working at a newspaper someday. And I worked, just for fun, at my school’s Public Television Station, in operations, switching, loading slides, running camera. And my first job out of school was–hey, I took what I could get–as a master control operator at Ted Turner’s cable superstation in Atlanta, then called WTCG.
I always had an interest in graphics and design–especially typography–but I never took any formal instruction in that field. Instead, I was lucky enough to have a TV station to play with in the middle of the night, and I was able to put the results of my experiments on the air, where a lot of people saw them, without my getting fired. A great place to learn about what worked in television and what didn’t–and right from the start–and this is why I’m giving you way too much of my life’s story up front–I was sure that the rules and the subtleties of good print design also applied to broadcast.
That’s what led me down the path of trying to coax clean, complex, high-resolution images out of equipment that engineers said `wasn’t designed to do that, and why do you want to do that anyway?’ These days, things are much easier, and lo and behold, engineers are beginning to appreciate the subtleties in a graphic image in the same way that a perfectly shaded camera brings a smile to their face. A nice, big, clean anti-aliased word, letters tucked together perfectly, with subtle shading and light sources. Nothing like it. Clean video, no matter what the source.
Now, we look back on those pre-digital days as “back when we made graphics with rocks.”
Archaeology and stratigraphy again.
But a lot of what I learned from those early days about keeping an image clean through the food chain–excuse me–through the chain of old cameras, switchers, and tape machines–still applies in this luxurious world of the future where I can sit down and create perfect digital layers until the cows come home.
Which is why we took that particular left turn before we got to where we are now. Which is: you’ve got this graphic work to do. You want to get it done in a spectacular up-to-the-moment state-of-the-art groovy way that will impress your client or boss or creditor or whomever.
And you want to use digital..uh, something, right? You’ve heard that staying digital–that is, keeping material in a digital form throughout the production process–is the key to keeping things clean as long as possible…at least until it gets broadcast or cablecast and gets watched on an old 1967 RCA color TV with rabbit ears.
OK. Great. Maybe we’re talking about an open for a show, or a design for an entire program. I want to make the point here that it’s important to think of what you’re creating in context–that is, it doesn’t make sense to me to create the fanciest, trendiest open in the world and then plop it on the front of a show that has a set, still graphics, namesupers, and credits in a totally different style from that open. Seems to me these days there’s a lot of this going on, where someone has the budget and goes out and gets this one thing–which doesn’t relate at all to the rest of the show.
When people who aren’t in television ask what I do, I usually offer the explanation that graphic design for television is a lot like wallpaper–when it’s all just right, you may not notice, but if it’s wrong, or if one element stands out like a sore thumb, then it’s just like having your living room–or the viewer’s living room–ruined by this ugly piece of graphic art.
Conversely, Rembrandts don’t look too good in house trailers next to paintings of Elvis on black velvet.
So much for My Philosophy of Television Graphics.
One of the big questions you should be asking yourself at the early stages of a design project is: to 3d or not 3d. Actually, with apologies to Hamlet, it’s not an either-or question these days. Although the use of 3d animation has been on a steady upward curve since its first tentative steps early in the eighties, it still remains too expensive and too complex a technology to use indiscriminately. That doesn’t mean that it isn’t used indiscriminately sometimes, just that it shouldn’t be.
This is as good a point as any to admit that I’m a bit of a stick in the mud about 3d. I’m a big fan of 3d animation, but I use it in my own work very, very sparingly. Part of the reason is budgetary, of course, but part of it is just plain design. It seems to me that there’s way too much of this “let’s fly around a really big logo” just for the sake of flying around a really big logo. That is, I always like to get somewhere in an open for some reason. I know that sounds a lot like “what’s my motivation in this scene?” but c’mon, is flying around the huge words “Home Shopping Spree” as if in a helicopter for 15 seconds really an open for that show? Does it really tell you something useful about the show? Does it really set the scene? Does it, in short, get the job done?
Well, sometimes you find yourself working on a show open that defies any attempts to depict it graphically, but I always try and give it my best shot. If nothing else, I like to include enough layers of visual information that communicate a general impression, a feeling, a mood. In a five or six second open, you may not be able to communicate much more than that, but I prefer that approach to “look, here are the letters that spell out the name of the show. They’re really big. They’re really shiny. Let’s fly around them in a helicopter for a while, shall we?”
The added plus to including these subtle elements is that, for the most part, opens run a lot. Week after week, or day after day, or, in the worst-case scenario of a project I did last year, forty-eight times a day. If all there is to the open is “look, here are these letters,” then the viewer gets burned out on it real fast.
But when you’ve got a limited budget, and for some reason you’re determined to use 3D to get the job done, sometimes all you can afford is one simple move around one simple element. If it’s well-designed, if it makes sense visually, that can be fine. And as the cutting edge in 3D technology moves on down the road, I can definitely detect a downward trend in the amount you have to pay for high-resolution 3D animation–if you know where to look, and if you know what shortcuts you can take without losing quality in the finished product. But that doesn’t mean that an attitude that says “I don’t care what it does, as long as it’s 3D” is a good idea.
Instead, for many of these kinds of projects I would advocate considering using 2D techniques in 3D ways to achieve animation that has depth, complexity, subtlety–at a more reasonable cost. That doesn’t mean there isn’t a time and a place for 3D animation–when I need it, I figure out exactly what I need, I budget for it, and I plan it so it can be smoothly and seamlessly integrated into backgrounds and the rest of the graphics in a package. But more about that later…let’s look a little more closely at the tools for doing 2D graphics well.
I mentioned “doing graphics with rocks” earlier, the era where we used press-type, things shot on camera, switcher wipes, holes punched in a card, monitor feedback, crosshatch from test generators, anything we could get our hands on to give our graphics a sophisticated look. That era ended for me personally between 1982 and 1983, when I finally got my hands on a couple of devices that were introduced just about at the start of the decade, and, I’m here to tell you, seem to remain the industry standard to this day.
I’m talking about the Ampex ADO and the Quantel Paintbox. The end of “graphics with rocks.” Now, before this starts sounding like too much of a commercial for Ampex and Quantel (two fine, fine companies), let me acknowledge that a number of paint systems and DVEs–that is, digital video effects units–have come on–and gone off–the market since. And the ADO and the Paintbox commanded huge price tags when they were introduced–prices that have dropped only slightly in the succeeding years, when technological progress has surely made their cost of manufacture a fraction of what it was then. But these two have endured. Why?
Well, the ADO was the first to take a live picture and move it around in true three-dimensional perspective (or darn close to it) fairly transparently–that is, the picture you got out of the ADO was only slightly worse than the one that went in. There were other units on the market at the time that compressed and positioned a live video picture, but this one let you think of the picture as existing in a huge three-dimensional world of its own, one that you, the camera, could move around in as you looked at this rectangle of video–and, for that matter, one that this rectangle of video could move around in on its own. It established a coordinate system–numbers–that described where you were and where the object was in a way that set a standard–and prepared us for the very similar three-dimensional coordinate systems used by high-end 3d systems like the Wavefront and the Symbolics. It was a cool toy, and then some.
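For the programmers in the crowd, the heart of that coordinate-system idea can be sketched in a few lines. This is a toy model, not the ADO’s actual math–the `project` function and the focal-length number are my own stand-ins–but it shows how a rectangle of video placed in a 3D world ends up at a 2D screen position:

```python
# A minimal sketch (not the ADO's actual math) of placing a video
# rectangle in 3D coordinate space and projecting its corners to
# the 2D screen, the way a DVE's "camera" sees it.

def project(point, focal_length=1000.0):
    """Perspective-project a 3D point onto the screen plane.
    The camera sits at the origin looking down +z; farther
    objects shrink toward the center."""
    x, y, z = point
    scale = focal_length / (focal_length + z)
    return (x * scale, y * scale)

# Corners of a video plane pushed back 1000 units into the scene:
corners = [(-320, -240, 1000), (320, -240, 1000),
           (320, 240, 1000), (-320, 240, 1000)]
projected = [project(c) for c in corners]
# At z = 1000 with a focal length of 1000, the plane shrinks by half.
```

Move the plane’s corners along their own paths frame by frame, or move the camera, and you have the kind of three-dimensional move the ADO made routine.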
And so was the Paintbox. Its lasting contribution was the ability to capture a real-world video image, again, pretty darn transparently, and then use it as a canvas for your painting. You could pick up colors from it, subtly airbrush it. Cut a part of it out and put it somewhere else. Quickly, without having to go out for coffee while the machine crunched numbers. Oh, and the other unique thing about the Paintbox–a very smart design move–was to create a way for video illustrators to do what print airbrush illustrators can do–precisely mask off and work with a very small part of an image. The ability to take a portion of a station logo–say, just the edge–and apply a smooth color gradation to just that portion as if the rest were covered with masking tape–was a clever innovation that Quantel still tenaciously holds patents on to this day.
Since the early eighties, the ADO has picked up some options to keep up with the competition–most notably the ability to control multiple channels and what it calls the `Infinity’ package, which is an additional framestore that lets you do all kinds of goofy trails and sparkles and delays off the edge of an ADO image that you usually see on used car spots–or “Star Search.”
Quantel has made a couple of software improvements to the Paintbox over the years–but just this past year, the Quantel folk have released a version called the `V Series’ that basically is a redo of the whole box with a lot of custom chips in a much smaller and faster package and hey, it’s only about two-thirds of the hundred and fifty or sixty grand we paid for one back in 1985. Progress.
It’s important to understand just what a paint system is. It is not an `automatic converter’ of video, creating rendered type, airbrushed people and cities at the touch of a button. It is definitely an illustration tool that creates work only as good as the operator behind the bitpad. It’s also not instantaneous–I’ve run into a number of producers who seem to think that crisp, clean Paintbox illustration is a matter of five minutes, maybe ten. There are some very fast Paintbox artists out there, capable of cranking under deadline pressure–but if your project is a graphic that should withstand the test of time–if it’s part of an open, for example–you should be willing to budget for the time it takes to do the job right. One consequence of rushing the paint work is something I like to call `blurry paintbox’–you’ve all seen it. Merely taking a frame of video and hitting it with the airbrush, a little color here, a little scribbly stuff there–that doesn’t usually yield an image that’s better than the one you started with. Like most computers, the quality of the image out is no better than the image in…and that’s why it’s important to capture images from crisp, clean originals. Not sloppy 3/4″ dubs…not fuzzy Xeroxes of logos. This is one place where a little extra effort in pre-production pays off–not only in a cleaner rendered image, but in less time on the paint system cleaning it up.
There are other paint systems out there, although it’s been my experience that the majority of post-production facilities use the Quantel Paintbox. Most of them–the Ampex AVA3, the Aurora, the paint portion of the Symbolics software, the Artstar, which a lot of TV stations picked because hey, they bought weather computers from the same people…most of them have a subset of most of the Paintbox’s features, but usually with a speed or interface penalty. By `interface penalty’ I mean that it’s a big pain to do certain things, like work with cutouts, interact with stencils, or do type. It is safe to say that none of these has as smooth and subtle an airbrush as the Paintbox–especially the new V Series model. A number of the lower-end models–and I definitely include the weather-computer-based systems here–have, in my opinion, no business at all being in a television station or production environment. Their sluggishness, miniature storage space, and clunky bitpads add up to something like the Fisher-Price version of a Paintbox–in other words, they’re toys. I’ve seen artists at stations–usually in smaller markets–stuck with the lower-end systems, trying to crank out graphics for a news broadcast on deadline, and believe me, it’s not a pretty sight.
It does seem that the ADO doesn’t have the field as much to itself these days as does the Paintbox. A number of contenders in recent years have put the fire to Ampex’s feet, and the technology seems ripe for another quantum leap (or Quantel leap?) in features. I’ve seen a growing number of facilities with the Abekas A53-D, a DVE system with most of the features and feel of the ADO–and it offers a `warp option package’ that does nifty page turns and curls, and has the advantage of a very sensible `live control room’ interface. Disadvantages? The picture quality, especially in an enlarged picture, is, to my eye, not as good as an ADO…but close, very close. The DVEs at CNN and Headline News are all A53-Ds. Then there is the Kaleidoscope from Grass Valley, in theory, everything a Digital Video Effects device should be. It’s a big mama in the racks, and it integrates seamlessly into a Grass Valley 300 production switcher or runs out of its own box, which looks like a small Grass Valley switcher. The positioning and coordinate system is very ADO-like, which is good, in my view. It’s a very very clean (and expensive) system, and has a lot of flexibility in terms of allowing for component or digital inputs or outputs. It also has this built-in feature that you see on a lot of LA-produced stuff these days that puts a `glow’ or `highlight’ across the picture as it turns…but, as you’ll see in a minute, you can do the same thing with a switcher wipe half-dissolved out. The Kaleidoscope seems to have the architecture to grow into a remarkable machine, especially in light of developments and the possible synergy between Grass Valley and Sony.
What synergy? Well, one device that shows what in Southeastern Ohio we would call `po-tential’ is the Sony System G, a high-end picture manipulator positioned to compete with–and surpass–the Quantel Mirage, which is, you may know, a very high-end system for wrapping pictures into spheres, Coke bottles, and all kinds of other goofy shapes. The Mirage is, in my experience, a clever machine that is cranky, difficult to program even with the newer software, and often frustrating in that it will give you almost what you want–an almost perfect, but noisy sphere. If you handle the Mirage with tender loving care, which is what a number of large post houses have done–it is possible to get some great stuff out of it. On the air, it’s most often seen folding and sphere-ing and ripping on Entertainment Tonight. But back to the Sony product, which I understand will be at this year’s NAB in a more fully functional form. It uses parallel processing–that is, a symphony of tiny chip-computers all pumping numbers together–to achieve real-time texture mapping, creation of these strange shapes, mutating one shape to another, all under mouse control. For four hundred grand or so, the System G could be a box that makes it easier to do a lot more 3d-esque things without going to the high-end rendering equipment.
We pause here for a warning from the graphics police: This new machine, like so much of this stuff, is an example of what graphic designer Harry Marks likes to call “dangerous in the wrong hands.” Just because you can wrap Peter Jennings’ head into the shape of a Coke bottle doesn’t mean you should do it. All too often with a new piece of equipment there is a natural tendency among tech types to play with it. All well and good–I feel that playing with TV stuff is the best way to learn how to run it–but then some idiot says “gee, that purple and green modulated switcher wipe looks great–let’s put it on the air.” Just say, uh-uh, please. It’s not enough to do something “just because we can.” End of warning.
Somewhere back in those last few paragraphs I mentioned `type’, and in some ways it’s surprising that I’ve waited as long as I have to talk about my favorite subject. I’ve always been fascinated with letterforms and typefaces. Elegant curved metal type in fine magazines and newspapers, precisely spaced…huge, perfectly formed letters on billboards. Type in and of itself is not only everywhere we look in modern life these days, it is art in its own right. And on television in the early seventies, the state of that art was…the Vidifont, from what was then CBS Labs. Now the Vidifont was a remarkable technological achievement for its time…but the resulting squared-off letters on the screen were distinctly low-res. They were, it seems amazing to consider this now, digital letters created from the intersections of tiny copper wires in the machine’s core memory. Two fonts, big and small, and the small was always all caps. It’s no wonder that when the Chyron–eventually, the Chyron IV–made it onto the scene, we all breathed a sigh of relief. Here was a machine that could reproduce different typefaces–well, kind of, sort of `typefaces’, well…actually camera captures of letters from type catalogs and God knows what other unlicensed sources. And the letters, well, they weren’t that bad…a bit jaggy, actually, pretty darn jaggy, but hey, easier than stats and camera cards.
And then somewhere in there in the mid-eighties, this synergy I’ve been talking about came into play. The phototypesetting business, land of those print people, was undergoing its own quiet revolution to digital systems that used digitized typefaces based on the outlines–what they called curve descriptions of the thousands of typefaces that print folk use. These machines used the curve descriptions–also called vectors–of a font to create type proportionally of any size by very quickly mathematically scaling the curves, and then converting the outline to a rasterized font–that is, a bitmap of ons and offs–at the thousands of dots per inch resolution that print requires.
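That scale-then-rasterize pipeline is simple enough to sketch. This is a toy, not any typesetter’s actual code–real fonts use Bezier curve descriptions, and I’m standing in a plain polygon for the outline–but the two steps are the same: scale the control points proportionally, then fill a bitmap of ons and offs:

```python
# A toy sketch of the outline-font idea: one master outline, scaled
# to any size, then rasterized into a bitmap. A polygon stands in
# for the Bezier curves a real digital typeface would use.

def scale_outline(outline, factor):
    """Scale every control point proportionally--any size from one master."""
    return [(x * factor, y * factor) for x, y in outline]

def inside(outline, x, y):
    """Even-odd rule: count outline edges crossed to the right of (x, y)."""
    hit, n = False, len(outline)
    for i in range(n):
        x1, y1 = outline[i]
        x2, y2 = outline[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            xi = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if xi > x:
                hit = not hit
    return hit

def rasterize(outline, width, height):
    """Convert the scaled outline into ons and offs at the target resolution."""
    return [[1 if inside(outline, x + 0.5, y + 0.5) else 0
             for x in range(width)] for y in range(height)]

# The same square "glyph," rendered small and scaled up large:
square = [(1.0, 1.0), (3.0, 1.0), (3.0, 3.0), (1.0, 3.0)]
small = rasterize(square, 4, 4)
large = rasterize(scale_outline(square, 2.0), 8, 8)
```

The print machines did this at thousands of dots per inch; a television character generator does the very same thing at screen resolution, which is why the outlines travel so well between the two worlds.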
It turns out that once these outlines have been digitized, they’re very portable between systems. And that means portable to television systems as well as print. I think Quantel was the first to realize this, and their initial offerings of text on the Paintbox were conversions from these phototypesetting outlines. All well and good, except that on a Paintbox, you couldn’t make type bigger than 72 scan lines without blowing–and blurring–it up. A ridiculous restriction. But then Chyron came along and realized that although the Chyron IV was the industry standard, the cutting edge was passing them by. They released the Chyron Scribe–which uses smooth, anti-aliased fonts created from digital outlines supplied by a firm in Boston called Bitstream–who has thousands of them. Suddenly, a television character generator was available that could produce clean, anti-aliased representations of real typefaces…that is, fonts designed by type designers dating back several centuries–and, most importantly to me, it could make letters on the screen as big as the whole screen. Me, I like big letters. In fact, a lot of my design is based on looking at big, big type. So I became a big fan of the Scribe, even back when Chyron was almost keeping it a secret, for fear it would hurt the sales of the Chyron IV, which they were still trying to hustle.
And at this moment, the competition for these high-resolution character generators is, to say the least, heating up. Chyron is continuing to soup up the Scribe’s processing and manipulating power, and they promise that the new Infinit! system–which has to be the goofiest name I’ve heard since `Harry’ for a piece of equipment–will do all sorts of neat character-display stuff fast, fast, fast. Like the Kaleidoscope, I detect here the architecture for a machine that could end up doing all kinds of things beyond just throwing up letters onto the screen. Fine, as long as it keeps doing that well.
Meanwhile, Abekas has released, finally, the A72, which, according to my secret Abekas decoder ring, stands for `neat character generator.’ It’s a flexible machine that uses huge hi-res bitmaps from a well-known type supplier called Compugraphic to create type on the screen that it scales up and down in size nearly instantaneously. It also deals with character transparency and animation in fairly intuitive ways. The A72, like the A53-D, does a lot of things right and gives Chyron a real challenge in the market.
Quantel, meanwhile, has never had much luck with character generators. It released the Cypher several years ago, and although it has a virtual overkill processing system, it had then the clunkiest interface for a character generator I had ever seen. Getting one line of type on the screen was a huge ordeal, and getting the letters squished together right was yet another one. But it got a new lease on life when, for the 1988 Summer Olympics, NBC said `we’ll use it if you completely redesign the interface.’ It may well be better now, and its ability to use cutouts created on the Quantel Paintbox is a plus. Still on the minus side, though, is Quantel’s now comparatively tiny library of typefaces, and the heavy-duty charge they place on obtaining new ones. The machine itself ain’t cheap, either.
And Ampex has a character generator almost out there called the `Alex’–another goofy name. I know very little about it at this point, but I expect it will become a more `real’ product at this year’s NAB.
That concludes the Consumer Reports portion of our program. Well, not quite, because although I’ve rambled on at some length here about some of the tools that make clean graphics, I’ve neglected the ways and the means to get all these neat things layered together on one piece of videotape.
Yep, we’re back to layers again. And a pop quiz: what was that archeological term? Stratigraphy. That’s right. Now try and spell it.
While you’re trying, let’s discuss two basic paths, two basic roads toward `first-generation’ layered graphics. What exactly do we mean by `first-generation’, anyway? Well, of course, the term came into use when talking about working with videotape, because, again back in those early days, a recording, on quad or that newfangled one-inch, looked pretty darn good in its first generation–that is, when it went from the camera to that piece of tape and that was it. But then, as part of the post-production process you had to play back that piece of tape and record on another one–you know, to add dissolves and graphics and stuff? And every time you went through that playback-on-one-machine, record-on-another cycle, that was a generation. The picture quality degraded, some. Every time. And if you were talking about a sitcom recorded live on tape before a studio audience, you might be talking six or seven generations before it actually came into America’s living rooms. And doing that to camera video was one thing–but the crisp edges and sharp transitions of graphics showed the errors and degradation even more. (After all, what is a test pattern but…a graphic?) These errors–stop me if I’m telling you the obvious–come from the analog process–and that is, indeed, the great promise of digital: once you get it across that analog-to-digital doorway–once the picture becomes binary numbers–well, then you can do all kinds of stuff with it and nothing will degrade it until it crosses that doorway again, back out into the cold cruel analog world.
So that’s the challenge…keeping things in the digital world as long as possible, and minimizing the need to cross that threshold, from analog to digital–because the very process of crossing introduces some noise and error into the picture.
That’s why Abekas introduced the A62. And why Quantel introduced the Harry. I don’t know what’s worse, code numbers or goofy names. Two approaches to digital layering, each with their own advantages, and each found (sometimes side-by-side) at many major post-production houses.
The Abekas is, basically, a digital keyer placed between two hard-disk drives that simulate two videotape recorders. You record something on one `side’ of the A62, and then play back that `side’ and record on the other, while adding something new from the outside world. Once safely inside the A62, the video does not degrade, no matter how many hundreds of times it is passed back and forth between the two sides of the A62. The hard disks have a capacity of 50 seconds of composite NTSC video on each side, long enough for most chunks of animation, I’ve found.
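The ping-pong idea can be sketched in a few lines. This is a toy model, not Abekas code–frames here are just short lists of pixel values, and the `key_over` function is my stand-in for the keyer between the two sides:

```python
# A sketch of the A62's ping-pong layering: two digital "sides,"
# a keyer between them, and no generation loss however many passes
# you make. Frames are lists of pixel values; mattes run 0.0-1.0.

def key_over(background, foreground, matte):
    """Composite foreground over background through a key signal."""
    return [f * k + b * (1.0 - k)
            for b, f, k in zip(background, foreground, matte)]

side_a = [0.0, 0.0, 0.0, 0.0]     # start with black on side A
layers = [
    ([1.0, 1.0, 0.0, 0.0], [1.0, 1.0, 0.0, 0.0]),  # (video, matte)
    ([0.0, 0.0, 0.5, 0.5], [0.0, 0.0, 1.0, 1.0]),
]
for video, matte in layers:
    # play back one side, add a layer, record on the other side...
    side_b = key_over(side_a, video, matte)
    side_a, side_b = side_b, side_a   # ...then swap roles and repeat
# side_a now holds every layer, still effectively first-generation
```

The numbers get shuttled back and forth exactly as recorded, so the hundredth pass is as clean as the first–which is the whole point of the box.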
Meanwhile, at Quantel, they knew they had a good thing with the Paintbox, so they designed a layering machine–but–and this is a big but–using the Paintbox as the keyer, as the focal point between the layers stored off on those huge hard disk drives. This approach has certain advantages–it certainly is easy to do subtle keying, pasting, and retouching on the Paintbox–but the Paintbox (and thus the Harry) composites elements using the whole stencil thing I mentioned much earlier. That is, it needs to have everything on its hard disk drives–including the white key signal that defines what layer goes where–before you can actually say take strip `A’ here with matte `B’ there and put it over background `C’ here and store the whole thing off onto strip `D.’ To make a really strange analogy, you have to have all the ingredients in the refrigerator before you can begin to make sandwiches.
I have to say that I am more used to the Abekas approach, where it just catches whatever you toss in `on the fly’ from the outside analog world and lays it down as one more digital layer. And from a hardware standpoint, the smart guys at Abekas included software that made it interface to the CMX editing system as if it were two plain old tape machines–very clever, and a good way to think about it. Two tape machines and a switcher, and you go back and forth, but the video doesn’t degrade into mush. Until it leaves the A62.
Or until it leaves the Harry. And in fact, for the first year or so of these products’ existence, they both had the same drawback, sure to drive compulsive types like me up the wall. Once you got it into that perfect digital world, you never wanted it to leave, because if it did, you could never bring it back without losing some quality. Both manufacturers experimented with computer tape-drive backup systems, you know, but they were expensive, cumbersome, and had very little capacity. Both Quantel and Abekas seemed to be waiting for a digital tape format that they could interface with and pass binary numbers to–and get those numbers back from–perfectly.
Leave it to Sony, right? Well, Harry, as a component system–that is, one where the pictures were stored internally as R,G,B signals–got its D1 format first. But it wasn’t long before Sony’s composite NTSC format, D2, made a perfect match for the A62.
(I should explain that there is a component version of the A62 from Abekas, called the A64, but that begins to complicate things. For now, let’s stick with the A62–mostly because that’s what I use at the facility I use.) I’ve found that the trend seems to be this: if you’re fitting digital into an existing post-production suite, it’s best to stay composite, but if you want to build the ultimate component room from scratch, then component digital–D1–is really the cleaner way to go. And with D1, one new alternative I’m doing some studying on is the new Abekas A84 component digital switcher–a high-end unit that offers remarkable, super-subtle keying, color-correction, and multiple layering capabilities.
So now we’ve got a viable system, right? And now, it may relieve some of you to see, we have some visual aids, too. I want to show you a more-or-less block diagram of the setup I often work with. It’s a plain old composite post-production suite, controlled by a CMX editing system, that has entered a digital world.
[slide 1] Start with source material. This place has a Quantel Paintbox, a Chyron Scribe, Betacam SP and one-inch machines, an Ampex Century switcher, and two Sony D2 VTRs. And except for the initial pass, when you’re creating layered animation, you want everything to move. So, everything goes to and through the ADO in analog form. That’s an important point from an engineering standpoint. Everything is only as clean and as transparent and as tweaked as the ADO–which at this facility, has its good and bad days.
[slide 2] In the case of the Paintbox and Scribe, or any irregularly-shaped object that you want to fly through the ADO, the key signal goes into the ADO, too, and…
[slide 3] Analog NTSC video and a key signal go out of the ADO and into the Abekas A62. Here’s the doorway into the digital world. And once in the A62, we can add more video from these sources, through the ADO, over and over.
[slide 4]…and when we want to get that video out of the A62, we can, of course, record it–master it–on plain old one-inch or beta, or, preferably, transfer it digitally to the Sony D2 videotape machine–by the way, Abekas sells an add-on `black box’ option that makes this possible. And the neat thing is, if on another day you want to go back, or reload an intermediate layer, you can transfer it digitally back into the A62 and pick right up where you left off. Very powerful ability.
[slide 5] So this means we have, in this hybrid system, an analog pathway, subject to loss, noise, hum, and general signal degradation until the last stage–but it works, because it’s digital where it counts.
[slide 6] I guess in compact disk terms, this is an `A A D system.’ We could do better, though. There’s talk of a `black box’ interface between the ADO and D2 format that would make the link here between the ADO and the A62–or the D2s–a digital one…and that would make a perceptible difference.
[slide 7] And compare this to the Quantel Harry setup, an even more idealized system, because once you get source material into the Harry, either directly or through one of the Quantel picture manipulators, like the Mirage, or the Encore (which is, by the way, their ADO equivalent), the pathway is digital for both the video and the key signal into the Harry…
[slide 8] …and out of the Harry and to and from the Sony D1 component videotape machine, we’re again talking a no-loss digital pathway, using the component digital standard, known by the in crowd as `601’.
[slide 9] So the Harry/Encore/D1 combination is an `A D D’ pathway, except for stuff that starts right in the paintbox from scratch, which is then `D D D’, totally digital.
…but enough block diagrams. If you don’t have the basic idea of layering down now, I don’t know what would help, except maybe a videotape that goes through a handful of layers using the A62 setup I just showed you to give you an idea of how layers combine to make an animation, and, importantly, how the idea of a global move–an overall identical ADO camera move that is repeated for every pass of the individual layers–leads to a very dimensional feel from 2D animation. Fortunately, I have just such a videotape. Let’s take a look.
I hope you enjoyed at least some of that, and I hope it gave you some idea of how this kind of video stratigraphy works. It is a flexible and cost-effective approach to animation that, with the right kind of pre-production and the right kind of attention to detail, can be completed quickly and on budget.
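The global-move trick from that tape can be sketched in miniature. This is my own toy model, not anything an ADO actually runs–the `ado_move` function and its numbers are invented–but it shows the one essential point: the identical move is programmed once and replayed for every layer’s pass, so the layers stay locked together:

```python
# A sketch of the "global move": one camera move, programmed once,
# applied identically to every layer on its own pass. Layers here
# are just lists of (x, y) positions.

def ado_move(layer_points, frame, dx_per_frame=2.0):
    """The shared move: slide everything the same amount each frame."""
    return [(x + dx_per_frame * frame, y) for x, y in layer_points]

layers = {
    "background": [(0, 0)],
    "title":      [(100, 50)],
    "glint":      [(120, 40)],
}
# Render frame 10 of every layer with the very same move:
frame_10 = {name: ado_move(pts, 10) for name, pts in layers.items()}
# Each layer has shifted identically, so when the passes are stacked
# in the A62, the composite behaves like one solid, dimensional scene.
```

Vary the move per layer even slightly and the illusion falls apart–which is why the move is treated as global and repeated verbatim on every pass.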
And it’s my belief that knowing something about working in 2D animation gives you a big head start when you decide your project needs 3D graphics. First, it gives you a sense of whether you need or want to create an entire world in 3D–or whether you can take the more economical step of designing an element or a series of elements that can be blended in with other 2D animation.
Unlike my discussion of the Paintbox and the ADO and character generators, I’m not about to launch into a `Consumer Reports’ listing of equipment, advantages, and disadvantages. To an extent that’s because a lot of 3D work still lies out there on the cutting edge, where producers have developed their own custom software and/or hardware, or they’ve adapted commercial or scientific rendering packages for their own needs.
My axiom applies even more here: just because a place has a Wavefront system doesn’t mean that it can do great 3D work. And just because an operator knows how to run the software, it doesn’t mean he or she is an artist on the machine. Because 3D work takes a command of illustration, certainly a command of geometry and mathematics, and it doesn’t hurt to understand photography, lighting, and maybe furniture-making too.
Here’s the one-paragraph description of how most 3D animation works. Hang in there. It’s all based on creating mathematical representations of objects. This process, called digitizing, is basically the same thing as I described earlier when we were talking digital typography: a path around an object is described using curves and straight segments. Usually, this means a stat or xerox of the shape is taped down on a bitpad, and it’s traced by a pointer device with a crosshair, plotting points on a curve. Then, the shape is taken into three dimensions, either by extruding it–which is just what it sounds like–creating a thick triangular solid, for example, from three points. Thick letters from thin ones. Cubes from squares. Or in some cases, the shape is rotated about an axis, like a piece of wood on a lathe. Even curved shapes and spheres can be created, although they’re really not spheres–all objects are composed of flat surfaces, and in the case of a sphere, we’re talking about many, many tiny flat surfaces.
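Extrusion in particular is simple enough to write down. A toy sketch, not any 3D system’s real code–the `extrude` function is mine–showing how a flat digitized outline becomes a solid with a front face, a back face, and one side face per edge:

```python
# A toy sketch of extrusion: pull a flat digitized shape into depth,
# producing front and back faces joined by quadrilateral sides.

def extrude(outline, depth):
    """Turn a 2D outline into a 3D solid: front face at z=0,
    back face at z=depth, one four-cornered side per outline edge."""
    front = [(x, y, 0.0) for x, y in outline]
    back = [(x, y, depth) for x, y in outline]
    sides = []
    n = len(outline)
    for i in range(n):
        j = (i + 1) % n
        sides.append((front[i], front[j], back[j], back[i]))
    return front, back, sides

# Three digitized points become a "thick triangular solid":
front, back, sides = extrude([(0, 0), (4, 0), (2, 3)], depth=1.0)
# 3 front vertices, 3 back vertices, 3 flat side faces
```

Thick letters from thin ones works exactly the same way–each letter outline just has a lot more edges, and therefore a lot more side faces.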
In any case, an object is created, and just like our ADO planes earlier, they’re placed into three-dimensional coordinate space. That is, they’re in a place where we can look at them through our global viewpoint–our `camera’–and they can move about on their own coordinate paths. And in many animation projects, we’re talking about a lot of these individual objects. A representation of a building could have a triangular solid for the roof, vertical rectangular solids for pillars and windows, and so on. The ground is an object, too.
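The `coordinate space’ idea can be sketched in a few lines–again, illustrative names only, not any real system’s interface. Each object keeps its own vertices plus a position in the shared world the camera looks at:

```python
# Sketch of world coordinates: every object has its own vertices,
# plus a position in the shared scene. Hypothetical, for illustration.

def place(vertices, position):
    """Translate an object's vertices to its spot in world coordinates."""
    px, py, pz = position
    return [(x + px, y + py, z + pz) for x, y, z in vertices]

# The building from the talk: a roof solid sitting atop a pillar,
# each an independent object with its own place in the scene.
roof = place([(0, 0, 0), (1, 0, 0), (0.5, 1, 0)], position=(0, 3, 0))
pillar = place([(0, 0, 0), (0.2, 0, 0), (0.2, 3, 0), (0, 3, 0)],
               position=(0, 0, 0))
print(roof[0])   # (0, 3, 0): the roof's first corner, up in the world
```

Animating an object is then just changing its position (and rotation) a little on every frame along its coordinate path.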
These objects are assigned colors–in fact, each facet, or flat surface, of each of these objects can be assigned colors, or gradations of colors, or–and this is where it gets real interesting–they can be assigned the image of a two-dimensional painting, like a paintbox image. The image–a texture, say–can be mapped to the specified tiny facets making up a surface, and a flat paintbox painting can seem to have been seamlessly wrapped around a sphere or other dimensional shape. If the texture is meant to change (like `moving video’ in an ADO), the frames of texture must be stored, and each frame in sequence must be mapped onto the same shape for each frame in the animation.
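The wrap-a-painting-around-a-sphere trick boils down to a coordinate conversion. Here’s a hedged sketch of the standard spherical mapping–my own simplification, not what any particular paint or 3D system actually runs–that decides which pixel of a flat image lands on a given point of the sphere:

```python
import math

# Illustrative sketch of texture mapping: a point on the sphere is
# converted to a (u, v) spot in [0,1]^2, which picks a pixel from the
# flat painting. Simplified for the example.

def sphere_uv(x, y, z):
    """Map a point on a unit sphere to (u, v) texture coordinates."""
    u = 0.5 + math.atan2(z, x) / (2 * math.pi)   # around the equator
    v = 0.5 - math.asin(y) / math.pi             # pole to pole
    return u, v

def sample(image, u, v):
    """Pick the texture pixel for (u, v); image is rows of pixel values."""
    h, w = len(image), len(image[0])
    col = min(int(u * w), w - 1)
    row = min(int(v * h), h - 1)
    return image[row][col]

# A tiny 2x2 'painting' wrapped around the sphere:
painting = [["red", "blue"], ["green", "yellow"]]
print(sample(painting, *sphere_uv(1.0, 0.0, 0.0)))   # 'yellow'
```

Do this for every facet and the flat painting appears seamlessly wrapped; for `moving video’, you swap in the next stored frame of texture before rendering each frame of animation.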
Now let’s talk lighting. Because we’re talking about a simulation of a three-dimensional world–that is, we want the computer to display what a certain object of a certain color would look like viewed from a certain angle–we also have to tell it what kind of light is hitting it, how bright, and from what angle. Most systems allow you to assign any combination of point lights or ambient lights–which are pretty much what they sound like. They can be any color, too.
So hey, it’s easy. Now all the computer has to do is calculate the moves of the objects you’ve specified in relation to the global camera move you’ve specified, and of course then calculate how bright, how dark, what color each pixel–each picture element of a displayed object is, based on the lighting and the mapped textures. Phew.
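At the heart of all that per-pixel number crunching is a rule like this one–a sketch of the standard Lambert diffuse calculation with an ambient term, using made-up light values, not the shading code of any specific workstation:

```python
import math

# Sketch of the brightness calculation for one facet: an ambient term
# plus a point-light term that falls off as the facet turns away from
# the light (the classic Lambert rule). Values are hypothetical.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def shade(normal, light_dir, ambient=0.2, light=0.8):
    """Brightness of a facet under one ambient and one point light."""
    n = normalize(normal)
    l = normalize(light_dir)
    diffuse = max(dot(n, l), 0.0)   # facets facing away get no point light
    return ambient + light * diffuse

# A facet facing straight into the light is fully lit...
print(shade((0, 0, 1), (0, 0, 1)))   # 1.0
# ...while one edge-on to it gets only the ambient glow.
print(shade((1, 0, 0), (0, 0, 1)))   # 0.2
```

Now imagine running something like this–times every light, times every color channel, times every visible facet, times every pixel–for all 60 fields of every second of animation, and the rendering times below start to make sense.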
Here’s the catch. At this point, that’s too much number crunching to happen in real time. But you knew that. That’s why these workstations–Wavefront, Alias, Silicon Graphics, Pixar, whatever–display these objects at first in wireframe form, in low-res simulation, while you work at positioning things and locking down the move. Then and only then do you hit the big button and go out for coffee as each frame–excuse me, each field of animation–60 of them for each second of finished product–is calculated and rendered–that means displayed, pixel by pixel, on a framebuffer (just like a stillstore), which is then usually recorded, field by field, to a non-keying version of the Abekas A62 (the Abekas A60).
In fact, for complex animation, you go out for a lot of coffee. Maybe you go out and get a good night’s sleep, or go out and have a good weekend while it cooks. That’s the incredibly frustrating thing about 3D–you have to wait for it to render–wait to see if you did it right, or whether that red light should have been a little higher, or whether you should have made that logo slow down a little more as it came to rest. It’s a big pain, and 3D animators look to technology, the cutting edge, with bated breath to find faster machines, more memory, more storage, new tricks–anything to speed the process along.
Is it worth it? Unfortunately, yes, it is. When 3D is done well, the results can be refreshing–and amazing. There’s a lot of good stuff out there, but I called two companies I’ve done work with to get a couple of things to show you. One of them, Pacific Data Images, is a pioneer in the field, a bunch of mellow Californians who work with proprietary software on Ridge minicomputers–last I checked–people who have a great sense of design, color, and movement and, of late, have done a lot of work pushing the edge of 3D into more and more real simulations of reality. You’ll also see some stuff here from Crawford Design/Effects in Atlanta, where they’ve got their Wavefront system cooking around the clock for a variety of commercial clients and broadcast stations. They’re newer to the game, and I’d say a bit hungrier. They’re also growing very, very fast. So here’s three or four PDI pieces, followed by three or four Crawfords.
There is a growing trend for systems–for equipment–to try and become `everything in a box’–after all, it’s all digital stuff, all chips tap dancing with numbers, right? An aggressive supplier of 3D systems, Symbolics, is making their paint software–which is quite Quantel Paintbox-esque–a major selling point. You get one system which is your 2D and your 3D workstation. And Digital F/X of California is offering an all-in-one digital switcher/paint/DVE/layering combination called the `Composium’–which I’ve seen the literature on, but I’d like to see more. The advantage of these combination systems is obvious–but the disadvantages–at least at this early stage of the game–are that the individual parts may not each be the best at the job at hand. For example, is the character generator portion the most flexible, with the most fonts available? I’d want to know before I make a major purchase like that.
Beyond combinations, what I’d like to see more of is a trend toward establishing standards where equipment from different manufacturers can talk to each other transparently. It’s fun–and productive–to have an Abekas A62 and a Sony D2 working together under CMX control. Here’s hoping that standardized digital file formats, machine control, and coordinate systems will make things easier as the cutting edge rolls on.
In the 3D world, that standardization is likely to come–slowly–with the implementation of the RenderMan interface, a description language developed by Pixar that would allow 3D systems to share objects, numbers, moves, lighting, and more. It means I could work out the rough details of a 3D move on my Macintosh at home–and then take it to a service bureau–that is, a graphics house–for fine-tuning and final rendering. I really will be doing this stuff from my home office in a couple of years–it’s an exciting thought.
I want to leave you with just a smidge more than optimism for the future, though. As you sit down to think about design projects, I’d like you to keep in mind that the best technology can still be used to complete absolute garbage–nicely lit, 3D absolute garbage, sometimes. So remember the importance of good design, and think about this list–my top ten graphic suggestions for projects big and small.
- Start with the cleanest possible sources.
- Always use gradations instead of solid surfaces.
- Make your drop shadows go the same direction.
- Keep type white or high-luminance. Add color in lines, edges, rules.
- Keep your background in the background.
- Use big elements: big type, bold images.
- Save intermediate layers and components so you can go back.
- Use textures from the real world.
- Type kerning (squish): either very loose or very tight, not wimpy.
- Use smooth movements that start and end smoothly. Don’t rush the moves.
That’s the list, and I’ll be glad to talk about any of them in more detail in the time we have remaining.