Encoding Animation as Video Data

As an animator, you can’t get around working with digital video formats, whether during the production process or for the final distribution. Sooner or later, you need to store your moving pictures on a hard drive, DVD, Blu-ray or tape. There is also a lot of confusion about what resolution, frame rate and color depth are good to use when setting up a project. This article gives you some thoughts on which formats and codecs you could choose for which occasion.

In this article I will try my best to answer the following frequently asked questions:

What are good settings for my project?

Can’t I just use a video standard preset?

What do I use to bring rendered footage from one program to another?

How do I need to export my film for sound editing?

How do I encode my movie once it’s done?

What are good settings for my project?

The first step toward the image quality you are aiming for comes before you begin creating. First of all, you have to choose an aspect ratio: Most movies nowadays are made in 16:9, you know, this widescreen thing. There is an even wider one called CinemaScope and that old squarish one that nobody uses anymore called 4:3 (it’s a shame… you can do great stuff with 4:3). One day you might want to do something totally crazy like 9:16 (wouldn’t that be cool?). And if you work in the web or gaming industry, someone might request an asset from you in an unusual size – you need to know that as early as possible.

In most cases, you have to decide right from the beginning which is the highest quality you want your pictures to have, so you can set up your equipment and project files accordingly. If you shoot a stop motion film in SD, there is no way you can bring it up to crisp, sharp Full HD quality. If you shoot HD, it would be a shame to edit it in an SD project setup. And even if you use calculation-based animation like computer generated 3D (CGI) or vector drawings (Flash, ToonBoom), it can’t hurt to know your level of quality, as it affects the resolution of bitmap textures and the amount of detail you need in general. Keep in mind that high resolutions often require a lot more detailed work, as well as a lot more time and storage space for rendered data. If you aim for internet distribution, there is no reason you should go higher than Full HD. If you work with live action footage, that should determine the upper end of your resolution, framerate, etc.
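To get a feel for what "storage space for rendered data" means in practice, here is a small back-of-the-envelope sketch. The figures are illustrative only – real file sizes vary with the image format's overhead and any lossless compression applied:

```python
# Rough storage estimate for uncompressed rendered frames.

def frame_size_bytes(width, height, bits_per_channel=16, channels=4):
    """Uncompressed size of one frame (RGB plus alpha by default)."""
    return width * height * channels * bits_per_channel // 8

def sequence_size_gb(width, height, fps, seconds, **kwargs):
    """Total size of an uncompressed image sequence, in gibibytes."""
    frames = fps * seconds
    return frames * frame_size_bytes(width, height, **kwargs) / 1024**3

# One 16-bit RGBA Full HD frame:
print(frame_size_bytes(1920, 1080))                        # 16588800 bytes, ~15.8 MB
# A 5-minute short at 25 fps:
print(round(sequence_size_gb(1920, 1080, 25, 5 * 60), 1))  # ~115.9 GB
```

Numbers like these are why your target resolution needs to be decided early: doubling both dimensions quadruples every figure above.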

Generally, you want as few format changes as possible, and you want to avoid scaling up under any circumstance, because it can make the pixel structure visible and cause miscalculations, artifacts and a loss of color values.

Accordingly, you need to think about your framerate. I recommend using 24 fps or 25 fps, as they cover a lot of cinema and TV standards. Keep away from the horrible 29.97 fps of NTSC (what kind of number is that anyway?).
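If you are curious what kind of number 29.97 actually is: it is the exact ratio 30000/1001, a leftover from analog color TV engineering. A quick sketch shows why it is such a pain compared to whole-number rates:

```python
from fractions import Fraction

# NTSC's "29.97 fps" is really the exact ratio 30000/1001.
ntsc = Fraction(30000, 1001)
print(float(ntsc))  # 29.97002997002997

# Real frames in one hour of NTSC video vs. naive "30 fps" timecode:
real_frames = ntsc * 3600   # 108000000/1001, about 107892.1 frames
naive_frames = 30 * 3600    # 108000 frames
print(round(naive_frames - float(real_frames), 1))  # ~107.9 frames of drift per hour
```

That drift of roughly 108 frames per hour is what drop-frame timecode exists to paper over – a complication that 24 or 25 fps projects simply never have.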

Can’t I just use a video standard preset?

A lot of your software will offer you video standard presets, but you should think twice before using them. They are helpful if you work with live action footage, but for animation, they can cause some problems. Your animation data doesn’t have some of the technical limitations that the live action people have to battle with. For example, some video standards (PAL, NTSC, even the HD ones) work with non-square pixels. But your animation most likely has nice square pixels, because that is the computer standard. Now, what happens if you shovel your precious square-pixeled pictures into an editing program that assumes a video standard? In the worst case, they get stretched. But no modern video software is that stupid anymore – it does a conversion for you. Nice, eh? But you have no control over how exactly it does it, and in any case you lose a tiny bit of your picture. If your pixels are slightly wider now instead of square, there will be fewer of them in total = loss of picture information = evil.
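To make the non-square pixel business concrete, here is a small sketch of how a stored non-square-pixel frame maps to square computer pixels. The pixel aspect ratio (PAR) values are the common ones for PAL; PAL stores 720×576 regardless of whether the picture is 4:3 or 16:9, and the PAR does the stretching on display:

```python
from fractions import Fraction

def display_width(stored_width, par):
    """Square-pixel width a non-square-pixel frame is shown at."""
    return int(stored_width * par)

PAL_4_3 = Fraction(16, 15)   # pixel aspect ratio for PAL 4:3
PAL_16_9 = Fraction(64, 45)  # pixel aspect ratio for PAL 16:9

print(display_width(720, PAL_4_3))   # 768  -> the 768x576 square-pixel equivalent
print(display_width(720, PAL_16_9))  # 1024 -> the 1024x576 square-pixel equivalent
```

Going the other way – squeezing your square-pixel render into 720 stored columns – is exactly the conversion the editing software does behind your back, and where the small loss of picture information comes from.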

Another video standard issue that you should avoid as long as possible is interlacing. This is another dinosaur from the days when projecting a TV picture couldn’t be done any other way. Instead of showing the whole picture immediately, TV devices could only project every other line, and consequently had to draw the even-numbered lines first and then, in a second pass, the odd-numbered ones (or vice versa). Again, on a computer you don’t need this; modern displays show the whole picture immediately. If you bake interlacing into your picture at any point of your workflow (which you could), it will look weirdly rippled in fast motion, and further editing from there on could cause ugly flickers. HD video standards don’t require interlacing anymore. And if you absolutely need it, your video software can calculate it in sufficient quality later on.
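A tiny sketch of what interlacing actually does: each displayed frame is woven from two fields, one holding the even scanlines and one the odd ones. The string "scanlines" below stand in for real rows of pixels:

```python
def split_fields(frame):
    """Split a frame (a list of scanlines) into even and odd fields."""
    return frame[0::2], frame[1::2]

def weave(even_field, odd_field):
    """Recombine two fields into one full frame."""
    frame = []
    for e, o in zip(even_field, odd_field):
        frame += [e, o]
    return frame

frame = ["line0", "line1", "line2", "line3"]
even, odd = split_fields(frame)
print(even)              # ['line0', 'line2']
print(weave(even, odd))  # ['line0', 'line1', 'line2', 'line3']
```

In real video the two fields are captured a split second apart, so in fast motion the woven frame shows the "comb" ripple described above – which is exactly why you don’t want it baked into your animation.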

Some video standards you can base your projects on (pixel sizes are for square pixels):

| Video standard | Resolution in square pixels | Aspect ratio | Standard frame rates (fps) | Annotations |
| --- | --- | --- | --- | --- |
| NTSC | 720×540 / 864×486 | 4:3 / 16:9 | 29.97 | American SD TV standard. Use progressive frames while you animate; interlace and convert to non-square pixels later. Worst video standard. |
| PAL | 768×576 / 1024×576 | 4:3 / 16:9 | 25 | SD TV standard used in many countries. Use progressive frames while you animate; interlace and convert to non-square pixels later. Good SD video standard. |
| HD 720p | 1280×720 | 16:9 | 24, 25, 30, 50, 60 and more | A small HD TV format (not much better than PAL). Use progressive frames! Interlacing and non-square pixels are possible, though. |
| Full HD 1080p | 1920×1080 | 16:9 | 24, 25, 30, 50, 60 and more | Choose this standard for HD projects. Use progressive frames! Interlacing and non-square pixels are possible, though. |
| Digital Cinema 2K Flat | 1998×1080 | Widescreen 1.85:1 | 24 or 48 | A widescreen standard for digital cinema projectors. Stereoscopic projection only at 24 fps. |
| Digital Cinema 2K Scope | 2048×858 | CinemaScope 2.39:1 | 24 or 48 | A CinemaScope standard for digital cinema projectors. Stereoscopic projection only at 24 fps. |
| Digital Cinema 4K Flat | 3996×2160 | Widescreen 1.85:1 | 24 | This is a damn huge one for high-end professional needs. Not supported for stereoscopic projection in cinemas. |
What do I use to bring rendered footage from one program to another?

Usually, you at least have to bring your animation from your animation software to your editing software to cut all your shots together. In some workflows, you might even have a couple of programs involved (e.g. one for animation, one for compositing and one for editing). And most likely they can’t read each other’s file formats – if they do, you should use that in most cases. If you work with vector graphics, try keeping them vectors for as long as possible (After Effects and Premiere can import SWF files from vector-based drawing software like Flash and ToonBoom).

But often you can’t get around exporting and importing rendered image data from software to software. This data has to be exported carefully, because you want to lose as little quality as possible in the transition. More quality means high resolution and high color depth.

The resolution during your workflow should consistently be the resolution of the best quality you are aiming for. If you do a Full HD production, your picture resolution has to stay Full HD as you go from program to program.

If you plan on manipulating or grading the colors of your footage, you need to render it with at least 16 bits per color channel. For any color adjustments, you need as many shades as possible. That way you can, for example, bring back very bright or very dark parts of your image that would otherwise be burnt out and lost. You also avoid visible steps in gradients. Once all the adjustments are done, 8 bits per color channel are fine in most cases. But beware: once you go down to 8 bits, you can’t go up again!
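A quick sketch of why bit depth matters for grading. The first part just counts the shades per channel; the second simulates a crude adjustment (darken to 25%, then brighten back) in 8-bit integer math and shows how neighboring shades collapse onto each other – the source of visible banding:

```python
# Shades per channel at different bit depths:
for bits in (8, 10, 16, 32):
    print(bits, "bit ->", 2 ** bits, "shades per channel")
# 8 bit -> 256 ... 16 bit -> 65536 ...

# Darken a pixel value to 25% and brighten it back, in 8-bit integers:
value = 200
crushed = (value * 25) // 100     # 50
restored = min(255, crushed * 4)  # 200 -> looks fine for this one value...

# ...but the inputs 200 through 203 all land on 50 and all come back as 200:
print({v: min(255, (v * 25 // 100) * 4) for v in (200, 201, 202, 203)})
```

In 16-bit math, those four shades would stay distinct through the same round trip – which is exactly why you grade at 16 bits and only drop to 8 bits at the very end.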

I recommend using image sequences for saving rendered footage during production. If you spot 10 frames with a render mistake, you only need to replace those 10 frames instead of rendering the whole thing again. The most important thing is that you don’t use lossy image compression, because that would cost quality and give you artifacts. A good lossless image format to use is TIFF. It can hold up to 32 bits per color channel, plus an alpha channel in case you need transparency (e.g. for light and shadow passes). Make sure to use the ZIP compression to reduce the file size – it’s still lossless. PNG saves even more disk space and can be used for up to 16 bits per color channel plus alpha.
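Image sequences work because each frame is its own file with a zero-padded frame number, so fixing a few bad frames just means re-rendering and overwriting those files. A small sketch (the shot name and frame range here are made up for illustration):

```python
def frame_name(shot, frame, ext="tif"):
    """Build a zero-padded sequence filename, e.g. shot010.0120.tif."""
    return f"{shot}.{frame:04d}.{ext}"

# Say only frames 120-129 had the render mistake:
to_redo = [frame_name("shot010", f) for f in range(120, 130)]
print(to_redo[0], "...", to_redo[-1])
# shot010.0120.tif ... shot010.0129.tif
```

Re-render exactly those ten files, drop them into the sequence folder, and the editing or compositing software picks up the fix – no need to touch the other thousands of frames.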

However, if you want to use one single video file instead of an image sequence, all uncompressed and lossless codecs are fine during production (keep an eye on the bits per color channel), but they take a lot of disk space. A nice compressed but lossless video format is the QuickTime Animation codec.

Notice that these files are not meant for playback outside of an editing or compositing software. Most media players can’t play back such huge files smoothly. Depending on your hardware, even your editing software might have to render a low quality version for you to work with.

How do I need to export my film for sound editing?

As the sound guys don’t care about super-high image quality, you can use a compressed video format. But there is one thing that is vital for sound editing: Every frame needs to contain the full image, so that every sound can be positioned on an exact frame. Have you ever noticed that you can’t jump to an exact frame on a DVD or on YouTube? It always lands you slightly before or after where you scrolled to. That’s because the frame you landed on doesn’t contain the full image. In those codecs, only a keyframe has the complete data, and the following frames only contain the changes from there on until the next keyframe comes. When you scrub through the movie, playback can only resume at a keyframe. This is too inaccurate for working with sound.
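The seeking behavior described above can be sketched in a few lines. With a keyframe every N frames (the interval of 12 below is just an illustrative value), seeking to an arbitrary frame snaps back to the last keyframe before it:

```python
def nearest_seek_point(frame, keyframe_interval):
    """Last keyframe at or before the requested frame."""
    return (frame // keyframe_interval) * keyframe_interval

# Long-GOP codec with a keyframe every 12 frames:
print(nearest_seek_point(100, 12))  # 96 -> you land 4 frames early
# All-keyframe codec (every frame is a keyframe):
print(nearest_seek_point(100, 1))   # 100 -> frame-accurate seeking
```

This is why the sound reference export needs an all-keyframe format: with an interval of 1, every frame is a valid seek point.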

QuickTime has a couple of nice codecs, like the JPEG and PNG ones, where every frame contains the full image – it’s like an image sequence in one file. Oh, and of course you can use image sequences too, if the sound editing software supports them (here you can use a lossy format like JPEG to keep the files small). As this version is intended to be a reference during sound editing only, don’t use it for anything else in your workflow! When sound editing is done, you get a high quality sound file (WAV or AIFF) that you can bring together with the high quality version of your film.

How do I encode my movie once it’s done?

One fine day you’ve made it. Your film is done, completely animated, has gone through compositing and is assembled in your editing software – which should hold it in high resolution, with square pixels, progressive frames (no interlacing) and the highest possible color depth. Once again, don’t use a video preset, but custom settings that fit your source material, with none of the limitations imposed by cameras or TV standards.

One of the first things you should do at this point is make a master copy of your film. This master copy should be exported like the transfer exports during your workflow, using the original resolution, framerate and color depth. As you no longer need to replace single frames, you can put it together with the uncompressed sound in one file, an uncompressed QuickTime or AVI (with at least 8 bits per color channel). Double-check that resolution and frame rate match and that all the TV standard stuff is deactivated. One day, all your source files might be lost or deleted, and only this master copy can be used to generate further versions of your film. Export it carefully and store it well on several hard drives.

You also might want to make tapes of it, like HDCAM or Digital Betacam (which is meant for SD material), which are required for a lot of festivals and, of course, television. Exporting for cinema is a whole different chapter. At least digital projectors are standard nowadays (running DCPs, which hold the movie as a JPEG 2000 and PCM WAV stream), so you don’t need to make a copy on actual film. Let the people who own the equipment assist you in generating those versions.

Now, for the fun part: Sharing your video with the world. This is the time to throw yourself into the arms of your editing software’s presets to make a DVD and Blu-ray version. You still should change the pesky little lower or upper field interlacing option to progressive, as most DVD players and monitors can handle that nowadays.

For distribution on the internet, you could encode your video in H.264. As many web players (like YouTube and Vimeo) support Full HD, it can’t hurt to keep the resolution that high. If you limit the bitrate to a maximum of 10 Mbps, you should still get a handy file size.
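"Handy file size" is easy to estimate: bitrate times duration, plus a little for the audio track. A sketch, with a made-up 192 kbps audio bitrate as an illustrative assumption:

```python
def encode_size_mb(video_mbps, seconds, audio_kbps=192):
    """Estimated encode size in megabytes for a given bitrate and duration."""
    video_bits = video_mbps * 1_000_000 * seconds
    audio_bits = audio_kbps * 1_000 * seconds
    return (video_bits + audio_bits) / 8 / 1_000_000

# A 5-minute short at 10 Mbps H.264 with 192 kbps audio:
print(round(encode_size_mb(10, 5 * 60)))  # ~382 MB
```

Compare that to the roughly 116 GB the same five minutes would occupy as uncompressed 16-bit frames, and you can see what the web encode is trading away – and why you keep the master copy around.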

These guidelines are only meant to give you an overview about the decisions you have to make while handling digital video data for animation. This article is only supposed to advise you on what to look for in the different steps of your production. Video encoding is a complex topic, and I described many things here very, very superficially. Of course, the exact settings would always depend on the individual project and workflow, and should constantly be kept in mind.
