Designing for HD: An Essential Checklist

Many motion graphics artists are tackling their first high-definition jobs. In some respects, HD is just like SD, only larger. However, HD also comes with a number of issues that can throw some major curves at you. As with all problems-in-waiting, it’s best to solve them before you start the job, rather than when you think you’re almost finished. Here are the questions you need to ask your clients before your next HD job, and the technical and artistic implications of the answers you may get.

Frame rate issues

With SD video, the frame rate is dictated by the format: 29.97 fps (frames per second) for NTSC and 25 fps for PAL. The video is also probably interlaced. If the footage originated on film at 24 fps but is going to be played at 29.97 fps, chances are it was slowed down to 23.976 fps and then had 3:2 pulldown applied in a pattern that spreads every 4 film frames across 10 video fields.
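For those who like to see that cadence spelled out, here is an illustrative Python sketch; the helper function and the A-D frame labels are mine, not part of any toolset:

```python
def pulldown_32(film_frames):
    """Spread film frames across video fields in a 2:3 cadence.

    Each group of 4 film frames (A, B, C, D) becomes 10 video fields:
    A gets 2 fields, B gets 3, C gets 2, D gets 3 -- the "3:2 pulldown".
    """
    cadence = [2, 3, 2, 3]  # fields contributed by each frame in a group of 4
    fields = []
    for i, frame in enumerate(film_frames):
        fields.extend([frame] * cadence[i % 4])
    return fields

# 4 film frames become 10 video fields (5 interlaced video frames)
print(pulldown_32(["A", "B", "C", "D"]))
# ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
```

Note how frames B and D each straddle two video frames; that mixing of film frames within a video frame is exactly what produces the interlacing artifacts discussed below.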

However, the governing body that set the HD standards, the Advanced Television Systems Committee (ATSC), allows HD frame rates of 23.976, 24, 29.97, 30, 59.94, and 60 fps progressive scan.

HD Frame Rates and Sizes (Table 1)

Size          Rates
1920 x 1080   23.976, 24, 29.97, or 30 fps progressive; 29.97 or 30 fps interlaced
1280 x 720    23.976, 24, 29.97, 30, 59.94, or 60 fps progressive

To complicate matters further, the 29.97 and 30 fps variants may also be interlaced or progressive (Table 1). Therefore, the first question to ask the client is: What frame rate should I use for the final deliverable I hand you? That’s the rate you should use when building graphics animations (in other words, what to enter for the composition’s or sequence’s frame rate). If you are going to deliver an interlaced file, HD is always upper-field first, in contrast to the lower-field first order of DV.

The second question you need to ask is: What frame rate is the footage you’re giving me? This is more devilish than you may think. Quite often, a studio won’t deliver footage as a QuickTime or AVI movie with the frame rate already embedded; it will come as a sequence of TIFF, SGI, or even Cineon DPX frames, with no inherent frame rate attached. It will be up to you to then assign the correct frame rate when you import the footage into a program such as Adobe After Effects (Figure 1).

Even if you receive footage as a QuickTime or AVI file, you can’t necessarily trust the frame rate embedded in it. Not every HD deck automatically detects the frame rate a tape was shot at; some will play it back at a different speed. Verify that the frame rate was transcribed from the tape, and preferably from the shooting notes themselves. (And if you are the shooter, please remember to mark these details on your tapes.)

In most cases, the answer to these first two questions for the jobs I’m working on is 23.976 fps, progressive scan. However, it is worth double-checking, as some cameras can shoot at either 24 or 23.976 fps. The 0.1 percent difference causes audio synchronization errors that become noticeable within a minute, and they need to be corrected by speeding up or slowing down the audio track. For example, if the audio track was meant to accompany 24 fps footage, but you are conforming all of your footage to 23.976 fps for final delivery, you need to slow down, or stretch, the audio track to 100.1 percent of its length (some software thinks in terms of speed rather than stretch; in that case, set the speed to 99.9 percent).
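Those stretch and speed figures fall out of the ratio between the two frame rates; a quick Python check:

```python
# Conforming 24 fps material to 23.976 fps: everything plays 24/23.976
# times slower, so the audio must be stretched by the same ratio.
FILM = 24.0
NTSC_FILM = 24000.0 / 1001.0        # "23.976" fps, expressed exactly

stretch = FILM / NTSC_FILM          # duration multiplier: 1.001
speed = NTSC_FILM / FILM            # rate multiplier: 0.999

print(f"stretch to {stretch:.1%} of original length")  # 100.1%
print(f"or set speed to {speed:.1%}")                  # 99.9%
```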

Beware of complacency! On one recent job, virtually all of the dozens of clips we received (delivered as SGI sequences) were at 23.976 fps, except for one, which was at 29.97 fps with 3:2 pulldown added. If footage that’s supposed to be progressive scan (such as all 23.976 or 24 fps footage) shows the telltale “comb teeth” of interlacing on moving objects (Figure 2), you know something is wrong. Set the field order to upper-field first and ask your software to detect the pulldown sequence. And don’t automatically trust what it says: Manually step through the resulting footage to make sure you don’t see interlacing artifacts. In After Effects, Option+double-click (Mac) or Alt+double-click (Windows) on the footage item in the Project window to open it in a special Footage window, and use the Page Up and Page Down keys (above the normal cursor arrows) to step through several frames, making sure you don’t see those artifacts. If you do, go back and try different pulldown phases until the artifacts disappear.

Another important frame rate issue is smoothness of motion. When objects move, you see their new positions only 40 percent as often at 23.976 fps progressive as you would at 29.97 fps interlaced (23.976 images per second versus 59.94 fields per second). That means formerly smooth motion can take on a strobing appearance.

The easiest solution to this problem is to add motion blur. If your program doesn’t support this, or if you received sources that weren’t rendered with motion blur, you may need to add it using a plug-in such as RE:Vision Effects’ ReelSmart Motion Blur. The downside of this added blur is that you’ll lose some clarity on items such as fast-moving text (Figure 3). You may need to back off on the motion blur amount to find a compromise between smoothness and readability. Render some tests, and run them by the client before delivering the final.

Frame size and bit depth issues

Just as there are a wide variety of legal frame rates in HD, there are a variety of sizes to contend with as well. The standard HD sizes are 1920 x 1080 pixels and 1280 x 720 pixels. The larger size is far more common, but again, check to be sure, and don’t assume all of your source files are going to come in the same sizes.

In addition, some hybrid “production” sizes have emerged. A 1920 x 1080 HD frame has nearly six times more pixels than a typical 720 x 486 pixel SD frame, which can mean it takes up to six times as long to render (although it’s not always that bad; if your project is at 23.976 fps, you have to render only 40 percent as many frames as you would with a 29.97 fps interlaced project). That large frame size is also difficult to display comfortably on most monitors, and requires more bytes to store and move around a network. Some stations have started using a “half HD” size of 960 x 540 pixels, which they then scale down slightly for their SD broadcasts, and double for their HD feeds. Don’t be shocked if you receive a request to supply graphics at this size, or even the square-pixel widescreen SD sizes of 864 x 486 (NTSC) or 1024 x 576 (PAL).
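A quick back-of-the-envelope check of those render-load numbers in Python:

```python
# HD vs. SD pixel counts, and how many images a 23.976p project renders
# compared with rendering 59.94 fields per second for 29.97i.
hd = 1920 * 1080   # 2,073,600 pixels per frame
sd = 720 * 486     #   349,920 pixels per frame
print(f"pixel ratio: {hd / sd:.2f}x")   # ~5.93x

frames_ratio = 23.976 / 59.94   # progressive frames vs. interlaced fields
print(f"images to render: {frames_ratio:.0%} as many")  # 40%
```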

But wait, there’s more. The one silver lining in the ATSC specification was that all of the higher-resolution formats used square pixels. Alas, that last refuge has been taken away from us by the HDV and DVCPRO HD formats. In HDV, a “1920 x 1080” frame is actually captured at a size of 1440 x 1080; the pixels must be stretched horizontally by a factor of 1.333 to become square again. DVCPRO HD uses the same size for PAL frame rate projects (25 frames per second, interlaced), but a different size, 1280 x 1080 with a pixel aspect ratio of 1.5, for NTSC frame rate projects.

When a 1280 x 720 frame is called for, DVCPRO HD captures it at 960 x 720 pixels, also requiring a horizontal stretch of 1.333 to make the pixels square. Some software, such as Apple Motion 2 (Figure 4), supports these sizes and manages them automatically, but not all programs do yet. You may need to perform these stretches manually in the short term.
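The stretch math is straightforward; here is an illustrative Python sketch (the helper function is mine, not part of any editing package):

```python
def display_size(stored_width, stored_height, par):
    """Convert a stored (anamorphic) frame size to its square-pixel
    display size by stretching horizontally by the pixel aspect ratio."""
    return round(stored_width * par), stored_height

# HDV / DVCPRO HD storage sizes and their pixel aspect ratios
print(display_size(1440, 1080, 4 / 3))  # (1920, 1080): HDV; DVCPRO HD at PAL rates
print(display_size(1280, 1080, 1.5))    # (1920, 1080): DVCPRO HD at NTSC rates
print(display_size(960, 720, 4 / 3))    # (1280, 720):  DVCPRO HD 720p
```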

There are other size implications beyond pixel dimensions. Along with larger frames, HD projects are usually captured and rendered at greater bit depths. Whereas a 10-bit YUV capture and output was often considered a luxury in SD video, it is common in HD, and some systems support 12-bit YUV. Ask the clients what bit depth they expect for delivery: Anything over 8 bits means you need to work in at least 16-bit RGB to render these greater-bit-depth files. Yes, that means longer render times (and more disk space, etc.). On the plus side, working at this greater depth often cures many issues with banding and posterizing.
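To see why the extra bits tame banding, count how many distinct code values a subtle gradient actually gets at each depth. This Python sketch is my own illustration, not anything from an editing package:

```python
# A gradient spanning only 10% of full brightness has just ~26 distinct
# code values in 8-bit, but thousands in 16-bit.
def distinct_levels(fraction_of_range, bits):
    """Number of distinct code values available to a gradient that spans
    the given fraction of full scale at the given bit depth."""
    return int(fraction_of_range * (2 ** bits - 1)) + 1

print(distinct_levels(0.10, 8))   # 26   -> visible banding steps
print(distinct_levels(0.10, 16))  # 6554 -> effectively smooth
```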

You’ll need higher-resolution sources to fill these larger frames, which requires capturing and scanning at a larger size than you have before. But what if that crucial shot or photo isn’t available at a higher resolution? Scale it up, but carefully. If After Effects has an Achilles’ heel, it is scaling up objects: Sharp edges can start to look jagged once you get past 125 percent or so. I’ve been using the Resizer plug-in, part of the Anarchy Toolbox set from Digital Anarchy, for this task with good results (Figure 5). Resizer offers several algorithms with varying degrees of sharpness or smoothness; try them and see which one looks best on a particular shot (I tend toward the two Mitchell-Netravali algorithms; Bicubic and Gaussian work well on soft material). Apple Shake is also famous for the quality of its scaling.
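Why does the algorithm matter? It comes down to how the new in-between pixels are interpolated. This toy, one-scanline Python sketch (far cruder than Resizer’s or Shake’s filters) contrasts nearest-neighbor resampling, which keeps hard jaggy edges, with linear interpolation, which blends them:

```python
def upscale_nearest(row, factor):
    """Nearest-neighbor upscaling: repeats samples, keeping hard (jaggy) edges."""
    return [row[int(i / factor)] for i in range(int(len(row) * factor))]

def upscale_linear(row, factor):
    """Linear interpolation: blends neighboring samples, smoothing edges."""
    out = []
    for i in range(int(len(row) * factor)):
        x = i / factor
        j = min(int(x), len(row) - 2)   # index of the left-hand neighbor
        t = x - j                       # blend weight toward the right neighbor
        out.append(row[j] * (1 - t) + row[j + 1] * t)
    return out

edge = [0, 0, 255, 255]  # a hard vertical edge, one scanline
print(upscale_nearest(edge, 2))  # the edge stays a hard step
print(upscale_linear(edge, 2))   # intermediate values soften the step
```

Real resamplers such as Mitchell-Netravali weigh more neighbors with tunable sharpness, which is why they hold up better past that 125 percent mark.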

If you are re-creating or rerendering images for use in HD, don’t just make them larger; consider adding a bit more fine detail as well. The ability to see fine detail is the reason consumers are buying HD sets (aside from bragging rights). Deliver it in your content by increasing the detail in your 3D texture maps and other elements of your design.

The widescreen format

Once you get these technical issues under control, you can address the aesthetic ones. HD always has a different aspect ratio than SD: 16:9 versus 4:3.

What are you going to do with that extra real estate on the sides? More important, what is your client going to do with it?

If you are creating separate SD and HD versions, you can take one of two initial paths: scale the SD design up so its left and right edges match the HD frame and crop the excess off the top and bottom, or scale it so the top and bottom edges match and add imagery to fill out the left and right edges. Ask the clients which they prefer. More often than not, the second path is the way to go. Most HD sets are larger than SD sets; therefore, even if an object appears relatively smaller in an HD frame, it will still be viewed at the same size or larger in the real world.
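The numbers behind the two paths can be sketched in Python (the helper is illustrative, and it uses square-pixel math for simplicity; real SD pixels are non-square):

```python
def fit_sd_into_hd(sd=(720, 486), hd=(1920, 1080), match="sides"):
    """Two ways to rescale a 4:3 SD design for a 16:9 HD frame.

    match="sides": scale so SD width fills HD width; excess height is cropped.
    match="tops":  scale so SD height fills HD height; the sides must be
                   filled with new imagery.
    Returns (scale factor, pixels cropped or pixels to fill)."""
    sw, sh = sd
    hw, hh = hd
    if match == "sides":
        scale = hw / sw
        return scale, round(sh * scale) - hh   # pixels lost off top + bottom
    else:
        scale = hh / sh
        return scale, hw - round(sw * scale)   # pixels of new imagery at the sides

print(fit_sd_into_hd(match="sides"))  # ~2.67x, 216 px cropped top + bottom
print(fit_sd_into_hd(match="tops"))   # ~2.22x, 320 px to fill at the sides
```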

Figure 6 shows an excellent example of this kind of issue. The LePrevost Corporation was asked to update the logo for Buena Vista Television, which resolves to a blue rectangle in a field of white. They did the SD version first, and later were asked to do an HD version. The best solution ended up being a compromise between the “match the sides” and “match the tops” solutions.

In reality, it’s a luxury to create separate SD and HD versions. As a designer, you might prefer to do two versions because it gives you a chance to optimize the design for the different aspect ratios and resolutions. However, more often than not, the HD version will also be used for the SD broadcast. Therefore, the last question to ask the clients is: How are they going to go from the HD version to SD? Are they going to letterbox it, or perform a “center cut,” in which they fill the SD frame top to bottom and chop off the left and right sides? It’s probably going to be the latter, and that has huge design implications.
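To see how much of the frame a center cut sacrifices, a quick Python sketch (the helper is mine):

```python
def center_cut(hd_width=1920, hd_height=1080, target_aspect=4 / 3):
    """Compute the 4:3 region a 'center cut' keeps from a 16:9 HD frame.

    Returns (kept_width, pixels_chopped_per_side); anything outside the
    center region never reaches the SD audience."""
    kept = round(hd_height * target_aspect)
    return kept, (hd_width - kept) // 2

print(center_cut())  # (1440, 240): 240 pixels lost off each side of a 1080-line frame
```

In other words, a quarter of your carefully composed widescreen frame simply disappears, which is why anything essential (logos, text, key action) has to live inside that center 4:3 zone.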