RealTime Graphics - December 1994

Tutorial: Key Image Generator Specifications: Polygon Capacity and Pixel Capacity

By Roy Latham
Copyright © 1994 All rights reserved.

There is much more to specifying an automobile than its acceleration and top speed. There is also much more to specifying an image generator or graphics accelerator than polygon capacity and pixel capacity. But in both cases, however much people ought to care about other things, they want the big numbers first. In the case of image generators, we can explain what the numbers mean.

The polygon capacity of an image generator is most often expressed in polygons per second. Generally, the polygons per second number can be divided by any frame rate to yield the polygons that might be in each image. An image generator rated at 30,000 polygons per second can be expected to produce 1000 polygons in each of 30 frames per second. Occasionally, an image generator will have a significant overhead associated with starting a frame. Start-up overheads are more common in machines designed for CAD/CAM work than for simulation. High-end simulation IGs may be specified at a certain frame rate, such as "5000 polygons at 30 Hz." It is safe to multiply the numbers to get the polygons per second (in this case 150,000) for comparison purposes.
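As a minimal sketch of that arithmetic, here is the conversion in both directions; the 30,000 and 5000-at-30-Hz figures are the examples from the text, not measurements of any particular machine.

    # Convert a polygons-per-second rating to a per-frame budget, and back.
    def per_frame(polygons_per_second, frame_rate_hz):
        return polygons_per_second / frame_rate_hz

    def per_second(polygons_in_frame, frame_rate_hz):
        return polygons_in_frame * frame_rate_hz

    print(per_frame(30000, 30))    # 1000.0 polygons in each of 30 frames
    print(per_second(5000, 30))    # 150000 polygons per second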

Will the image generator make any kind of polygon at its rated capacity, or just certain kinds? Inevitably there are restrictions. The polygon rate measures the capacity of the machine to perform transformations from a three-dimensional database into two-dimensional screen coordinates. No IG will accommodate 100-vertex polygons at the specified rate. Usually the specified rate is for triangles, and occasionally for quads (having four vertices). Occasionally the specified rate assumes fewer than three vertex transformations per polygon: if adjacent polygons share vertices in a mesh, each coordinate transform may be shared by two or more triangles, and in large meshes the average number of transformations may approach only one per triangle. To avoid ambiguity about mesh size, it is conventional to specify the polygon rate for unmeshed triangles. CAD/CAM users, however, often deal with surfaces approximated by a mesh of many small triangles, so the rate for meshed triangles is more relevant to those users.
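The effect of vertex sharing can be illustrated with a hypothetical triangle strip; the arithmetic restates the point above in code and is not a description of any particular IG's meshing scheme.

    # A strip of n triangles shares n + 2 vertices, so the number of vertex
    # transforms per triangle falls from three toward one as the strip grows.
    def transforms_per_triangle(n_triangles):
        vertices = n_triangles + 2
        return vertices / n_triangles

    for n in (1, 10, 100, 1000):
        print(n, transforms_per_triangle(n))   # 3.0, 1.2, 1.02, 1.002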

The specified polygon rates will not be achieved if the polygons are too large on the screen: if the polygons have too many pixels, the machine will bog down in making the pixels. The pixel rate is the second key IG specification, and specifiers want to keep the two specifications independent. Consequently, the polygon rate will be specified for triangles so small that the pixel rate limitations do not come into play. Sometimes this may be as small as 25 pixels per triangle. If the polygon's size at the specified rate seems small for your application, it means the performance limitation will be in the pixel capacity, not the polygon capacity.
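A back-of-the-envelope check, with made-up rates, shows where the limit shifts from polygon capacity to pixel capacity.

    # Below the crossover size, the transform stage limits throughput; above
    # it, pixel fill does. The rates here are hypothetical illustrations.
    def crossover_pixels_per_triangle(polygon_rate, pixel_rate):
        return pixel_rate / polygon_rate

    def achievable_polygon_rate(polygon_rate, pixel_rate, pixels_per_triangle):
        return min(polygon_rate, pixel_rate / pixels_per_triangle)

    print(crossover_pixels_per_triangle(30000, 120000000))    # 4000.0 pixels
    print(achievable_polygon_rate(30000, 120000000, 10000))   # 12000.0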

Depending on the machine architecture, the polygon capacity may also vary with whether the polygon is to be shaded, smooth shaded, or textured. Each of these attributes adds calculations to the transformation process, so unless the calculations are done in parallel, performance will suffer. As it becomes more common to perform polygon processing in software, the limitations imposed by polygon attributes are more likely to come into play. High-end machines are likely to be specified for worst-case sets of attributes, while lower-end machines are subject to more elaborate sets of terms and conditions.

Pixel capacity specifications for an image generator involve even more caveats than polygon capacity specifications. Usually the pixel capacity is specified in terms of generated pixels per second. Because some generated pixels are covered (i.e., occluded) by other pixels, the image generator must generate more pixels than are eventually displayed: pixels from a nearer object occlude the pixels of more distant objects. A scene that ends up with a million pixels per output frame of video takes many more than a million generated pixels per frame to render. The ratio of the image generator's capacity to generate pixels to the number of displayed pixels is called the overwrite capacity of the machine. If the overwrite capacity is one, there can be no occlusion whatsoever in the scene, which is rarely the case in a practical scene.
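The relationship between displayed pixels, overwrite, and required pixel generation rate is simple multiplication; the figures below (a one-million-pixel display at 30 Hz with an overwrite of four) are illustrative only.

    # Pixels the IG must generate per second for a given display size,
    # overwrite factor, and frame rate.
    def required_pixel_rate(displayed_pixels, overwrite, frame_rate_hz):
        return displayed_pixels * overwrite * frame_rate_hz

    print(required_pixel_rate(1000000, 4, 30))   # 120,000,000 pixels per second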

In addition to generating occluded pixels, the image generator must generate extra pixels for antialiased edges. If a pixel is half covered by one polygon and half covered by another polygon with which the first shares an edge, then the pixel will have to be visited twice by the image generator even though there is no occlusion. Because fractional pixels take as long to generate as full pixels, the pixel demand is increased by the number of edge pixels. The number of edge pixels in a scene depends upon many factors, including the average size of a polygon. Simulator scenes tend to have large polygons, so the edges are a small fraction of the total number of overwrites. For example, if 2000 polygons generate four overwrites of a one-million-pixel screen, the average polygon has (4 x 1,000,000 / 2000 =) 2000 pixels. If the polygons were square, they would then average about 45 pixels on an edge. The ratio of edge pixels to total polygon pixels would then be about 4 x 45 / 2000 = 9%. Realistically, the aspect ratio of the polygons is less favorable than if they were square, so the percentage of edge pixels is higher, perhaps closer to 20%. The numbers are strongly scene dependent, and the best number to choose is controversial, with some vendors arguing for very low numbers (10% or below) and others for much higher numbers (30% or more).
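The edge-pixel arithmetic in that example can be written out directly, using the same figures as the text (2000 square polygons, four overwrites of a one-million-pixel screen):

    import math

    displayed_pixels = 1000000
    overwrite = 4
    polygon_count = 2000

    pixels_per_polygon = overwrite * displayed_pixels / polygon_count  # 2000
    side = math.sqrt(pixels_per_polygon)                               # ~45
    edge_fraction = 4 * side / pixels_per_polygon                      # ~0.09
    print(pixels_per_polygon, side, edge_fraction)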

That brings us back to the question of how many overwrites are needed for occlusion in the first place; that is the number to which the fraction for edges will be added. One rarely hears of overwrite requirements much less than three, including the factor for edges. An aircraft at altitude mainly looks down at the terrain, so there is little occlusion: landing areas are flat, and hills and buildings are usually distant. Adding a layer of semi-transparent clouds will add additional overwrites. A more severe case is armored vehicle simulation, where nearby vehicles, layers of trees, smoke and fire effects, nearby buildings, and rows of hills all come into play. Some say that four overwrites will do for armor simulation; others say that nothing less than six will suffice. Care in database design helps avoid overwrites, but the expense saved in the overwrite capacity of the machine will be at least partially offset by the extra work in crafting the databases.

Restrictions on operations with the simulator may help reduce the overwrite requirement. For example, there may be no reason related to the purposes of the simulation to require that many other vehicles be close to the one being simulated. Prohibiting such operation could reduce the overwrite capacity requirement, and thereby the cost of the image generator.

A good starting point for preparing an image generator specification is to make budgets for both the polygon and overwrite requirements. Both polygons and overwrites can be attributed to categories of objects in the scene, such as the own vehicle, other moving objects, the terrain, buildings, clouds, special effects, and so forth. Discussions of the requirements can then at least be focused on the particulars.
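A hypothetical starting budget, with categories and numbers chosen purely for illustration, might look like this:

    # Illustrative per-frame budgets; real values would be negotiated for the
    # particular simulator and its scene content.
    polygon_budget = {
        "own vehicle": 200, "other moving objects": 800, "terrain": 1500,
        "buildings": 1000, "clouds": 300, "special effects": 200,
    }
    overwrite_budget = {
        "terrain and buildings": 2.0, "moving objects": 0.5,
        "clouds and effects": 1.0, "antialiased edges": 0.5,
    }
    print(sum(polygon_budget.values()), "polygons per frame")   # 4000
    print(sum(overwrite_budget.values()), "overwrites")         # 4.0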

Once an overwrite requirement is established, the next hurdle is to understand the different ways the requirement can be met. Simple, brute-force Z-buffered machines are the easiest to understand: every pixel of every object takes the same time to generate, and every pixel is generated whether or not it ultimately ends up being occluded. Silicon Graphics' popular RealityEngine products are of this type, for example. Other architectures have means of avoiding the generation of at least some of the pixels that the machine somehow knows will ultimately be occluded. It is more correct to say that such systems lower the requirement for overwrite capacity than to say that they have ways of providing extra overwrite capacity. However, there is some trend toward saying that avoiding overwrites is another way of "providing" the capacity.
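A minimal sketch of the brute-force approach makes the cost structure explicit; here a "polygon" is simply a list of pixel tuples standing in for the output of some rasterization step, not any real machine's interface.

    # Brute-force Z-buffering: polygons arrive in arbitrary order, and the
    # depth test decides what survives after the generation work is done.
    def render_z_buffer(polygons, width, height):
        z_buffer = [[float("inf")] * width for _ in range(height)]
        frame = [[None] * width for _ in range(height)]
        generated = 0
        for pixels in polygons:            # any order
            for x, y, z, color in pixels:
                generated += 1             # cost paid even if later occluded
                if z < z_buffer[y][x]:
                    z_buffer[y][x] = z
                    frame[y][x] = color
        return frame, generated

    # Two overlapping one-pixel "polygons": both are generated, one survives.
    print(render_z_buffer([[(0, 0, 5.0, "far")], [(0, 0, 1.0, "near")]], 1, 1))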

The most efficient way of avoiding overwrites is to run a separate algorithm, called a list priority algorithm, that identifies the correct order to write the objects in the frame buffer that is building up the picture. Once the pixels of an object are written, the algorithm guarantees that no other pixels will ever occlude them. Consequently, any portion of the screen that has been written can subsequently be skipped over. Skipping pixels is much faster than generating them; only the address of the prospective new pixel need be computed, not the color or other attributes.
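For contrast, a sketch of the list priority approach under the same toy pixel representation (depth values are unnecessary here, since the writing order resolves visibility):

    # List-priority rendering: polygons arrive front to back, so a pixel that
    # has been written can never be occluded later and is simply skipped.
    def render_list_priority(polygons_front_to_back, width, height):
        written = [[False] * width for _ in range(height)]
        frame = [[None] * width for _ in range(height)]
        generated = 0
        for pixels in polygons_front_to_back:   # nearest objects first
            for x, y, color in pixels:
                if written[y][x]:
                    continue                    # address checked, no pixel made
                generated += 1
                written[y][x] = True
                frame[y][x] = color
        return frame, generated

    # The same overlap as before: only the front pixel is ever generated.
    print(render_list_priority([[(0, 0, "near")], [(0, 0, "far")]], 1, 1))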

List priority machines nonetheless require the capacity to generate more than one overwrite. Edge pixels must be visited more than once. Also, partially transparent objects require pixels to be visited more than once; each layer of transparency requires a contribution to the pixel in addition to the underlying opaque pixel. Budgeting edge pixels and transparent pixels is important for a list priority image generator. The point of a list priority architecture is to lower the system cost by minimizing the amount of pixel generating hardware. If the overwrite capacity is kept near one, the system will be least expensive, but the ability to use partial transparency for smoke and foliage effects will be severely limited.

Nowadays, pure list priority machines are quite uncommon, because list priority algorithms have trouble processing moving objects in a scene. More common are hybrid architectures that use a list priority algorithm for fixed objects and Z buffering for moving objects. The ESIG 2000 and ESIG 3000 image generators manufactured by Evans & Sutherland are good examples of the hybrid architecture approach.

All other things being equal, a list priority machine will be less expensive than a Z-buffer machine, at least as far as hardware costs are concerned. However, a Z-buffered machine is easier to use and will generally have lower software development and database construction costs. Consequently, if the programming costs are high compared to the costs of the production run of the hardware, then a Z-buffered machine would be preferred. Otherwise, a hybrid architecture may have an advantage. Large production runs occur on large military simulation contracts, for example. It is interesting to speculate whether hybrid or list priority machines might have a resurgence if a market develops for 3-D games. The emergence of more clever hierarchical Z-buffering algorithms, and the generally lower cost of hardware relative to software, might head that off. We'll have to wait and see.