One of the main frustrations in modern research is that our ability to collect data is overtaking our ability to present it in an understandable way. In medicine, this has long been a problem, because much of what a doctor knows about what is beneath a patient's skin is gleaned from static X-ray photos, CT scans or MRI scans. These are often hard to interpret, and it's impossible to see the area from a different angle without putting the patient through another costly, and often uncomfortable, imaging process.
Fortunately, emerging techniques based on voxels - or volume pixels - provide a clearer picture. They allow the physician to view internal tissues as they exist within the body, highlight certain features for maximum contrast and rotate images to get the best point of view. They create a realistic and reliable 3D model of structures that have never seen the light of day.
Just as a pixel is a point on a computer screen with a specified colour and an x, y position, a voxel is a point in three-dimensional space with a defined x, y, z position, colour and density. The exact meaning of the density value depends on the type of scan performed. CT scans, for example, measure a tissue's transparency to X-rays, while MRIs gauge the concentration of water. These density values are used to control the opacity of a voxel when it is drawn on the screen.
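The voxel described above can be sketched as a tiny data structure. This is purely illustrative — the `Voxel` class, the linear ramp, and the `lo`/`hi` thresholds are assumptions for the example, not part of any real scanner's format:

```python
from dataclasses import dataclass

@dataclass
class Voxel:
    x: int
    y: int
    z: int
    colour: tuple   # (r, g, b) components, each 0.0-1.0
    density: float  # scanner-dependent: X-ray transparency for CT, water content for MRI

def opacity(v: Voxel, lo: float, hi: float) -> float:
    """Map a voxel's density onto a display opacity with a simple linear
    ramp -- a hypothetical transfer function for illustration. Densities
    at or below `lo` become fully transparent, at or above `hi` fully opaque."""
    if v.density <= lo:
        return 0.0
    if v.density >= hi:
        return 1.0
    return (v.density - lo) / (hi - lo)
```

In a real system the mapping from density to opacity is chosen by the physician — lowering `hi` for soft tissue, say, makes it stand out against bone.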
Once an MRI scan or other 3D data set is represented in terms of voxels, a rendering algorithm must be used to map the results onto a two-dimensional display. This requires many calculations for each point, so the process is sometimes sped up by ignoring voxels that have been made transparent and therefore won't contribute to the final image. To isolate such regions, the data set is divided into what is known as an octree. First, the entire voxel set is divided along the x, y, and z axes to create eight cubic regions. The computer then analyses each region to determine whether it contains any "interesting" (ie nontransparent) voxels. If so, the region is subdivided into eight smaller cubes. The process continues recursively until none of the cubes in question contain interesting voxels, or until they can't be divided any further. The fully transparent cubes that remain mark the relatively large regions of the data set that can be safely ignored during rendering.
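The recursive subdivision can be sketched in a few lines. This is a minimal sketch, assuming a cubic, power-of-two volume of opacity values where zero means transparent; the function name `transparent_regions` and the `min_size` cutoff are inventions for the example:

```python
import numpy as np

def transparent_regions(vol, origin=(0, 0, 0), min_size=2):
    """Recursively subdivide a cubic opacity volume octree-style and return
    (origin, size) pairs for the cubes containing no "interesting" (ie
    nontransparent) voxels -- the regions rendering can safely skip.
    A sketch only; a real octree stores the full hierarchy explicitly."""
    n = vol.shape[0]
    if not (vol > 0).any():        # nothing interesting here: skip the whole cube
        return [(origin, n)]
    if n <= min_size:              # interesting, and can't be divided further
        return []
    h = n // 2
    ox, oy, oz = origin
    skips = []
    for dx in (0, h):              # split along x, y, and z into eight octants
        for dy in (0, h):
            for dz in (0, h):
                sub = vol[dx:dx+h, dy:dy+h, dz:dz+h]
                skips += transparent_regions(sub, (ox+dx, oy+dy, oz+dz), min_size)
    return skips
```

Run on a 4x4x4 volume with a single opaque voxel in one corner, this reports seven of the eight octants as skippable — exactly the kind of large empty region the renderer wants to ignore.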
It's a clever scheme, but it comes with a significant caveat: you can quickly rotate the image or change the lighting, but if you alter the opacity of any tissue within the scan, the entire octree must be recomputed. This is a slow process on desktop machines and precludes real-time display.
On the other hand, if your pockets are much deeper and you can get a machine optimised for image rendering, such as SGI's US$100,000 Onyx/Reality Engine, the octree step isn't necessary. These specialised machines can blindly process every single voxel and still achieve real-time performance. Marc Levoy, an assistant professor at Stanford University well known for his work in this field, predicts that within five years the average desktop machine will be powerful enough to skip this step as well.
There are several ways to render volume data, be it as an octree or the entire voxel set. One of the most common methods is known as alpha-blending. In this method, each pixel is computed by projecting an imaginary light ray in a straight line through the voxel grid and sampling the data at points along it. Because a sample point rarely falls exactly on any single voxel, most rendering programs take the average colour and opacity values of the eight voxels closest to that point.
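Averaging the eight nearest voxels is usually done as a distance-weighted (trilinear) average. Here is a minimal sketch for a scalar volume — colour and opacity would each be interpolated the same way; the `sample` function and its conventions are assumptions for illustration:

```python
import numpy as np

def sample(vol, x, y, z):
    """Trilinearly interpolate a scalar volume at a point (x, y, z) that
    need not lie on any voxel: a weighted average of the eight voxels at
    the corners of the grid cell containing the point. Assumes the point
    lies strictly inside the volume."""
    x0, y0, z0 = int(x), int(y), int(z)   # voxel just below the point on each axis
    fx, fy, fz = x - x0, y - y0, z - z0   # fractional position within the cell
    val = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                # each corner's weight shrinks with its distance from the point
                w = ((fx if dx else 1 - fx) *
                     (fy if dy else 1 - fy) *
                     (fz if dz else 1 - fz))
                val += w * vol[x0 + dx, y0 + dy, z0 + dz]
    return val
```

At the exact centre of a cell, all eight weights are equal and the result is a plain average of the eight surrounding voxels.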
This process can be done in either front-to-back or back-to-front order. In back-to-front rendering, each voxel partially occludes the ones behind it in proportion to its opacity, so more opaque voxels contribute more to the final pixel than transparent ones. Front-to-back rendering uses the same basic process and is only slightly more complicated, but it has a benefit: once the maximum opacity for a pixel is reached, the pixel can be drawn even if the entire data set along that ray hasn't been traversed.
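The front-to-back variant with its early cutoff can be sketched for a single ray. This is a simplified model — a scalar "colour" per sample rather than RGB, and the function name and `opacity_cutoff` threshold are assumptions for the example:

```python
def composite_front_to_back(samples, opacity_cutoff=0.99):
    """Front-to-back alpha-blending along one ray.
    `samples` is a sequence of (colour, opacity) pairs ordered nearest-first.
    Returns the accumulated (colour, opacity) for the pixel, stopping early
    once the pixel is effectively opaque. A sketch, not a production renderer."""
    colour, alpha = 0.0, 0.0
    for c, a in samples:
        colour += (1.0 - alpha) * a * c   # nearer samples occlude farther ones
        alpha  += (1.0 - alpha) * a
        if alpha >= opacity_cutoff:       # early ray termination: nothing
            break                         # behind this point can show through
    return colour, alpha
```

If the very first sample is fully opaque, the loop exits immediately — that skipped work is exactly the advantage of the front-to-back order.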
Alpha-blending produces clear, easy-to-comprehend images. The relative opacities of certain tissues can be manipulated for heightened contrast, and the result looks a lot like the physical sample. There are, however, simpler rendering methods available for specialised diagnostic needs. For example, a common medical procedure is to inject a patient with a contrast agent - usually a sugar compound containing iodine - that shows up as a bright region in diagnostic imagery. The best rendering process for this type of image displays only the very brightest voxel along each ray - a technique known as maximum-intensity projection - producing a solid image of the tissues reached by the agent. Another method sometimes used is to simply add all the voxel colours and opacities together like a stack of transparencies, which yields the functional equivalent of a standard X-ray.
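Both of these simpler methods reduce to a one-line reduction along each ray. A minimal sketch, assuming rays run parallel to the z axis of an intensity volume; the `render` function and its mode names are inventions for the example:

```python
import numpy as np

def render(vol, mode="mip"):
    """Project a 3D intensity volume onto a 2D image, one ray per pixel,
    with rays running along the z axis.
    "mip": keep only the brightest voxel on each ray (maximum-intensity
           projection), suited to contrast-agent studies.
    "sum": add every voxel like a stack of transparencies, roughly the
           equivalent of a conventional X-ray."""
    if mode == "mip":
        return vol.max(axis=2)
    return vol.sum(axis=2)
```

Neither mode needs the per-sample blending arithmetic of alpha-compositing, which is what makes them attractive for quick diagnostic views.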
The medical profession is making the most extensive use of volume-rendering technology, but other fields have begun to take advantage of it as well. Geologists can get a picture of what lies underground without having to extract a single core sample: by analysing the sound waves produced by a carefully placed explosion, they can build a volume whose renderings show a realistic picture of how various mineral and rock deposits are positioned in relation to each other. Engineers can identify imperfections in a machine part before it actually breaks. Meteorologists can get a more coherent model of Earth's atmosphere than is possible with a 2D chart of highs and lows. While volume rendering won't advance our ability to gather data in any of these fields, it will go a long way toward helping us understand what the data means.
Andrew Rozmiarek is a section editor at Wired Online.