This page compares interleaved sampling to the standard texture-based volume rendering approach.
The first example shows a side-by-side comparison of the two methods for a simple RGB color cube volume. An MPEG video of this data set is also available.
At 20 volume slices, interleaved sampling (second image, right) produces images with significantly reduced aliasing compared to the traditional approach (first image, left).
With 60 slices, the aliasing in the traditional approach is still quite visible, while it is almost gone with interleaved sampling.
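The reason interleaved sampling suppresses the aliasing at a given slice count can be sketched as follows: neighboring pixels in a small tile shift their slice stacks by different fractions of the slice distance, so the tile as a whole samples the volume much more densely than any single pixel does. The snippet below is a minimal illustration of this idea, not the renderer used for the images; the function names, the 2x2 tile size, and the depth range [0, 1] are assumptions made for the example.

```python
import numpy as np

def slice_depths(num_slices, z_near=0.0, z_far=1.0):
    """Uniform slice depths, as in the traditional slice-based renderer."""
    dz = (z_far - z_near) / num_slices
    return z_near + np.arange(num_slices) * dz

def interleaved_depths(px, py, num_slices, tile=2, z_near=0.0, z_far=1.0):
    """Per-pixel slice depths for interleaved sampling (illustrative).

    Each pixel in a tile x tile block shifts the whole slice stack by a
    different fraction of the slice distance dz, so the block as a whole
    covers the depth range tile*tile times more densely.
    """
    dz = (z_far - z_near) / num_slices
    # index of this pixel within its tile, in a fixed interleaving order
    k = (py % tile) * tile + (px % tile)
    offset = k / (tile * tile) * dz
    return z_near + offset + np.arange(num_slices) * dz

# The union of the depths over a 2x2 tile covers the depth range four
# times as densely as any single pixel's 20-slice stack.
union = sorted(float(d) for py in range(2) for px in range(2)
               for d in interleaved_depths(px, py, 20))
```

After low-pass filtering (by the display and the eye), the tile behaves roughly like a sampling at the combined, four times higher rate, which is why the 20-slice interleaved image shows far less aliasing than the 20-slice traditional one.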
Even with 120 slices, some aliasing is still visible with the traditional approach. In addition, the limited bit depth of the framebuffer starts to produce quantization artifacts, which show up as banding in both methods. It is therefore not possible to increase the sampling rate further (in fact, it would be advisable to reduce it to avoid the quantization problems).
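The banding at high slice counts can be reproduced with a few lines of arithmetic: as the number of slices grows, the opacity-corrected per-slice contribution shrinks toward the quantization step of the framebuffer (1/255 for 8 bits), and the rounding done after every blend then dominates the result. The sketch below assumes front-to-back alpha compositing into a fixed-point buffer; the function and parameter names are illustrative.

```python
def composite(num_slices, total_opacity=0.5, bits=8):
    """Front-to-back compositing of uniform slices into a quantized buffer.

    Per-slice opacity is corrected so that num_slices slices together
    yield total_opacity; each blend result is rounded to `bits` bits,
    as a fixed-point framebuffer would round it.
    """
    levels = (1 << bits) - 1
    # opacity correction: (1 - alpha)^num_slices == 1 - total_opacity
    alpha = 1.0 - (1.0 - total_opacity) ** (1.0 / num_slices)
    acc = 0.0
    for _ in range(num_slices):
        acc = acc + (1.0 - acc) * alpha
        acc = round(acc * levels) / levels  # framebuffer quantization
    return acc
```

With a moderate slice count the quantized result stays close to the target opacity, but once the per-slice contribution drops below half a quantization step, entire contributions round away to zero, and in an image this shows up as the banding described above.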
The second example shows the data set rendered with 70 slices, corresponding to undersampling by a factor of 3.7. The aliasing artifacts in the traditional algorithm are so severe that they mask most of the detail actually present. The image produced by interleaved sampling is not of very high quality either, but at least some of the brain structures become apparent.
With 120 slices, the aliasing artifacts in the traditional approach are still quite severe and mask detail in the data set, while in the image produced by interleaved sampling most of the detail is visible. Note that some quantization artifacts are already present. They do not show up as clearly as in the RGB cube example, but it is not advisable to increase the sampling rate any further.