What the f*ck is the Z-Buffer Algorithm for Visible Surface Detection?

Dhruv

Please note that this algorithm can feel daunting and a bit difficult to understand at first, especially if you are a beginner.

When we view a picture containing non-transparent objects and surfaces, we cannot see the objects that lie behind other objects closer to the eye. We must remove these hidden surfaces to get a realistic screen image. Identifying and removing these surfaces is called the hidden-surface problem.

There are two approaches to the hidden-surface problem: the object-space method and the image-space method. The object-space method works in the physical (world) coordinate system, while the image-space method works in the screen coordinate system.

When we want to display a 3D object on a 2D screen, we need to identify those parts of the scene that are visible from a chosen viewing position.

Now, let us talk about the Z-Buffer Algorithm. The Z-buffer, also known as the depth-buffer method, is one of the most commonly used methods for hidden-surface detection. It is an image-space method: it works pixel by pixel on the 2D projection. For such methods, the running time is proportional to the number of pixels times the number of objects, and the space required is twice the number of pixels, because two arrays of pixel values are needed: one for the frame buffer and one for the depth buffer.
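
To put a rough number on that (purely as an illustration): at a resolution of 1920 × 1080 there are 2,073,600 pixels, so the two buffers hold about two million depth values plus two million color values, and each of the N surfaces in the scene may have to be tested against up to that many pixel positions.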

The Z-buffer method compares surface depths at each pixel position on the projection plane. Normally, depth is measured along the z-axis.

The algorithm can simply be stated as follows:

First of all, initialize the depth of each pixel to infinity (the maximum possible depth):
    d(i, j) = infinity
Next, initialize the color of each pixel to the background color:
    c(i, j) = background color
Then, for each polygon, do the following:

for (each pixel (i, j) in the polygon's projection)
{
    find the depth z of the polygon
    at the point (x, y) corresponding to pixel (i, j)

    if (z < d(i, j))
    {
        d(i, j) = z;
        c(i, j) = color of the polygon at that point;
    }
}
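
To make this concrete, here is a minimal, runnable version of that loop in C++. Everything about the scene is made up for illustration: each surface is modelled as an axis-aligned rectangle with a constant depth and a one-character "color", whereas a real renderer would rasterize arbitrary polygons and interpolate z across each one.

#include <iostream>
#include <limits>
#include <vector>

// One "surface": an axis-aligned rectangle on screen with a constant depth.
struct Surface {
    int x0, y0, x1, y1;   // pixel range covered by the surface's projection
    double z;             // depth: smaller means closer to the viewer
    char color;           // single-character "color" so we can print the result
};

const int WIDTH = 10, HEIGHT = 6;

void render(const std::vector<Surface>& scene) {
    // depth buffer d(i, j) starts at infinity, frame buffer c(i, j) at the background '.'
    std::vector<std::vector<double>> depth(
        HEIGHT, std::vector<double>(WIDTH, std::numeric_limits<double>::infinity()));
    std::vector<std::vector<char>> frame(HEIGHT, std::vector<char>(WIDTH, '.'));

    for (const Surface& s : scene) {
        for (int j = s.y0; j <= s.y1; ++j) {
            for (int i = s.x0; i <= s.x1; ++i) {
                if (s.z < depth[j][i]) {   // this surface is closer at pixel (i, j)
                    depth[j][i] = s.z;
                    frame[j][i] = s.color;
                }
            }
        }
    }

    // print the frame buffer row by row
    for (int j = 0; j < HEIGHT; ++j) {
        for (int i = 0; i < WIDTH; ++i) std::cout << frame[j][i];
        std::cout << '\n';
    }
}

int main() {
    // Two overlapping surfaces: 'A' is deeper (z = 3), 'B' is closer (z = 1).
    render({ {1, 1, 6, 4, 3.0, 'A'},
             {4, 2, 8, 5, 1.0, 'B'} });
}

Running this prints a small character grid in which B overwrites A wherever the two rectangles overlap, because 1.0 < 3.0, no matter which of the two is processed first.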

Now, let us understand what it actually means.

Assume the polygon given is as below:

To start with, assume that the depth of each pixel is infinite.

Since the z value, i.e. the depth, is 3 everywhere on the given polygon, and 3 is less than the initial infinity, every covered pixel gets depth 3 and the polygon's color. Applying the algorithm, the result is:

Now, let’s change the z values. In the figure given below, the z values go from 0 to 3.

Again, to start with, the depth of each pixel is infinite:

Now the z values written into the buffer differ from pixel to pixel, as shown below:

Therefore, in the Z-buffer method, each surface is processed separately, one pixel position at a time across the surface. At each pixel the depth values, i.e. the z values, are compared, and the closest surface (the one with the smallest z) determines the color stored in the frame buffer. The z values are usually normalized to the range [0, 1]. With the convention used here (smaller z means closer to the viewer), z = 0 corresponds to the front (near) clipping plane and z = 1 to the back (far) clipping plane. (Some texts use the opposite convention, with larger z meaning closer; the comparison in the algorithm then flips accordingly.)
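
For concreteness, here is one simple way such a normalized depth could be computed. This is only a sketch, not the only convention, and the function name and parameters are made up for illustration; real pipelines usually apply a perspective projection, which makes the stored depth non-linear in view-space distance.

// Hypothetical helper: linearly map a view-space depth lying between the near (front)
// and far (back) clipping planes to the [0, 1] range used by the depth buffer,
// so that 0 means "on the front plane" and 1 means "on the back plane".
double normalizeDepth(double zView, double zNear, double zFar) {
    return (zView - zNear) / (zFar - zNear);
}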

In this method, two buffers are used:

  1. Frame buffer

  2. Depth buffer

This approach compares surface depths at each pixel position on the projection plane. Object depth is usually measured from the view plane along the z-axis of a viewing system. For example:

Let S1, S2 and S3 be the surfaces. The surface closest to the projection plane is the visible one. The computer starts (arbitrarily) with one surface, say S1, and writes its depth and color into the buffers. It does the same for the next surface. For each pixel where the surfaces overlap, it checks which one is closer to the viewer and displays that surface's color. Since at view-plane position (x, y) surface S1 has the smallest depth from the view plane, it is the one visible at that position.
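
For example, suppose that at view-plane position (x, y) the three surfaces have depths S1 = 0.2, S2 = 0.5 and S3 = 0.8 (made-up values, just for illustration). Whichever order the surfaces are processed in, every comparison against the stored depth leaves 0.2 in the depth buffer at that pixel, so the frame buffer ends up holding S1's color.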

Now, if the picture of the algorithm in your head is still a bit blurry, let's make it clearer. Look at the image below:

The depths of the rectangles are given below:
Closest: RED
In the middle: GREEN
Deepest: BLUE

So wherever RED is over GREEN, it hides GREEN and only RED is visible. Similarly, wherever BLUE is under GREEN, BLUE gets hidden. That, in a nutshell, is the Z-Buffer Algorithm.
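
To see this in code, we can feed three such rectangles into the render() sketch from the pseudocode section. The coordinates below are made up; only the depth ordering matters.

// blue is the deepest, green is in the middle, red is the closest
render({ {0, 0, 7, 5, 3.0, 'b'},      // BLUE,  z = 3 (deepest)
         {2, 1, 9, 4, 2.0, 'g'},      // GREEN, z = 2 (in the middle)
         {4, 2, 6, 3, 1.0, 'r'} });   // RED,   z = 1 (closest)

Wherever the rectangles overlap, the pixel keeps the smallest depth, so 'r' wins over 'g' and 'g' wins over 'b', regardless of the order in which the surfaces are processed.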

The Z-Buffer Algorithm is efficient for the following reasons:

  • The Z-buffer algorithm, also known as the depth-buffer method, is fast because it can discard pixels as soon as their depth is known. This allows the algorithm to skip the process of lighting and texturing pixels that are not visible. It also scales well with increasing scene complexity, unlike older techniques like the painter's algorithm.

  • It doesn't require pre-sorting of polygons.

  • It doesn't require object-to-object comparison.

  • It can be applied to non-polygonal objects.

Please note that the Z-Buffer Algorithm has some limitations as well. It cannot handle transparent objects directly. And if only a few objects in the scene are to be rendered, the method is less attractive because of the additional buffer and the overhead involved in updating it.


Written by

Dhruv

A Flutter Developer, Unity Developer and Product Manager.