Project 1

Three spheres of different colors, rendered by simplified raycasting



Image Specification:
(Note: x increases to the right, y increases downward, and z increases into the screen.)
Camera at (0, 0, -1)
Viewing plane at z = 1
Red sphere at (-1, 0.5, 1), with radius 0.5
Green sphere at (0, 0, 1.1), with radius 0.8
Blue sphere at (1, -0.5, 1.2), with radius 0.7
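
To make the specification concrete, here is a minimal sketch of the ray/sphere intersection test that
such a raycaster performs for every pixel. The Vec3 type, the hits_sphere() helper, and the sample ray
are illustrative only and are not taken from the project code; the scene values come from the
specification above.

    #include <cmath>
    #include <cstdio>

    struct Vec3 { double x, y, z; };

    static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // A ray origin + t*dir hits the sphere when |origin + t*dir - center|^2 = r^2
    // has a real root t > 0; this is the usual quadratic discriminant test.
    static bool hits_sphere(Vec3 origin, Vec3 dir, Vec3 center, double radius, double& t) {
        Vec3 oc = sub(origin, center);
        double a = dot(dir, dir);
        double b = 2.0 * dot(oc, dir);
        double c = dot(oc, oc) - radius * radius;
        double disc = b * b - 4.0 * a * c;
        if (disc < 0.0) return false;            // ray misses the sphere
        t = (-b - std::sqrt(disc)) / (2.0 * a);  // nearer of the two roots
        return t > 0.0;
    }

    int main() {
        Vec3 camera{0, 0, -1};                     // camera position from the spec
        Vec3 red_center{-1, 0.5, 1};               // red sphere, radius 0.5
        Vec3 dir = sub(Vec3{-1, 0.5, 1}, camera);  // ray toward a point on the z = 1 viewing plane
        double t;
        std::printf("red sphere hit: %s\n", hits_sphere(camera, dir, red_center, 0.5, t) ? "yes" : "no");
    }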

Performance evaluation here

Design choices:

  1. Distinction between Point and Vector. I have distinguished between Points and Vectors
    by making them separate classes in the code, using the explicit keyword to prevent
    instances of one class from being used where the other is expected. The operators
    belonging to each class produce objects of the correct type when invoked. In other words, I
    am relying on C++'s type system to keep the two concepts separate (a sketch of this split
    appears after this list).

  2. Image formats supported. Currently only binary PPM (P6) output is supported. I rely on
    ImageMagick's convert utility to convert the PPM file into whatever other image format
    is appropriate or necessary. However, the software architecture allows for the easy creation of
    modules that can output directly to other formats (see the item below on software architecture).
    I have also implemented two runtime GL viewers that can be used, among other things, to view
    a partially rendered scene.

  3. Software architecture. I have made use of C++'s inheritance capabilities to create a
    dynamically extensible software system whose ultimate goal is to provide ray tracing services. In
    particular, a whole suite of image renderers is possible via an interface class (i.e., an abstract
    base class) named Image_Renderer, which has a single pure virtual method, render().
    The idea is that any class inheriting from Image_Renderer can render an image in its own way
    (a sketch of this interface, using PPM output as the concrete example, follows this list).
    At this point, I have three concrete renderers: PPM_renderer (discussed above), glut_renderer (using the
    GLUT library that comes with GL distributions), and glx_renderer (a homebrew windowing library built from
    X and GL, meant to overcome certain limitations of GLUT).
    All of the code for the ray tracer is compiled into a single library (libRT.so), which is then linked
    to any codebase that serves as a ray tracing application. This software organization makes it easy to
    develop and test the ray tracer. Having multiple renderer types also makes it easy to use the system
    even if one of the components is not complete (for instance, the glx_renderer needs some work before
    it will be cooler than GLUT, but the library is still completely usable).
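
The following is a minimal sketch of the Point/Vector split described in item 1, assuming simple
x/y/z members; the real classes in libRT are not shown here.

    // Illustrative only: separate Point and Vector types so the compiler
    // rejects mixing them. `explicit` blocks implicit conversion from a bare
    // double; the distinct types block Point/Vector confusion.
    struct Point {
        double x, y, z;
        explicit Point(double x = 0, double y = 0, double z = 0) : x(x), y(y), z(z) {}
    };

    struct Vector {
        double x, y, z;
        explicit Vector(double x = 0, double y = 0, double z = 0) : x(x), y(y), z(z) {}
    };

    // The operators produce the right kind of object:
    // Point - Point yields a Vector, Point + Vector yields a Point.
    inline Vector operator-(const Point& a, const Point& b) {
        return Vector(a.x - b.x, a.y - b.y, a.z - b.z);
    }
    inline Point operator+(const Point& p, const Vector& v) {
        return Point(p.x + v.x, p.y + v.y, p.z + v.z);
    }
    // Point + Point is deliberately left undefined, so the compiler rejects it.

With these definitions, Point - Point compiles and yields a Vector, while Point + Point fails to
compile, which is exactly the kind of misuse the type system is meant to catch.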
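
Here is a hedged sketch of the renderer interface from item 3, with a binary PPM (P6) writer standing
in for the PPM_renderer of item 2. The class and method names follow the description above, but the
bodies are illustrative guesses rather than the actual libRT code.

    #include <cstdio>
    #include <vector>

    class Image_Renderer {
    public:
        virtual ~Image_Renderer() {}
        virtual void render() = 0;   // each subclass renders the image in its own way
    };

    class PPM_renderer : public Image_Renderer {
    public:
        PPM_renderer(int w, int h, const char* path)
            : width(w), height(h), filename(path), pixels(3 * w * h, 0) {}

        void render() override {
            // P6 = binary PPM: a short text header, then width*height RGB triples.
            std::FILE* f = std::fopen(filename, "wb");
            if (!f) return;
            std::fprintf(f, "P6\n%d %d\n255\n", width, height);
            std::fwrite(pixels.data(), 1, pixels.size(), f);
            std::fclose(f);
        }

    private:
        int width, height;
        const char* filename;
        std::vector<unsigned char> pixels;   // filled in by the ray tracer
    };

    int main() {
        PPM_renderer r(256, 256, "out.ppm");
        Image_Renderer& renderer = r;   // used through the abstract interface
        renderer.render();              // writes a black 256x256 image
    }

A file written this way can then be handed to ImageMagick's convert for conversion into any other
format.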

Extra Credit:


The glx_renderer class has a non-blocking render method which is meant to allow interactive rendering
(i.e., displaying the image even as it is being computed). For this assignment, I have the program take
a period constant (which can be supplied on the command line). After computing a number of image pixels
equal to the period constant, the program displays the partial image. With a negative period constant,
the full image is computed before it is posted. A rough sketch of this loop appears below.
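
A minimal sketch of that loop, assuming hypothetical trace_pixel() and post_image() helpers; the real
glx_renderer API is not shown here.

    #include <cstdio>

    // Stand-ins for the real tracing and display calls.
    static void trace_pixel(int x, int y) { (void)x; (void)y; /* compute and store one pixel */ }
    static void post_image() { std::puts("posting partial image"); }

    // After every `period` pixels the partial image is posted; a negative
    // period defers display until the image is complete.
    void render_with_period(int width, int height, long period) {
        long done = 0;
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x) {
                trace_pixel(x, y);
                ++done;
                if (period > 0 && done % period == 0)
                    post_image();       // non-blocking: show what we have so far
            }
        post_image();                   // final (or only, when period < 0) display
    }

    int main() { render_with_period(4, 4, 5); }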

The glx_renderer class uses X Window primitives to accomplish windowing, and GLX and OpenGL to do
the actual drawing.
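
For reference, the kind of X/GLX plumbing this involves looks roughly like the following: open a
display, pick a GL-capable visual, create and map a window, then attach a GLX context so ordinary
OpenGL calls can draw into it. This is a generic example, not the glx_renderer's actual code.

    #include <X11/Xlib.h>
    #include <X11/Xutil.h>
    #include <GL/gl.h>
    #include <GL/glx.h>

    int main() {
        Display* dpy = XOpenDisplay(nullptr);
        if (!dpy) return 1;

        int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
        XVisualInfo* vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
        if (!vi) return 1;

        XSetWindowAttributes swa;
        swa.colormap = XCreateColormap(dpy, RootWindow(dpy, vi->screen), vi->visual, AllocNone);
        swa.event_mask = ExposureMask | KeyPressMask;
        Window win = XCreateWindow(dpy, RootWindow(dpy, vi->screen), 0, 0, 512, 512, 0,
                                   vi->depth, InputOutput, vi->visual,
                                   CWColormap | CWEventMask, &swa);
        XMapWindow(dpy, win);

        GLXContext ctx = glXCreateContext(dpy, vi, nullptr, True);
        glXMakeCurrent(dpy, win, ctx);

        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);   // OpenGL drawing goes here
        glClear(GL_COLOR_BUFFER_BIT);
        glXSwapBuffers(dpy, win);                // present the frame

        XFlush(dpy);
        // ...event loop and teardown omitted...
        return 0;
    }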