Radiosity - UNDER CONSTRUCTION


Last Update: 2nd Nov 2000

As this document is under construction, you may expect some level of errors and sentences that stop without

If you see anything that is clearly wrong, or poorly explained (if you find it hard to understand from the article then it's poorly explained) then please don't hesitate to contact me about it. ([email protected])


Lighting and shadow casting algorithms can be very roughly divided into two categories: Direct Illumination and Global Illumination. Many people will be familiar with the former category, and the problems associated with it. This article will briefly discuss the two approaches, then give an in-depth study of one Global Illumination method, Radiosity.


Direct Illumination

Direct Illumination is a term that covers the principal lighting methods used by old school rendering engines such as 3D Studio and POV. A scene consists of two types of entity: Objects and Lights. Lights cast light onto Objects, unless there is another Object in the way, in which case a shadow is left behind.

There are all sorts of techniques under this heading: Shadow Volumes, Z-Buffer methods, Ray Tracing . . . But as a general rule, they all suffer from similar problems, and all require some kind of fudge in order to overcome them.

Direct Illumination Problems and Advantages

Ray Tracing:
  Advantages:
    - Can render both mathematically described objects and polygons
    - Allows you to do some cool volumetric effects
  Disadvantages:
    - Slow
    - Very sharp shadows and reflections
    - View dependent

Shadow Volumes:
  Advantages:
    - View independent
    - Can be modified to render soft shadows (very tricky)
  Disadvantages:
    - Tricky to implement
    - Very sharp shadows
    - Polygons only

Z-Buffer:
  Advantages:
    - Easy to implement
    - Fast
  Disadvantages:
    - View dependent
    - Sharp shadows with aliasing problems

The most important thing to consider is that, while these methods can produce hyper-realistic images, they can only do this when given a scene with point light sources, and perfectly shiny or perfectly diffuse objects. Now, unless you are some kind of rich simpleton, your house probably isn't full of perfectly shiny spheres and point light sources. In fact, unless you live in a universe with completely different physics, your house probably contains hardly any super-sharp shadows.

It is quite common for people to claim that ray tracers and other renderers produce 'photo-realistic' results. But imagine someone were to show you a typical ray traced image, and claim it was a photo. It would not be long before you were claiming in return that they were blind or lying.

It should also be noted that, in the real world, it is still possible to see objects that are not directly lit; shadows are never completely black. Direct Illumination renderers try to handle such situations by adding an Ambient Light term. Thus all objects receive a minimum amount of non-directional light.
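For what it's worth, here is a minimal sketch of that kind of shading: the standard textbook fudge, not code from any particular renderer. Every point gets an ambient term, plus a Lambertian diffuse term if it can see the light.

  #include <algorithm>

  struct Vec3 { float x, y, z; };

  float dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

  // Classic direct-illumination diffuse shading with an ambient fudge:
  // every surface receives at least 'ambient' light, however deep in
  // shadow it is.  'normal' and 'toLight' are assumed to be unit vectors.
  float ShadePoint(const Vec3 &normal, const Vec3 &toLight,
                   float lightBrightness, float ambient, bool inShadow)
  {
      float diffuse = 0.0f;
      if (!inShadow)
          diffuse = std::max(0.0f, dot(normal, toLight)) * lightBrightness;  // Lambert's law

      return ambient + diffuse;
  }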


Global Illumination

Global illumination methods try to overcome some of the problems associated with Ray Tracing. While a Ray Tracer tends to simulate light reflecting only once off each diffuse surface, global illumination renderers simulate very many reflections of light around a scene.
While each object in a Ray Traced scene must be lit by some light source for it to be visible, an object in a Globally Illuminated scene may be lit simply by its surroundings.
The reason this makes a difference will become clear soon.

Global Illumination Problems and Advantages

Images produced by global illumination methods can look very convincing indeed; in a league of their own, leaving old school renderers to churn out sad cartoons. But, and it's a big but, they are slower. Just as you may once have left your ray tracer running all day, and come back to be thrilled by the image it produced, you will be doing the same here.
Radiosity:
  Advantages:
    - Very realistic lighting for diffuse surfaces
    - View independent
    - Conceptually simple and easy to implement
  Disadvantages:
    - Very slow
    - Does not handle point light sources or shiny surfaces well
    - Always overcomplicated and poorly explained in books

Monte Carlo Method:
  Advantages:
    - Very, very good results
    - Can simulate pretty well any optical phenomenon
  Disadvantages:
    - View dependent

   

Lighting a simple scene with Direct Lighting

   I modeled this simple scene in 3D Studio. I wanted the room to look as if it was lit by the sun shining in through the window.

   So, I set up a spotlight to shine in. When I rendered it, the entire room was pitch black, except for a couple of patches on the floor that the light reached.
   Turning up the Ambient Light simply caused the room to appear a uniform grey, except for the uniformly red floor, and light patches.
   Adding a point light source in the middle of the room brought out the details, but the scene still didn't have that bright glow you expect from a sunlit room.
   Lastly, I turned the background colour to white, to give the appearance of a bright sky.

   

Lighting a simple scene with Global Lighting

   I modeled the same scene in my own radiosity renderer. To provide the source of light, I rendered an image of the sky with Terragen, and placed it outside the window. No other source of light was used.

With no further effort on my part, the room looks realistically lit.
Interesting points to note:

  • The entire room is lit and visible, even those surfaces facing away from the sun.
  • Soft shadows.
  • The subtle change in brightness across the wall to the left of the scene.
  • The grey walls, far from being grey, have a certain warmth to them. The ceiling could even be said to be ever so slightly pink.


The Workings of a Radiosity Renderer

Clear your mind of anything you know about normal rendering methods. Your previous experiences may simply distract you.

I would now like to hand you over to an expert on shadows, who will explain to you everything they know about the subject. My expert is a tiny patch of paint on the wall in front of me.

So there you have it. The basic premise of Radiosity. Any light that hits a surface is reflected back into the scene. That's any light. Not just light that's come directly from light sources. Any light. That's how paint in the real world thinks, and that's how the radiosity renderer will work.

In my next article, I will be explaining how you can make your own talking paint.

So, the basic principle behind the radiosity renderer is to remove the distinction between objects and light sources. Now, you can consider everything to be a potential light source.
Anything that is visible is either emitting or reflecting light, i.e. it is a source of light. A Light Source. Everything you can see around you is a light source. And so, when we are considering how much light is reaching any part of a scene, we must take care to add up light from all possible light sources.

Basic Premises:

Now that you have the important things in mind, I will take you through the process of performing Radiosity on a scene.


A Simple Scene

We begin with a simple scene: a room with three windows. There are a couple of pillars and some alcoves, to provide interesting shadows.

It will be lit by the scenery outside the windows, which I will assume is completely dark, except for a small, bright sun.

Now, let's choose one of the surfaces in the room, and consider the lighting on it.

As with many difficult problems in computer graphics, we'll divide it up into little patches (of paint), and try to see the world from their point of view.

From now on I'll refer to these patches of paint simply as patches.

Take one of those patches. And imagine you are that patch. What does the world look like from that perspective?

View from a patch

Placing my eye very carefully on the patch, and looking outwards, I can see what it sees. The room is very dark, because no light has entered yet. But I have drawn in the edges for your benefit.

By adding together all the light it sees, we can calculate the total amount of light from the scene reaching the patch. I'll refer to this as the total incident light from now on.

This patch can only see the room and the darkness outside. Adding up the incident light, we would see that no light is arriving here. This patch is darkly lit.

View from a lower patch

Pick a patch a little further down the pillar. This patch can see the bright sun outside the window. This time, adding up the incident light will show that a lot of light is arriving here. This patch is brightly lit.

Lighting on the Pillar

Having repeated this process for all the patches, and added up the incident light each time, we can look back at the pillar and see what the lighting is like.

The patches nearer the top of the pillar, which cannot see the sun, are in shadow, and those that can see it are brightly lit. Those that can see the sun partly obscured by the edge of the window are only dimly lit.

And so Radiosity proceeds in much the same fashion. As you have seen, shadows naturally appear in parts of the scene that cannot see a source of light.

Entire Room Lit: 1st Pass

Repeating the process for every patch in the room gives us this scene. Everything is completely dark, except for surfaces that have received light from the sun.

So, this doesn't look like a very well lit scene. Ignore the fact that the lighting looks blocky; we can fix that by using many more patches. What's important to notice is that the room is completely dark, except for those areas that can see the sun. At the moment it's no improvement over any other renderer. Well, it doesn't end here. Now that some parts of the room are brightly lit, they have become sources of light themselves, and could well cast light onto other parts of the scene.

View from the patch after 1st Pass

Patches that could not see the sun, and so received no light, can now see the light shining on other surfaces. So in the next pass, this patch will come out slightly lighter than the completely black it is now.

Entire Room Lit: 2nd Pass

This time, when you calculate the incident light on each patch in the scene, many patches that were black before are now lit. The room is beginning to take on a more realistic appearance.

What's happened is that sunlight has reflected once from the floor and walls, onto other surfaces.

Entire Room Lit: 3rd Pass

The third pass produces the effect of light having reflected twice in the scene. Everything looks pretty much the same, but is slightly brighter.

The next pass only looks a little brighter than the last, and even the 16th is not a lot different. There's not much point in doing any more passes after that.

The radiosity process slowly converges on a solution. Each pass is a little less different than the last, until eventually it becomes stable. Depending on the complexity of the scene, and the lightness of the surfaces, it may take a few, or a few thousand passes. It's really up to you when to stop it, and call it done.

4th Pass 16th Pass


The Algorithm In More Detail: Patches

Emission
Though I have said that we'll consider light sources and objects to be basically the same, there must obviously be some source of light in the scene. In the real world, some objects do emit light, and some don't, and all objects absorb light to some extent. We must somehow distinguish between parts of the scene that emit light, and parts that don't. We shall handle this in radiosity by saying that all patches emit light, but for most patches, their light emission is zero. This property of a patch, I'll call emission.

Reflectance
When light hits a surface, some light is absorbed and becomes heat, (we can ignore this) and the rest is reflected. I'll call the proportion of light reflected by a patch reflectance.

Incident and Excident Light
During each pass, it will be necessary to remember two other things, how much light is arriving at each patch, and how much light is leaving each patch. I'll call these two, incident light and excident light. The excident light is the visible property of a patch. When we look at a patch, it is the excident light that we're seeing.

    incident light = sum of all light that a patch can see

    excident light = (incident light * reflectance) + emission


Patch structure
Now that we know all the necessary properties of a patch, it's time to define a patch. Later, I'll explain the details of the four variables.

  structure PATCH
    emission
    reflectance
    incident
    excident
  end structure
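In C++ that might look something like this. This is only a sketch; the Colour type here stands in for the RGB floating point 'light' structure defined further down the article.

  // A sketch of the patch structure in C++.
  struct Colour { float r, g, b; };

  struct Patch
  {
      Colour emission;     // light the patch gives off by itself (black for most patches)
      Colour reflectance;  // proportion of incoming light reflected, per channel
      Colour incident;     // light arriving at the patch during the current pass
      Colour excident;     // light leaving the patch: this is what we see
  };

  // The per-pass update for a single patch:
  //   excident = (incident * reflectance) + emission
  void UpdateExcident(Patch &p)
  {
      p.excident.r = p.incident.r * p.reflectance.r + p.emission.r;
      p.excident.g = p.incident.g * p.reflectance.g + p.emission.g;
      p.excident.b = p.incident.b * p.reflectance.b + p.emission.b;
  }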


Now that I've explained the basics of the algorithm, I'll tell it again in pseudocode form, to make it concrete. Clearly this is still quite high level, but I'll explain in more detail later.

Radiosity Pseudocode: Level 1


  load scene

  divide each surface into roughly equal sized patches

  initialise_patches:
  for each Patch in the scene
    if this patch is a light then
      patch.emission = some amount of light
    else
      patch.emission = black
    end if
    patch.excident = patch.emission
  end Patch loop


  Passes_Loop:

  each patch collects light from the scene:
  for each Patch in the scene
    render the scene from the point of view of this patch
    patch.incident = sum of incident light in rendering
  end Patch loop

  calculate excident light from each patch:
  for each Patch in the scene
    I = patch.incident
    R = patch.reflectance
    E = patch.emission
    patch.excident = (I*R) + E
  end Patch loop

  Have we done enough passes?
    if not then goto Passes_Loop



Explanation of Code

initialise patches:
To begin with, all patches are dark, except those that are emitting light. So, those patches are initialised with some value of emission, which would have been specified by the scene. All other patches are given zero emission (black).

Passes Loop:
The code repeats this loop as many times as is necessary to produce acceptable lighting in the scene.

each patch collects light from the scene
As I explained earlier in the article, each patch is lit by what it can see around it. This is achieved by simply rendering the scene from the point of view of the patch, and adding up the light it sees. I'll explain this in more detail in the next section.

calculate excident light from each patch:
Having worked out how much light is arriving at each patch, we can now work out how much light is leaving each patch.


This process must be repeated many times to get a good effect. If the renderer needs another pass, then we jump back to Passes_Loop.
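To make that concrete, here is a minimal C++ sketch of the same outer loop. It assumes the Patch structure and UpdateExcident() helper sketched earlier, a scene that is nothing more than an array of patches, and a CalcIncidentLight() function standing in for the "render the scene from the point of view of this patch" step, which is covered in the next section.

  #include <vector>

  // Assumed helper: renders the scene from the viewer patch's point of
  // view and sums up the light it sees (see the hemicube section below).
  Colour CalcIncidentLight(const std::vector<Patch> &scene, const Patch &viewer);

  void SolveRadiosity(std::vector<Patch> &scene, int numPasses)
  {
      // initialise_patches: emitters start out showing their emission,
      // everything else starts out black.
      for (Patch &p : scene)
          p.excident = p.emission;

      for (int pass = 0; pass < numPasses; ++pass)   // Passes_Loop
      {
          // Each patch collects light from the scene.  Note that this
          // only ever reads the excident values from the previous pass.
          for (Patch &p : scene)
              p.incident = CalcIncidentLight(scene, p);

          // Calculate excident light from each patch:
          // excident = (incident * reflectance) + emission
          for (Patch &p : scene)
              UpdateExcident(p);
      }
  }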


Implementing Radiosity: Hemicubes

The first thing we'll have to deal with in implementing radiosity is the problem of looking at the world from the point of view of each patch. So far in this article I have used a fish-eye view to represent a patch's eye view of the scene, but this isn't easy or practical. There is a much better way: the Hemicube!

The Hemisphere
Imagine a fish eye view wrapped onto a hemisphere. Place the hemisphere over a patch (left: red square), and from that patch's point of view, the scene wrapped on the inside of the hemisphere looks just like the real scene. There's no difference.

Placing a camera in the middle of the hemisphere, you can see that the view looks just like any other rendering of the scene (right).

If you could find a way to render a fisheye view easily, then you could just sum up the brightness of every pixel to calculate the total incident light on the patch. However, it's not easy to render a fisheye view, and so some other way must be found to calculate the incident light.


Rendering from the centre of the hemisphere
The Hemicube
Surprisingly (or unsurprisingly, depending on how mathematical you are) a hemicube looks exactly the same as a hemisphere from the patch's point of view.

Rendering from the centre of the hemicube

Unfolding the Hemicube

Imagine unfolding the hemicube. What are you left with? One square image and four rectangular images. The square image in the center is a rendering from the point of view of the patch, looking directly forwards. The other four parts of the hemicube are the views looking 90° up, down, left and right.

So, you can easily produce each of these images by placing a camera on a patch, and rendering views pointing forwards, up, down, left and right. The four side views are, of course, cut in half, and so only half a rendering is required for each of them.

Compensating for the hemicube's shape

This is a view of 3 spheres, rendered with a FOV of 90°. All three spheres are the same distance from the camera, but because of the properties of the perspective transformation, objects at the edge of the image appear stretched and larger than ones in the middle.

If this was the middle image of a hemicube, and the three spheres were light sources, then those near the edge would cast more light onto the patch than they should. This would be inaccurate, and so we must compensate for this.

If you were to use a hemicube to calculate the total incident light falling on a patch, and just added together the values of all the pixels rendered in the hemicube, you would be giving an unfair weight to objects lying at the corners of the hemicube. They would appear to cast more light onto the patch.

To compensate for this, it is necessary to 'dim' the pixels at the edges and corners, so that all objects contribute equally to the incident light, no matter where they may lie in the hemicube. Rather than give a full explanation, I'm just going to tell you how this is done.

Pixels on a surface of the hemicube are multiplied by the cosine of the angle between the direction the camera is facing in, and the line from the camera to the pixel.

On the left is an image of the map used to compensate for the distortion. (shown half size relative to the image above)


Lambert's Cosine Law

Any budding graphics programmer knows Lambert's cosine law: The apparent brightness of a surface is proportional to the cosine of the angle between the surface normal, and the direction of the light. Therefore, we should be sure to apply the same law here. This is simply done by multiplying pixels on the hemicube by the relevant amount.

On the left is an image of the map used to apply Lambert's law to the hemicube. White represents the value 1.0, and black represents the value 0.0. (shown half size relative to the image above)
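To make the two corrections concrete, here is a sketch of how the combined multiplier map for the front face of the hemicube might be built. This is an illustration under the assumption of a 90° field of view and a square face, not code lifted from any particular renderer. For the front face the camera looks straight along the patch normal, so the shape-compensation cosine and the Lambert cosine happen to be the same number; the four side faces are built the same way, except that the Lambert factor is then measured against the patch normal rather than the camera direction.

  #include <cmath>
  #include <vector>

  // Build the combined multiplier map for the FRONT face of an N x N
  // hemicube rendered with a 90 degree field of view.  Each entry is
  // the product of the two correction factors described above:
  //   - shape compensation: cos(angle between the camera direction and
  //     the line from the camera to the pixel)
  //   - Lambert's law:      cos(angle between the patch normal and that
  //     same line)
  // For the front face these two angles are identical.
  std::vector<float> BuildFrontMultiplierMap(int N)
  {
      std::vector<float> map(N * N);

      for (int y = 0; y < N; ++y)
      {
          for (int x = 0; x < N; ++x)
          {
              // Direction through this pixel, with the image plane at
              // distance 1: u and v run from -1 to +1 across the face.
              float u = (x + 0.5f) * 2.0f / N - 1.0f;
              float v = (y + 0.5f) * 2.0f / N - 1.0f;

              float cosCam     = 1.0f / std::sqrt(u*u + v*v + 1.0f); // shape compensation
              float cosLambert = cosCam;                             // front face only

              map[x + y * N] = cosCam * cosLambert;   // = 1 / (u*u + v*v + 1)
          }
      }
      return map;
  }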


Calculating the Incident Light

This procedure takes a point in the scene (usually a patch), along with a normal vector, and calculates the total amount of light arriving at that point.

First, it renders the 5 faces of the hemicube using the procedure RenderView(point, vector, part). This procedure takes as its arguments a point, telling it where the camera should be for the rendering, a vector, telling it what direction the camera should be pointing in, and another argument telling it which part of the final image should be rendered. These 5 images are stored in a hemicube structure called H (left column of images below).

Once the hemicube H has been rendered, it is multiplied by the multiplier hemicube M (middle column of images below), and the result is stored in the hemicube R (right column of images below).

Then the total value of the light in R is added up and divided by the total value of the multiplier hemicube M (the div in the pseudocode below). This gives the total amount of light arriving at the point in question.


  procedure Calc_Incident_Light(point: P, vector: N)

    light TotalLight
    hemicube H, R, M

    H = empty
    M = Multiplier Hemicube
    R = empty

    div = sum of all values in M

    camera C
    C.lens = P

    C.direction = N
    H.front = RenderView(C.lens, C.direction, Full_View)

    C.direction = N rotated 90° down
    H.down = RenderView(C.lens, C.direction, Top_Half)

    C.direction = N rotated 90° up
    H.up = RenderView(C.lens, C.direction, Bottom_Half)

    C.direction = N rotated 90° left
    H.left = RenderView(C.lens, C.direction, Right_Half)

    C.direction = N rotated 90° right
    H.right = RenderView(C.lens, C.direction, Left_Half)

    multiply all pixels in H by the corresponding
    pixels in M, storing the results in R

    TotalLight = black

    loop p through each pixel in R
      add p to TotalLight
    end loop

    divide TotalLight by div

    return TotalLight

  end procedure


Explanation of Variable Types in Pseudocode

light: Used for storing any light value. For example:
  structure light
    float Red
    float Green
    float Blue
  end structure

hemicube: used for storing the view of a scene from the point of view of some point in the scene. A Hemicube consists of five images, as illustrated above, where each pixel is of type light. In the case of the Multiplier Hemicube, each pixel stores not a light value but a multiplier between 0.0 and 1.0, as illustrated above.

  structure hemicube
    image front
    image up
    image down
    image left
    image right
  end structure

camera: for example
  structure camera
    point  lens
    vector direction
  end structure


Increasing the accuracy of the solution

You'll be thinking to yourself, 'damn, this seems like a whole lot of rendering. A very processor intensive way of doing things.' You'd be right of course. Basically you have to render a texture mapped scene many thousands of times.

Fortunately, this is something people have been doing since the dawn of time. Um, since the dawn of the raster display anyway, and since then much work has been put into rendering texture mapped scenes as fast as possible. I won't go into a whole lot of detail here; I'm really not the person best qualified to be talking about optimised rendering. My own renderer is so slow you have to use cussing words to describe it. The algorithm also lends itself well to optimisation with standard 3D graphics hardware, though you have to do some fiddling and chopping to get it to render (3x32) bit textures.

The speed improvement I'm going to discuss in this article does not concern optimising the actual rendering of the hemicubes, but rather reducing the number of hemicubes that need to be rendered. You will, of course, have noticed that the light maps illustrated in the black and white renderings above were somewhat blocky and low resolution. Don't fear, their resolution can be increased as far as you want.


Take a look at the surface on the left, outlined in red. The lighting is basically very simple, there's a bright bit, and a less bright bit, with a fairly sharp edge between the two. To reproduce the edge sharply, you would normally need a high resolution light map and, therefore, have to render very many hemicubes. But it hardly seems worthwhile rendering so many hemicubes just to fill in the bright or less-bright areas which are little more than solid colour. It would be more worthwhile to render a lot of hemicubes near the sharp edge, and just a few in the other areas.

Well, it is possible, and quite straightforward. The algorithm I will describe below will render a few hemicubes scattered across the surface, then render more near the edges, and use linear interpolation to fill in the rest of the light map.


The Algorithm: On the far left you can see the light map in the process of being generated. Next to it, you can see which pixels were produced using a hemicube (red) and which were linearly interpolated (green).


1: Use a hemicube to calculate every 4th pixel.

2: Pass Type 1: Examine the pixels which are horizontally or vertically halfway between previously calculated pixels. If the neighbouring pixels differ by more than some threshold amount, then calculate this pixel using a hemicube; otherwise, interpolate from the neighbouring pixels.

3: Pass Type 2: Examine the pixels which are in the middle of a group of 4 previously calculated pixels. If the neighbours differ by more than the threshold, then use a hemicube for this pixel; otherwise use linear interpolation.

4: Pass Type 1: Same as step 2.

5: Pass Type 2: Same as step 3.

You should be able to see, from the maps on the left, that most of the light map was produced using linear interpolation. In fact, from a total of 1769 pixels, only 563 were calculated by hemicube, and 1206 by linear interpolation. Now, since rendering a hemicube takes a very long time indeed, compared to the negligible time required to do a linear interpolation, it represents a speed improvement of about 60%!

Now, this method is not perfect, and it can occasionally miss very small details in a light map, but it's pretty good in most situations. There's a simple way to help it catch small details, but I'll leave that up to your own imagination.

Here is the code for the adaptive sampling, in C-like form. I've stripped out my renderer's display and keyboard handling, to leave just the algorithm itself.



 // Returns how similar two values are: 1.0 means identical, values
 // near 0.0 mean very different.
 float ratio2(float a, float b)
 {
     if ((a==0) && (b==0))    return 1.0;
     if ((a==0) || (b==0))    return 0.0;

     if (a>b)    return b/a;
     else        return a/b;
 }

 // The worst (smallest) similarity of two pairs of values.
 float ratio4(float a, float b, float c, float d)
 {
     float q1 = ratio2(a,b);
     float q2 = ratio2(c,d);

     if (q1<q2)    return q1;
     else          return q2;
 }


 // Step 1: calculate every 4th pixel of the light map with a hemicube.
 // 'normal' is the normal of the surface whose light map we are filling
 // in.  (In the full renderer the sample point is also nudged slightly
 // out along the normal before the hemicube is rendered, so that the
 // patch cannot see itself.)
 Spacing = 4;

 for (y=0; y<Yres; y+=Spacing)
     for (x=0; x<Xres; x+=Spacing)
     {
         SamplePoint = /* coordinates of the centre of patch (x,y) */;
         LightMap[x + y*Xres] = Calc_Incident_Light(SamplePoint, normal);
     }


 RePass:
     // Neighbours whose similarity falls below this force a hemicube.
     threshold = pow(0.5, Spacing);
     HalfSpacing = Spacing/2;

     // Pass Type 1: pixels horizontally or vertically halfway between
     // two previously calculated pixels.  (Pixels right at the edge of
     // the map, which have no neighbour on one side, can simply copy
     // their nearest computed neighbour; that handling is omitted here.)
     for (y=0; y<Yres; y+=Spacing)
         for (x=0; x<Xres; x+=Spacing)
         {
             // Pixel halfway along the horizontal edge to the right.
             if (x+Spacing < Xres)
             {
                 x1 = x+HalfSpacing;  y1 = y;
                 I1 = LightMap[(x1-HalfSpacing) + y1*Xres];
                 I2 = LightMap[(x1+HalfSpacing) + y1*Xres];

                 if ( (ratio2(I1.R,I2.R) > threshold) &&
                      (ratio2(I1.G,I2.G) > threshold) &&
                      (ratio2(I1.B,I2.B) > threshold) )
                 {
                     // Neighbours agree: interpolate.
                     incidentLight.R = (I1.R+I2.R) * 0.5;
                     incidentLight.G = (I1.G+I2.G) * 0.5;
                     incidentLight.B = (I1.B+I2.B) * 0.5;
                 }
                 else
                 {
                     // Neighbours disagree: render a hemicube here.
                     SamplePoint = /* centre of patch (x1,y1) */;
                     incidentLight = Calc_Incident_Light(SamplePoint, normal);
                 }
                 LightMap[x1 + y1*Xres] = incidentLight;
             }

             // Pixel halfway along the vertical edge below.
             if (y+Spacing < Yres)
             {
                 x1 = x;  y1 = y+HalfSpacing;
                 I1 = LightMap[x1 + (y1-HalfSpacing)*Xres];
                 I2 = LightMap[x1 + (y1+HalfSpacing)*Xres];

                 if ( (ratio2(I1.R,I2.R) > threshold) &&
                      (ratio2(I1.G,I2.G) > threshold) &&
                      (ratio2(I1.B,I2.B) > threshold) )
                 {
                     incidentLight.R = (I1.R+I2.R) * 0.5;
                     incidentLight.G = (I1.G+I2.G) * 0.5;
                     incidentLight.B = (I1.B+I2.B) * 0.5;
                 }
                 else
                 {
                     SamplePoint = /* centre of patch (x1,y1) */;
                     incidentLight = Calc_Incident_Light(SamplePoint, normal);
                 }
                 LightMap[x1 + y1*Xres] = incidentLight;
             }
         }


     // Pass Type 2: pixels in the middle of a group of four previously
     // calculated pixels.
     for (y=HalfSpacing; y<Yres-HalfSpacing; y+=Spacing)
         for (x=HalfSpacing; x<Xres-HalfSpacing; x+=Spacing)
         {
             I1 = LightMap[x + (y-HalfSpacing)*Xres];
             I2 = LightMap[x + (y+HalfSpacing)*Xres];
             I3 = LightMap[(x-HalfSpacing) + y*Xres];
             I4 = LightMap[(x+HalfSpacing) + y*Xres];

             if ( (ratio4(I1.R,I2.R,I3.R,I4.R) > threshold) &&
                  (ratio4(I1.G,I2.G,I3.G,I4.G) > threshold) &&
                  (ratio4(I1.B,I2.B,I3.B,I4.B) > threshold) )
             {
                 incidentLight.R = (I1.R + I2.R + I3.R + I4.R) * 0.25;
                 incidentLight.G = (I1.G + I2.G + I3.G + I4.G) * 0.25;
                 incidentLight.B = (I1.B + I2.B + I3.B + I4.B) * 0.25;
             }
             else
             {
                 SamplePoint = /* centre of patch (x,y) */;
                 incidentLight = Calc_Incident_Light(SamplePoint, normal);
             }
             LightMap[x + y*Xres] = incidentLight;

             // (The real renderer also pushes each value through its
             // Exposure and Gamma tables and plots it on screen here,
             // so you can watch the light map being filled in.)
         }


     // Halve the spacing and repeat until every pixel has been filled in.
     Spacing >>= 1;

     if (Spacing > 1)
         goto RePass;










Misunderstanding and Confusion:

(What to do with the image once you've rendered it)

The output of a radiosity renderer is an image where each pixel consists of three floating point values, one for each of red, green and blue. The range of brightness values in this image may well be vast. As I have said before, the brightness of the sky is very much greater than the brightness of an average surface indoors. And the sun is thousands of times brighter than that. What do you do with such an image?

Your average monitor can at best produce only dim light, not a lot brighter than a surface indoors. Clearly you cannot display your image directly on a monitor. To do this would require a monitor that could produce light as bright as the sun, and a graphics card with 32 bits per channel. These things don't exist, for technical, not to mention safety, reasons. So what can you do?

Most people seem to be happy to look at photographs and accept them as faithful representations of reality. They are wrong. Photographs are no better than monitors for displaying real-life bright images. Photographs cannot give off light as bright as the sun, but people never question their realism. Now this is where confusion sets in.

Human Vision

Our vision is just about the most important sense we have. Every day I trust my life to it, and so far it hasn't got me killed. Frequently it has saved my life and limb. This was an important sense for our ancestors too, right back to the very first fish or whatever we evolved from. Our eyeballs have had a long time to evolve and have been critical to our survival, and so they have become very good indeed. They are sensitive to very low light levels (the dimmest flash you can see is as dim as 5 photons), and yet can cope with looking at the very bright sky.

Our eyeballs are not the only part of our vision; perhaps even more important is the brain behind them. An incredibly sophisticated piece of circuitry, poorly understood, and consisting of many layers of processing, it takes the output of our eyeballs and converts it to a sense of what actually exists in front of us. The brain has to be able to recognise the same objects no matter how they are lit, and actually does an amazing job of compensating for the different types of lighting we encounter. We don't even notice a difference when we walk from the outdoors lit by a bright blue sky, to the indoors lit by dim yellow lightbulbs. If you've ever tried to take photos in these two conditions, you may have had to change to a different type of film to stop your pictures coming out yellow and dark.

Try this: Go out on a totally overcast day. Stand in front of something white. If you look at the clouds, you will see them as being grey, but look at the white object, and it appears to be white. So what? Well, the white thing is lit by the grey clouds and so can't possibly be any brighter than them (in fact it will be darker), and yet we still perceive it to be white. If you don't believe me, take a photo showing the white thing and the sky in the background. You will see that the white thing looks darker than the clouds.

Don't trust your eyes: They are a hell of a lot smarter than you are.

So what can you do? Well, since people are so willing to accept photographs as representations of reality, we can do what a camera does: pass the raw image through an exposure function that mimics the way film responds to light, squashing the enormous range of brightnesses in the radiosity solution down into the limited range a monitor can display.
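One common choice of curve, as a sketch, is the one below. This is an assumption on my part, not necessarily the exact curve used by the renderer described above, but it captures the idea: brightness rises quickly at first and then saturates, so that no input, however huge, can exceed what the monitor can show.

  #include <cmath>

  // A film-like exposure curve: maps an unbounded positive light value
  // into the 0..255 range of a monitor.  The constant k plays the role
  // of the exposure setting on a camera.
  int Expose(float light, float k)
  {
      float v = 1.0f - std::exp(-light * k);   // 0 for darkness, approaching 1 for very bright light
      if (v < 0.0f) v = 0.0f;
      if (v > 1.0f) v = 1.0f;
      return (int)(v * 255.0f + 0.5f);
  }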


