In ancient history (the 1970s), well before I was born, people like Gouraud and Phong invented (or at least put to paper) most of the basic computer graphics algorithms. What we know as Gouraud shading is linear interpolation of light values over a surface: you only need a normal vector per vertex, lighting is calculated at those vertices, and the resulting values are interpolated across the surface. Phong went one step further: Phong shading is linear interpolation of the normal vectors themselves over the surface, so lighting can be calculated per pixel.
Bump mapping is an extension of Phong shading, where the normal vectors are distorted according to a "bump map", just like the color of the surface can be changed with a texture map. Both texture and bump maps can be used to make relatively simple geometry look very detailed and complex. They still don't change the silhouette of the object; for that you'd need displacement mapping, but that's way outside our scope.
We'll do bump mapping in 2d by calculating a lookup table which will then be used to distort our "light map". Thus, we're faking bump mapping. On the other hand, bump mapping is a fake effect to begin with.
Make a copy of the previous chapter as usual, and open the new one.
We'll need blend_add() from chapter 5, as well as the picture, heightmap and lightmap images (right-click, save as).
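In case you don't have your chapter 5 code at hand: blend_add() adds two 32-bit colors together channel by channel, clamping each channel to 255. Here's a minimal sketch of one way to write it; your chapter 5 version may well look different, and that's the one to use:
int blend_add(int source, int target)
{
    // Add the colors channel by channel, clamping each 8-bit channel
    // to 255 so bright areas saturate instead of wrapping around.
    unsigned int s = (unsigned int)source;
    unsigned int t = (unsigned int)target;
    unsigned int result = 0;
    for (int shift = 0; shift < 32; shift += 8)
    {
        unsigned int c = ((s >> shift) & 0xff) + ((t >> shift) & 0xff);
        if (c > 0xff) c = 0xff;
        result |= c << shift;
    }
    return (int)result;
}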
Next, we'll load the images. Add the following after the screen surface definition, near the top of the file:
// Picture
int *gPicture;
// Heightmap
int *gHeightmap;
// Lightmap
int *gLightmap;
// Picture size
int gPictureWidth, gPictureHeight;
and the loading code in init()
:
void init()
{
    int x, y, n;
    // Load everything as 32-bit RGBA; we only keep the picture's dimensions.
    gPicture = (int*)stbi_load("picture.png", &gPictureWidth, &gPictureHeight, &n, 4);
    gHeightmap = (int*)stbi_load("heightmap.png", &x, &y, &n, 4);
    gLightmap = (int*)stbi_load("lightmap.png", &x, &y, &n, 4);
}
Since we care about the picture sizes, we take the values into our globals. The heightmap has to be the same size as the picture, and we know the lightmap to be 256 by 256.
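If you want to catch mistakes early, you can verify those assumptions right after loading. This is just an optional sketch (not part of the chapter's code), using SDL_Log() (or printf()) and exit() from stdlib.h:
// Optional sanity check at the end of init(): stbi_load() returns NULL on failure.
if (!gPicture || !gHeightmap || !gLightmap)
{
    SDL_Log("Failed to load one of the images");
    exit(1);
}
// You could also capture the heightmap's dimensions into separate variables
// and check that they match gPictureWidth and gPictureHeight.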
Let's start by just drawing the image. Replace render()
again:
void render(Uint64 aTicks)
{
    for (int i = 0; i < gPictureHeight; i++)
    {
        for (int j = 0; j < gPictureWidth; j++)
        {
            gFrameBuffer[i * WINDOW_WIDTH + j] = gPicture[i * gPictureWidth + j];
        }
    }
}
When you compile and run, you should get the picture in the top left corner of your window. If you get a crash, your window is not big enough.
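If you'd rather not crash, you could clamp the loops to the window size; a small variation of the loops above (my addition, unnecessary if your window is big enough):
// Never step outside the framebuffer, even if the picture is larger than the window.
int h = gPictureHeight < WINDOW_HEIGHT ? gPictureHeight : WINDOW_HEIGHT;
int w = gPictureWidth < WINDOW_WIDTH ? gPictureWidth : WINDOW_WIDTH;
for (int i = 0; i < h; i++)
{
    for (int j = 0; j < w; j++)
    {
        gFrameBuffer[i * WINDOW_WIDTH + j] = gPicture[i * gPictureWidth + j];
    }
}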
Let's alter the rendering a bit to render the light on top of the picture. Here's the new version of render():
void render(Uint64 aTicks)
{
    for (int i = 0; i < gPictureHeight; i++)
    {
        for (int j = 0; j < gPictureWidth; j++)
        {
            int u = j;
            int v = i;
            if (v < 0 || v >= 256 ||
                u < 0 || u >= 256)
            {
                gFrameBuffer[j + i * WINDOW_WIDTH] = gPicture[j + i * gPictureWidth];
            }
            else
            {
                gFrameBuffer[j + i * WINDOW_WIDTH] =
                    blend_add(
                        gPicture[j + i * gPictureWidth],
                        gLightmap[u + v * 256]);
            }
        }
    }
}
Here we're making u
and v
the x- and y-coordinates of the light texture. If the u and v coordinates are outside the bounds of the texture, we're plotting the image as is. If we're inside the texture, we add the texture to the picture before plotting.
There are reasons why we're doing things in this slightly awkward way. Let's modify things a bit. Change the int u..
and int v..
lines to the following:
int u = j + (int)(sin((aTicks + i * 5) * 0.01234987) * 7);
int v = i + (int)(sin((aTicks + j * 5) * 0.01254987) * 7);
Compile and run. We're distorting the lightmap with a couple of sin() values. Feel free to play with the values to see what happens.
The bad side of an effect like this is that it's difficult to predict when clipping is needed, so we're forced to check whether we're on the map at every pixel. One alternative would be to use a huge texture, or a wrapping one; at small resolutions that might be feasible.
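Since our lightmap happens to be 256 by 256, a power of two, wrapping is especially cheap: masking the coordinates with 255 makes the texture repeat endlessly. Here's a sketch of what the inner loop could look like with wrapping instead of clipping; just an aside, we won't use it below:
// Wrap instead of clip: masking with 255 makes the coordinates repeat
// every 256 pixels, so the bounds check (and the if/else) can go away.
int u = (j + (int)(sin((aTicks + i * 5) * 0.01234987) * 7)) & 255;
int v = (i + (int)(sin((aTicks + j * 5) * 0.01254987) * 7)) & 255;
gFrameBuffer[j + i * WINDOW_WIDTH] =
    blend_add(
        gPicture[j + i * gPictureWidth],
        gLightmap[u + v * 256]);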
Our eventual goal is to distort the light based on the height map.
First, let's add the lookup table, near the top of the file, after the definition of images:
// Bump lookup table
short *gBumpLut;
Next, paste the following in the init()
function, after the images are loaded:
gBumpLut = new short[gPictureWidth * gPictureHeight];
for (int i = 0; i < gPictureHeight; i++)
{
    for (int j = 0; j < gPictureWidth; j++)
    {
        gBumpLut[i * gPictureWidth + j] =
            gHeightmap[j + i * gPictureWidth] & 0xff;
    }
}
Here we allocate the lookup table and then copy 8-bit values from the heightmap image to it.
Next, change the int u..
and int v..
lines in render() to:
int u = j + gBumpLut[i * gPictureWidth + j];
int v = i + gBumpLut[i * gPictureWidth + j];
When you run, you'll see that the light is distorted based on the heightmap. It's still a bit far from what we'll want eventually, but let's animate the light so you'll see what is happening.
Add the following lines before the for
loops in render()
:
int posx = (int)((sin(aTicks * 0.000645234) + 1) * gPictureWidth / 4);
int posy = (int)((sin(aTicks * 0.000445234) + 1) * gPictureHeight / 4);
Subtract posx
from u
and posy
from v
. Compile and run.
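In other words, the two lines end up as:
int u = j + gBumpLut[i * gPictureWidth + j] - posx;
int v = i + gBumpLut[i * gPictureWidth + j] - posy;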
We'll need to adjust the lookup table a bit to reach the effect we're after. We'll want to distort the image both horizontally and vertically depending on the slopes of the height field image. Change the lookup table calculation to look like this:
gBumpLut = new short[gPictureWidth * gPictureHeight];
memset(gBumpLut, 0, sizeof(short) * gPictureWidth * gPictureHeight);
for (int i = 1; i < gPictureHeight - 1; i++)
{
    for (int j = 1; j < gPictureWidth - 1; j++)
    {
        // Vertical slope: height above minus height below.
        int ydiff =
            ((((unsigned int*)gHeightmap)[j + (i - 1) * gPictureWidth] & 0xff) -
             (((unsigned int*)gHeightmap)[j + (i + 1) * gPictureWidth] & 0xff));
        // Horizontal slope: height to the left minus height to the right.
        int xdiff =
            ((((unsigned int*)gHeightmap)[j - 1 + i * gPictureWidth] & 0xff) -
             (((unsigned int*)gHeightmap)[j + 1 + i * gPictureWidth] & 0xff));
        // Pack both slopes into one 16-bit value: y in the high byte, x in the low byte.
        gBumpLut[i * gPictureWidth + j] = ((ydiff & 0xff) << 8) | (xdiff & 0xff);
    }
}
So, what's changed? First off, inside the loops we're calculating x and y slopes for each pixel by subtracting the next pixel's value from the previous one's. Then we pack the two slopes into one 16-bit value.
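As a concrete example of the packing (slope values made up for illustration): if xdiff is -3 and ydiff is 5, the stored 16-bit value is 0x05FD, and both slopes can be read back out the way render() will do in a moment:
int xdiff = -3;                                          // low byte ends up as 0xFD
int ydiff = 5;                                           // high byte ends up as 0x05
short packed = ((ydiff & 0xff) << 8) | (xdiff & 0xff);   // 0x05FD
signed char back_x = (signed char)packed;                // -3 again
int back_y = packed / 256;                               // 5 again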
Since we're reading the previous and next pixel, we must be careful not to read outside the image. The for loops have been changed to start from the second pixel and to stop before the last pixel. The borders of our lookup table will be left uninitialized, so we need to add a memset()
to clear all the values to zero before we begin.
Note that the gBumpLut
table now contains two 8-bit values. Unlike with the 2d tunnel, both of the values are signed - the result from the subtraction can come out as negative. We'll have to be careful to keep the values signed when we're reading the lookup table. To make things safer, we could have defined the lookup table as an array of two signed 8-bit values.
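If you want to go down that route, here's roughly what it could look like; just a sketch (the rest of this chapter sticks with the packed shorts), with BumpOffset and gBumpLut2 as names I made up:
// Alternative: store the two slopes as explicit signed bytes instead of packing them.
struct BumpOffset
{
    signed char x;
    signed char y;
};
BumpOffset *gBumpLut2;  // hypothetical replacement for gBumpLut

// In init(), inside the same loops as before:
//   gBumpLut2[i * gPictureWidth + j].x = (signed char)xdiff;  // wraps like the & 0xff did
//   gBumpLut2[i * gPictureWidth + j].y = (signed char)ydiff;
// And in render():
//   int u = j + gBumpLut2[i * gPictureWidth + j].x - posx;
//   int v = i + gBumpLut2[i * gPictureWidth + j].y - posy;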
In render()
the int u..
and int v..
lines need to change to:
int u = j + ((signed char)gBumpLut[i * gPictureWidth + j]) - posx;
int v = i + (gBumpLut[i * gPictureWidth + j] / 256) - posy;
Compile and run.
Note that we're taking special care to preserve the signedness of our values. The bottom 8 bits are handled by casting the value to signed char, which we expect to be an 8-bit type. The top 8 bits are handled by dividing the value by 256, which is nearly the same as an arithmetic (sign-preserving) right shift by 8 bits; the division rounds toward zero instead of toward negative infinity, so negative values can come out one off, which doesn't matter for this effect. Since the C standard leaves right-shifting negative values implementation-defined, we use the division and hope that the compiler turns it into something just as cheap.
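If you'd rather not lean on the compiler at all, you can go through the unsigned representation and cast each byte back to signed yourself. A small sketch (my addition); it recovers the exact stored bytes, so for negative y slopes it can differ from the division by one, which makes no visible difference here:
// Explicitly unpack the two signed 8-bit slopes from one lookup value.
static signed char bump_x(short packed)
{
    return (signed char)(packed & 0xff);                // low byte
}

static signed char bump_y(short packed)
{
    return (signed char)((unsigned short)packed >> 8);  // high byte, no signed shift needed
}

// Usage in render():
//   int u = j + bump_x(gBumpLut[i * gPictureWidth + j]) - posx;
//   int v = i + bump_y(gBumpLut[i * gPictureWidth + j]) - posy;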
Finally, let's scale the effect to the whole window. Here's the final render()
:
void render(Uint64 aTicks)
{
    int posx = (int)((sin(aTicks * 0.000645234) + 1) * gPictureWidth / 4);
    int posy = (int)((sin(aTicks * 0.000445234) + 1) * gPictureHeight / 4);
    for (int y = 0; y < WINDOW_HEIGHT; y++)
    {
        for (int x = 0; x < WINDOW_WIDTH; x++)
        {
            int i = y * gPictureHeight / WINDOW_HEIGHT;
            int j = x * gPictureWidth / WINDOW_WIDTH;
            int u = j + ((signed char)gBumpLut[i * gPictureWidth + j]) - posx;
            int v = i + (gBumpLut[i * gPictureWidth + j] / 256) - posy;
            if (v < 0 || v >= 256 ||
                u < 0 || u >= 256)
            {
                gFrameBuffer[x + y * WINDOW_WIDTH] = gPicture[j + i * gPictureWidth];
            }
            else
            {
                gFrameBuffer[x + y * WINDOW_WIDTH] =
                    blend_add(
                        gPicture[j + i * gPictureWidth],
                        gLightmap[u + v * 256]);
            }
        }
    }
}
Here we're scanning through every pixel in the window. i
and j
are still in the picture coordinates, while the new x
and y
are in window coordinates. i
and j
get calculated by scaling x and y by the ratio of the picture and window sizes. This is called "point sampling", and it's the cheapest way to scale images. The downside is that it tends to create aliasing.
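As a quick sanity check of the mapping (with made-up sizes): if the window is 960 pixels wide and the picture 320, the last window column x = 959 gives j = 959 * 320 / 960 = 319, the last picture column, so we never read past the picture.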
Resizing images is a complete research field in itself, and since we're inventing new data out of thin air, there is no such thing as a perfect scaling algorithm. Bilinear filtering is one relatively cheap option, but it tends to blur the resulting image. On the other end of the spectrum we have AI scaling algorithms, which are extremely heavy compared to our humble point sampler.
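Just to give an idea of what the next step up would look like, here's a sketch of a bilinear sample of our picture; my addition, not something we'll plug into the effect:
// Bilinear sample of gPicture at fractional coordinates (fx, fy).
// Blends the four surrounding texels channel by channel.
int bilinear_sample(float fx, float fy)
{
    int x0 = (int)fx, y0 = (int)fy;
    int x1 = (x0 + 1 < gPictureWidth) ? x0 + 1 : x0;
    int y1 = (y0 + 1 < gPictureHeight) ? y0 + 1 : y0;
    float tx = fx - x0, ty = fy - y0;
    unsigned int c00 = (unsigned int)gPicture[y0 * gPictureWidth + x0];
    unsigned int c10 = (unsigned int)gPicture[y0 * gPictureWidth + x1];
    unsigned int c01 = (unsigned int)gPicture[y1 * gPictureWidth + x0];
    unsigned int c11 = (unsigned int)gPicture[y1 * gPictureWidth + x1];
    unsigned int result = 0;
    for (int shift = 0; shift < 32; shift += 8)
    {
        float a = (float)((c00 >> shift) & 0xff);
        float b = (float)((c10 >> shift) & 0xff);
        float c = (float)((c01 >> shift) & 0xff);
        float d = (float)((c11 >> shift) & 0xff);
        float top = a + (b - a) * tx;
        float bottom = c + (d - c) * tx;
        result |= (unsigned int)(top + (bottom - top) * ty) << shift;
    }
    return (int)result;
}
// In the scaling loops you'd call it with fractional picture coordinates, e.g.
//   bilinear_sample(x * (gPictureWidth - 1) / (float)WINDOW_WIDTH,
//                   y * (gPictureHeight - 1) / (float)WINDOW_HEIGHT);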
Next up: 09 - Distorted Transitions.
Any comments etc. can be emailed to me.