So our demo finished first in TMDC20, the 20th pseudoannual text mode demo competition.
Now that the demo is out, there are still a few things I have to do: a writeup on the demo (which you're reading right now), and I should clean up and release TextFX8 and MunRay. But let's start with a retrospective on making the demo.
In the middle of a rather stressful year, the current organizers of TMDC contacted me about the 2nd place I had won earlier, asking for my postal address. They then sent me a neat laser-engraved wooden floppy disk with the demo's info printed on it.
When TMDC20 was approaching, I figured what the heck, let's do one more. I started by thinking about what I hadn't done before, and real-time raytracing is one of those things.
So I started working on the first raytracer I've ever written. I'll call it MunRay, a name that is a bad joke on many levels.
(Of COURSE the raytracer has a logo. Every project needs a logo!)
Back in the 1990's, real-time raytracing meant overlaying the screen with a dynamic grid and subdividing it depending on where object edges were, so you wouldn't waste rays on more or less smooth surfaces and could simply interpolate texture coordinates over those areas.
In the 2010's, real-time raytracing is just raytracing. The only "fancy" optimization I did was to put all the objects into a bounding sphere tree, which is not the best possible data structure for the job, but it was easy to implement.
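To give an idea of what a bounding sphere tree buys you, here's a minimal sketch; all the names and the exact layout are invented for illustration, not MunRay's actual code. The point is that a ray that misses a node's bounding sphere can skip that entire subtree:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Does the ray (origin o, normalized direction d) hit a sphere at all?
// For culling we only need a yes/no, not the hit distance.
bool rayHitsSphere(Vec3 o, Vec3 d, Vec3 center, float radius)
{
    Vec3 oc = sub(o, center);
    float b = dot(oc, d);
    float c = dot(oc, oc) - radius * radius;
    // Discriminant must be non-negative, and the hit must not be behind us
    // (c <= 0 means the origin is inside the sphere).
    return b * b - c >= 0 && (c <= 0 || b <= 0);
}

struct SphereNode
{
    Vec3 center;
    float radius;                    // bounds everything below this node
    std::vector<SphereNode> children;
    int objectIndex = -1;            // leaf: index into the scene's object list
};

// Collect candidate objects; whole subtrees are skipped when the ray
// misses their bounding sphere.
void gatherCandidates(const SphereNode &n, Vec3 o, Vec3 d,
                      std::vector<int> &out)
{
    if (!rayHitsSphere(o, d, n.center, n.radius))
        return;
    if (n.objectIndex >= 0)
        out.push_back(n.objectIndex);
    for (const SphereNode &c : n.children)
        gatherCandidates(c, o, d, out);
}
```

The candidates that survive still need the full intersection tests, but for a mostly empty screen the tree culls the bulk of the work per ray.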
Okay, granted, thanks to the TMDC's resolution limits the render target is 160x100, and I threw eight worker threads at it, so the CPU power per pixel is rather high. I didn't even need to use the GPU, like some people =)
I implemented spheres, infinite planes and axis-aligned boxes, grabbing routines from various articles on the web and modifying them as needed. I also wrote a simple scene editor for it using Dear ImGui, but it's so horrible to use (not ImGui's fault) that it's not really "artist friendly". Still, it was way better than nothing: I could plan camera positions with it, and some of the simpler scenes were written in it too.
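The axis-aligned box case is the kind of routine you find in articles all over the web: the classic "slab" test. This is a sketch of that standard algorithm, not MunRay's exact code:

```cpp
#include <algorithm>
#include <cmath>

// Slab-based ray/AABB intersection. Returns true and the entry
// distance t along the ray if the box is hit in front of the origin.
bool rayVsAabb(const float origin[3], const float dir[3],
               const float boxMin[3], const float boxMax[3], float &t)
{
    float tNear = -1e30f, tFar = 1e30f;
    for (int axis = 0; axis < 3; axis++)
    {
        if (std::fabs(dir[axis]) < 1e-12f)
        {
            // Ray parallel to this slab: miss unless origin is inside it.
            if (origin[axis] < boxMin[axis] || origin[axis] > boxMax[axis])
                return false;
            continue;
        }
        float inv = 1.0f / dir[axis];
        float t0 = (boxMin[axis] - origin[axis]) * inv;
        float t1 = (boxMax[axis] - origin[axis]) * inv;
        if (t0 > t1) std::swap(t0, t1);
        tNear = std::max(tNear, t0);
        tFar  = std::min(tFar, t1);
        if (tNear > tFar || tFar < 0)
            return false; // slabs don't overlap, or box is behind us
    }
    t = tNear;
    return true;
}
```

Each axis narrows the interval of ray distances that lie between that axis's two box faces; if the interval survives all three axes, the ray hits the box.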
I had this idea: since various other rendering techniques try to fake raytracing (environment maps, shadow maps, etc), what kinds of effects could I "fake" with raytracing? Things like interference circles, checkerboard zoomers, a magnifying glass on top of a bitmap, and so on.
For this I planned on adding texture maps with alpha for holes, CSG, a camera path editor and some more advanced lighting models, but never got around to any of that. There simply wasn't time, so I made a demo with what I had. I also had refraction and transparency, but managed to break them before starting on the demo, so they didn't get used.
For the bitmap to text mode conversion I started looking at the kinds of approaches I had discarded earlier because they were so stupid. I ended up writing a couple of converters that use ridiculously huge look-up tables. The tables weigh in at 33 megs and take several hours to brute-force calculate. I'm pretty sure the precalculation could be optimized to be much faster, but I didn't feel like it... and once the precalculation was done, I didn't need to run it again. Much.
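The general shape of the idea is simple, even if the real tables are far bigger and smarter than this. Here's a toy sketch (everything about the key format and the table contents is made up for illustration, not TextFX8's actual layout): quantize each pixel block into a key, and precalculate a table that maps every possible key directly to a character cell.

```cpp
#include <cstdint>
#include <vector>

struct TextCell { uint8_t character, attribute; };

// Toy key: a 2x2 pixel block, each pixel quantized to 4 brightness
// levels -> 4*4*4*4 = 256 possible keys.
unsigned blockKey(const uint8_t px[4])
{
    unsigned key = 0;
    for (int i = 0; i < 4; i++)
        key = key * 4 + (px[i] >> 6); // 0..255 -> 0..3
    return key;
}

// The real tables were brute-forced offline for hours; here we just
// fill in something trivial: a brightness ramp by block average.
std::vector<TextCell> buildToyTable()
{
    static const uint8_t ramp[4] = { ' ', '.', 'o', '#' };
    std::vector<TextCell> table(256);
    for (unsigned key = 0; key < 256; key++)
    {
        unsigned sum = (key >> 6 & 3) + (key >> 4 & 3)
                     + (key >> 2 & 3) + (key & 3);   // 0..12
        table[key] = { ramp[sum / 4], 7 };           // grey attribute
    }
    return table;
}
```

At runtime the conversion is then just `table[blockKey(block)]` per character cell, which is why the approach is fast no matter how expensive the precalculation was.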
33 megs is quite a lot, but it compresses well. At first I used a plain zlib blob, but since I already had a png loader in the program, I dropped the lookup table into a png file instead (png files being zlib-compressed internally). I then ran the resulting png through zopfli for optimal compression, which again took hours, but the final data file, while much bigger than I had hoped, was small enough to fit within the TMDC size limits.
I had done much of this while waiting for Nitro to produce the soundtrack. What I was envisioning was some random raytracey scenes with slow camera pans or some such, but the music I got from Nitro naturally changed all those plans. I also had only a few days left before the deadline, and a lot of other things in my life were demanding attention, so I just had to get it done.
I started by chopping the poem up into phrases and timing them as scrollers on the screen. The speed of the speech varies a bit, so different phrases need to scroll at slightly different speeds. In some cases I had to cut a phrase in two to get the timing to work at all. The result is not perfect, but it's functional.
In retrospect I wonder if a writer effect or even subtitles would have worked better than a scroller. Maybe with a custom font designed to be readable at such a low resolution without taking up half the screen. That would have required much more timing work, though...
I had some scene ideas, which I implemented mostly by writing code that generates the scene, and dropped those in. The scenes I designed before hearing the soundtrack are pretty much the weakest parts of the demo. For the rest, I read the phrases of the poem and tried to think of something to put there. But I had a serious content bottleneck: there was no time to make replacements for the things I wanted to throw out.
I wrote a 3d mesh loader for my raytracer editor, which takes the vertices of the mesh, reduces them to a desired count and replaces them with spheres based on vertex density. This produced a couple of scenes, like the bunny.
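One simple way to do that kind of reduction (this is my guess at the general idea; the editor's actual method may well differ) is to bin the vertices into a coarse grid and emit one sphere per occupied cell, sized by how many vertices landed there:

```cpp
#include <array>
#include <cmath>
#include <map>
#include <tuple>
#include <vector>

struct Sphere { float x, y, z, radius; };

// xyz holds the mesh vertices as x,y,z triplets. Vertices are binned
// into cells of cellSize; each occupied cell becomes one sphere,
// centered on the average vertex, with a radius that grows with the
// vertex density of the cell (the 0.25 scale is an arbitrary choice).
std::vector<Sphere> verticesToSpheres(const std::vector<float> &xyz,
                                      float cellSize)
{
    // cell -> (sum x, sum y, sum z, count)
    std::map<std::tuple<int,int,int>, std::array<float,4>> cells;
    for (size_t i = 0; i + 2 < xyz.size(); i += 3)
    {
        auto key = std::make_tuple(int(std::floor(xyz[i]     / cellSize)),
                                   int(std::floor(xyz[i + 1] / cellSize)),
                                   int(std::floor(xyz[i + 2] / cellSize)));
        auto &c = cells[key];
        c[0] += xyz[i]; c[1] += xyz[i + 1]; c[2] += xyz[i + 2]; c[3] += 1;
    }
    std::vector<Sphere> out;
    for (auto &kv : cells)
    {
        auto &c = kv.second;
        out.push_back({ c[0] / c[3], c[1] / c[3], c[2] / c[3],
                        cellSize * 0.25f * std::sqrt(c[3]) });
    }
    return out;
}
```

The cell size directly controls the "desired number" of spheres: a coarser grid merges more vertices per sphere.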
I've always felt that it's important to have something that people can recognize in a demo. Dancing blobs aren't going to cut it. I'm pretty sure the bunny was the thing most people remembered after seeing this demo.
Decompressing the lookup table, loading the music and building the bounding sphere trees took so long that I start with a progress bar, which is itself rendered with the raytracer, even though it looks "2d". So when the demo starts and the progress bar thingy rotates, there's no scene change; it was the same scene all along, even when it was "2d".
The "wavelength of color" bit is one infinite plane and a bunch of moving colored lights.
The "earth is not flat" bit was pretty obvious: start near a sphere so it looks like a plane, then zoom out.
For the photosynthesis bit I figured something, something, nature, something... I guess I could have made an L-system of boxes or spheres, but I ended up just doing a very simple scene with a plane, a sunset skybox and a reflective obelisk.
The well-lit future part is a reflective corridor of cubes with moving lights. This is one part that should have looked much better than it ended up looking. When it's repeated later on it looks pretty cool at a glance, but when it first plays it goes on for way too long. I should have chopped it in two and added another scene.
The "darkness is not your friend" part is, naturally, just a black screen.
The "pits" part is an object generated with the raytracer editor by importing a 3d mesh. I wanted something grim here, so I downloaded a bunch of human skull meshes, and this is what came out of one of them. It definitely doesn't look like a skull, but it looks scary enough.
The light speed part was supposed to be a city built from boxes, with the sun rising and setting, but it doesn't quite work. This is also one of the scenes I had planned ahead of time. It would have needed some ambient light (which I never implemented) and maybe a rotating skybox (ditto).
For the footloose scene I needed speed, so I just made a scene with a bunch of randomly placed spheres and spun it. There's also a skybox there to add more detail.
The "Empty as we are" scene is the Stanford bunny, which I mentioned earlier.
The next, and last proper, scene is the footloose scene again, with a different camera run.
After that the soundtrack goes into a phase where it replays bits of the poem, so I restructured the whole demo so that any frame of the demo so far can be rendered in a random-access way. This let me more or less rewind the demo (showing some bits several times, in scrambled order) while cycling through a bunch of different bitmap to text mode converters. That fills most of the remaining time rather nicely, even if it's a bit lazy.
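The restructuring boils down to making every scene render purely from a timestamp. A sketch of the shape this takes (illustrative names, not the demo's actual code):

```cpp
#include <vector>

// One scene's slot on the demo timeline. As long as render() is a
// pure function of its timestamp (no hidden per-frame state), any
// frame of the demo can be re-rendered later, in any order.
struct SceneSlot
{
    float start, end;                // seconds on the demo timeline
    void (*render)(float localTime); // draws the scene at a local time
};

// Render the demo frame at absolute time t by finding the scene that
// owns that timestamp. The rewind part can now feed in scrambled
// timestamps and swap converters per frame.
bool renderFrame(const std::vector<SceneSlot> &timeline, float t)
{
    for (const SceneSlot &s : timeline)
        if (t >= s.start && t < s.end)
        {
            s.render(t - s.start);
            return true;
        }
    return false; // no scene owns this timestamp
}
```

With this in place, the "scrambled replay" section is just a list of (when to show it, which past timestamp to render) pairs.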
The credits scene is the one I wrote to test the raytracer; a bunch of spheres, a reflective cube and a gradient skybox.
Comments, questions, etc. appreciated.