What I love is that it’s hacks all the way down. Yes, the stand-out stories are trains used as hats and other tales of rigidity (unable to rapidly add a robust model-movement system that wasn’t expecting a humanoid payload*) meeting extreme flexibility (using the train model as a hat actually worked!), but even the most mundane thing is a rich history of refining a hack. Real-time rendering isn’t the story of doing the mathematically correct thing; it’s the story of doing extremely cheap calculations that somewhat approximate the desired output.
Even the most basic thing, rendering a textured surface, is a rich history of tweaking hacks.
You start out with those really early (textured) 3D engines looking up the texture value nearest to each screen pixel. Everything in the distance flickers as the nearest texel jumps around textures that are really best viewed much closer. We can’t spend the time sampling several values per pixel every frame to get the average, but then the next hack appears: pre-calculate the averages (mipmapping). For a bit of data bloat (storing the smaller versions of every texture), you can just look up the nearest texture value in the suitably scaled-down version of the texture and use that. It’s already the average of the texture values it covers in the full-res version. Everyone rejoices, the flickering mostly ends, stuff still basically looks like Quake 1/PS1.
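The core of that pre-calculation can be sketched in a few lines of Python (the names `build_mips` and `pick_level` are mine for illustration, not any real graphics API):

```python
import math

def build_mips(texture):
    """Build a mip chain from a square, power-of-two greyscale texture
    (a 2D list of values) by repeatedly averaging 2x2 blocks of texels."""
    mips = [texture]
    while len(mips[-1]) > 1:
        prev = mips[-1]
        half = len(prev) // 2
        mips.append([
            [(prev[2 * y][2 * x] + prev[2 * y][2 * x + 1]
              + prev[2 * y + 1][2 * x] + prev[2 * y + 1][2 * x + 1]) / 4.0
             for x in range(half)]
            for y in range(half)
        ])
    return mips

def pick_level(texels_per_pixel):
    """If one screen pixel covers roughly n texels in each direction,
    the mip that has been averaged down by that factor is level log2(n)."""
    return max(0.0, math.log2(max(texels_per_pixel, 1.0)))
```

The data bloat is modest: the whole chain adds only a third on top of the full-res texture (1/4 + 1/16 + …).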
Then the next hack arrives: bilinear filtering. Rather than sample one value from the texture, why not use the pixel centre to work out the four closest texels and interpolate between them (vs just reading the value of the closest one)? Yes, you’re reading four times as many texture values, but hardware acceleration is here and bandwidth is growing quite quickly (for once). As things start looking better (definitely softer), a limitation that was somewhat obscured by the blocky mess of nearest filtering becomes evident: where you move between mipmap levels, there is a visible line. You either sample from one size or the other, so a textured surface can be partially in the zone of one level and partially in the zone of another, or the player moves the camera and notices the object jump from one zone to the next as the texture suddenly gets more detailed. The clever trick (using the pre-computed smaller versions of textures) creates a new artefact.
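The four-texel blend itself is simple enough to sketch (a greyscale, edge-clamped toy version; real hardware also handles wrap modes, texel-centre offsets and multi-channel colours):

```python
def bilinear(tex, u, v):
    """Sample a 2D greyscale texture (list of rows) at continuous texel
    coordinates (u, v), weighting the four nearest texels by how close
    (u, v) sits to each."""
    h, w = len(tex), len(tex[0])
    x0, y0 = min(int(u), w - 1), min(int(v), h - 1)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)  # clamp at the edge
    fx, fy = u - x0, v - y0   # fractional position inside the texel cell
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bottom = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bottom * fy
```

Two lerps across, one lerp down: that asymmetry in description is cosmetic, the result is the same either way round.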
So how do you fix that? Trilinear filtering to the rescue! This is exactly the same as before except you sample 8 values from the textures: four from the mipmap a bit smaller than ideal, four from the mipmap a bit bigger. Bilinearly filter both of those, then interpolate between the two results. Everything now smoothly transitions and the lines are gone! Job done, right?
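As a sketch, trilinear is just the bilinear blend run twice plus one more lerp (the bilinear helper is repeated here so the snippet stands alone; again, illustrative names, not a real API):

```python
def bilinear(tex, u, v):
    """Four-texel blend at continuous texel coords (u, v), edge-clamped."""
    h, w = len(tex), len(tex[0])
    x0, y0 = min(int(u), w - 1), min(int(v), h - 1)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = u - x0, v - y0
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bottom = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

def trilinear(mips, u, v, level):
    """Bilinear-sample the two mip levels bracketing a fractional mip
    level (texel coords shrink by 2 per level), then blend the two
    results by the fractional part of the level."""
    lo = max(0, min(int(level), len(mips) - 1))
    hi = min(lo + 1, len(mips) - 1)
    f = level - lo
    a = bilinear(mips[lo], u / 2 ** lo, v / 2 ** lo)
    b = bilinear(mips[hi], u / 2 ** hi, v / 2 ** hi)
    return a * (1 - f) + b * f
```

As `level` varies smoothly across a surface, the output now varies smoothly too, which is exactly what kills the visible lines.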
This has worked for all triangles that face flat on to the camera. Trilinear filtering is all you need. But then you’re playing your favourite driving game and that road texture is still showing those lines where the detail seems to jump. What’s going on? The answer is that the density of the texture in one direction isn’t the same as in the other. It’s obvious when you think about it: a road going off into the distance repeats its texture many, many times front to back but not much side to side, despite possibly covering more width than height on the flat screen. The need to pick a suitable texel density for the mipmap (to avoid that flickering we wanted to kill at the start) conflicts between the two dimensions of the texture. So the next hack is anisotropic filtering. Don’t just make textures smaller in both dimensions for the mipmaps; make them with things like 1/8th the height and half the width (and vice versa). Then you’ve got loads of extra mipmaps that are a much better fit for the road. Sample from the ones nearest to the actual orientation of the triangle and interpolate again. Finally, the job is properly done and everyone can go home.
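A common formulation of the same idea (taking extra samples along the stretched axis, rather than storing the non-square mip variants described above) can be sketched like this; the function name and exact heuristics are mine, and real hardware derives the footprint from screen-space derivatives:

```python
import math

def aniso_params(texels_x, texels_y, max_aniso=16):
    """Given how many texels one screen pixel spans along each texture
    axis, pick the mip level from the *smaller* footprint (keeping
    detail in that direction), then cover the longer footprint by
    taking extra samples spread along it."""
    minor = max(min(texels_x, texels_y), 1.0)
    major = max(texels_x, texels_y, 1.0)
    n_samples = min(math.ceil(major / minor), max_aniso)
    level = math.log2(minor)  # an isotropic pick would use the major axis
    return level, n_samples
```

For the road case (say 8 texels per pixel lengthways, 1 sideways) this keeps the full-detail mip and pays for it with 8 samples, instead of blurring both directions down to the worst case.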
As this was all done in hardware, there was actually a lot more to it, and if you’ve ever seen a flower test (tint each mipmap level a different colour so it’s really clear where they transition, then render a textured tube; anything that’s not a clean ring is hacks/optimisations for “difficult” angles that will look worse than they should), then you probably vaguely remember the era before the hardware/driver hacks were quite right. Nowadays (roughly 2006 onwards), it’s basically solved.
And that’s just a really compressed history of the hacks involved in going from a texture on a triangle to painting a pixel covered by that triangle on the final screen. Everything in real-time rendering has an incredibly detailed story of hacks, of hacks that cover up the worst artefact of the previous hack, and of everything desperately trying to do as little work as possible (because genuinely integrating over time and space to calculate the actual ground truth is just so expensive).
* Hey, if you like thinking about how these things get fixed as much as how they arise, you should totally look up how ECS (Entity Component System) architecture is designed to fix all these hierarchical-component (OO) nightmares by completely separating components from the systems that process them.