I Never Get Tired of Learning Game Dev Tricks and Hacks


#1

I guess it’s partially because I dabble a teeny bit in game development, but I really, really love hearing little tricks that went into the making of my favorite games. Far from spoiling the magic, learning how, say, the developers on BioShock rigged a bunch of lighting systems to make Fort Frolic look especially theatrical will always make me appreciate the craft just a little more.

So this rad thread from Vlambeer developer Rami Ismail made my heart go pitter-patter with nerdy joy yesterday. In it, he asked devs to share their favorite little “hacks” or solutions to development issues that they’ve used in their own games.

This is how I know the folks at Polytron implemented that gorgeous day/night cycle in Fez by sampling one little texture:

Or that, in one Adventure Time game, Jake the Dog was made out of repurposed work from a racing track editor from the dev’s previous game:

And how the folks at Telltale made the rearview mirrors work in The Walking Dead Season 1 (disclosure! Jake here is a friend and former podcast co-host of mine):

It goes on and on, and let me tell you, there is GOLD in that thread.

As our own Cameron Kunzelman noted in a column not long ago, every game is, “like all games are, put together with duct tape and glue,” but I live for finding out just how artful (or utterly goofy) the solutions are that hold together some semblance of a coherent system, world, or character.

What about you, dear readers? Do you have a favorite story about finding out how a favorite game pulls off the trick? Sound off in the forums.


This is a companion discussion topic for the original entry at https://waypoint.vice.com/en_us/article/bjp5mq/game-development-tricks-hacks-fez-walking-dead

#2

A lot of good work happens behind explosion animations.

On the 4th boss of Jamestown (Last Express), we swapped out the train tracks and background of the level for looping-treadmill-versions of the same when the boss speeds through the wall at the end of the level behind a giant explosion.
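For anyone curious what a “looping treadmill” swap boils down to, here’s a tiny Python sketch of the general idea (names and numbers are made up, this is not the actual Jamestown code): one short strip of track gets drawn repeatedly with a wrapping scroll offset, so it looks like the level is whizzing past forever.

```python
# Minimal sketch of the "looping treadmill" idea: instead of real level
# geometry rushing past, one short strip of track is drawn repeatedly with a
# wrapping scroll offset. Names and numbers are made up, not Jamestown's code.

TILE_WIDTH = 256       # width of one repeating track segment, in pixels
SCREEN_WIDTH = 1024

def treadmill_offsets(scroll_x, tile_width=TILE_WIDTH, screen_width=SCREEN_WIDTH):
    """Return the x positions at which to draw copies of the track tile."""
    start = -(scroll_x % tile_width)   # wrap the scroll into a single tile
    xs = []
    x = start
    while x < screen_width:
        xs.append(x)
        x += tile_width
    return xs

# Keep increasing scroll_x as the boss "races" along; the same tile is reused forever.
print(treadmill_offsets(0))     # [0, 256, 512, 768]
print(treadmill_offsets(300))   # [-44, 212, 468, 724, 980]
```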

Also, in order to make the transition in the middle of the last boss fight work, I assembled a few different static arrangements of Mike’s boss art from both phases. I then used the thunder sound from the 2nd stage and a fade-to-white to make “lightning flashes” between those dioramas to give the impression of change without additional animation budget.
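Roughly, that trick plays out as a timeline like the one below (an illustrative Python sketch with made-up names and timings, not the real code): hold a static arrangement, play the thunder, fade to white, swap the art, fade back in.

```python
# Illustrative timeline of the "lightning flash" transition: hold a static
# arrangement, play the thunder sound, fade to white, swap the art, fade back.
# Names, timings, and the diorama list are hypothetical.

DIORAMAS = ["phase1_pose_a", "phase1_pose_b", "phase2_pose_a"]
FLASH_DURATION = 0.4   # seconds for each fade (made-up number)

def flash_timeline(dioramas, flash_duration=FLASH_DURATION):
    """Build (time, action) events that fake a transformation with static art."""
    events, t = [], 0.0
    for current, nxt in zip(dioramas, dioramas[1:]):
        events.append((t, f"show {current}"))
        t += 2.0                                          # hold the pose for a beat
        events.append((t, "play thunder sfx, start fade to white"))
        t += flash_duration
        events.append((t, f"swap art to {nxt}, start fade back from white"))
        t += flash_duration
    events.append((t, f"show {dioramas[-1]}"))
    return events

for time, action in flash_timeline(DIORAMAS):
    print(f"{time:4.1f}s  {action}")
```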


#3

Also, the traitor prince AND the last two bosses all contain art assets salvaged from a boss we ended up having to scrap.


#4

What I love is that it’s hacks all the way down. Yes, the stand-out stories are trains used as hats and other tales of a lack of flexibility (unable to rapidly add a robust model-movement system to anything not expecting a humanoid payload*) meeting extreme flexibility (using the train model as a hat actually worked!), but even the most mundane thing has a rich history of refined hacks. Real-time rendering isn’t the story of doing the mathematically correct thing; it’s the story of doing extremely cheap calculations that somewhat approximate the desired output.

Even the most basic thing, rendering a textured surface, has its own rich history of tweaked hacks.

You start out with those really early (textured) 3D engines grabbing the single texture value nearest to each screen pixel. Everything in the distance flickers, because that nearest texel jumps around within textures that are really best viewed much closer. You can’t afford to sample and average several values for every pixel of every frame, so the next hack appears: pre-calculate the averages (mipmapping). For a bit of data bloat (storing progressively smaller versions of every texture), you can just look up the nearest value in a suitably scaled-down copy of the texture and use that; it’s already the average of the full-res texels it covers. Everyone rejoices, the flickering mostly ends, and stuff still basically looks like Quake 1/PS1.
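If you’ve never seen it spelled out, here’s a toy Python sketch of building that mipmap chain: each level just averages 2x2 blocks of the level above, so a far-away pixel can read one pre-averaged value instead of averaging lots of full-res texels every frame. (Purely illustrative, nothing like a real renderer’s code.)

```python
# Build a mipmap chain for a square greyscale "texture" (side a power of two):
# each level halves the previous one by averaging 2x2 blocks of texels.

def build_mipmaps(texture):
    """Return [full-res, half-res, quarter-res, ...] down to a single texel."""
    levels = [texture]
    current = texture
    while len(current) > 1:
        size = len(current) // 2
        smaller = [
            [
                (current[2*y][2*x] + current[2*y][2*x+1] +
                 current[2*y+1][2*x] + current[2*y+1][2*x+1]) / 4.0
                for x in range(size)
            ]
            for y in range(size)
        ]
        levels.append(smaller)
        current = smaller
    return levels

tex = [[0, 255, 0, 255],
       [255, 0, 255, 0],
       [0, 255, 0, 255],
       [255, 0, 255, 0]]
for i, level in enumerate(build_mipmaps(tex)):
    print(f"level {i}: {level}")
```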

Then the next hack arrives: bilinear filtering. Rather than sampling one value from the texture, why not use the pixel centre to find the four closest texels and interpolate between them (instead of just reading the closest one)? Yes, you’re reading four times as many texture values, but hardware acceleration is here and bandwidth is growing quite quickly (for once). As things look better (definitely softer), a limitation that was somewhat obscured by the blocky mess of nearest filtering becomes evident: there’s a visible line where you switch between mipmap levels. You can only sample from one size or the other, so at a certain distance a textured surface can sit partly in one level’s zone and partly in another’s, or the player moves the camera and watches an object jump from one zone to the next as its texture suddenly gets more detailed. The clever trick (those pre-computed smaller versions of textures) creates a new artefact.
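The core of bilinear filtering is just a weighted blend of the four surrounding texels; a toy Python version (illustrative only, real hardware does this per pixel, per frame) looks like this:

```python
# Sample a 2D "texture" (list of lists) at fractional texel coordinates (u, v)
# by blending the four surrounding texels, instead of snapping to the nearest one.

def bilinear_sample(texture, u, v):
    h, w = len(texture), len(texture[0])
    x0, y0 = int(u), int(v)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = u - x0, v - y0                 # fractional position within the cell
    top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bottom = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

tex = [[0, 100],
       [100, 200]]
print(bilinear_sample(tex, 0.5, 0.5))   # 100.0: halfway between all four texels
```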

So how do you fix that? Trilinear filtering to the rescue! This is exactly the same as before, except you sample eight values: four from the mipmap a bit smaller than ideal and four from the one a bit bigger. Bilinear-filter both of those and interpolate between the results. Everything now transitions smoothly and the lines are gone! Job done, right?
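And the trilinear step on top of that is nothing more than one extra blend between the two bilinear results, weighted by how far the pixel’s ideal detail level sits between the two mip levels (toy Python, illustrative only):

```python
# Trilinear = two bilinear samples, one from each neighbouring mipmap level,
# blended by how far the pixel's ideal detail level sits between those levels.
# The two inputs here stand in for the four-texel bilinear lookups above.

def trilinear_blend(sample_finer, sample_coarser, level_fraction):
    """level_fraction: 0.0 = exactly the finer mip, 1.0 = exactly the coarser mip."""
    return sample_finer * (1 - level_fraction) + sample_coarser * level_fraction

# A surface whose ideal detail sits 30% of the way toward the coarser mip:
print(trilinear_blend(sample_finer=120.0, sample_coarser=90.0, level_fraction=0.3))  # 111.0
```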

This works for all triangles that face flat-on to the camera; for those, trilinear filtering is all you need. But then you’re playing your favourite driving game and that road texture is still showing lines where the detail seems to jump. What’s going on? The answer is that the texture density in one direction isn’t the same as in the other. It’s obvious when you think about it: a road going off into the distance repeats its texture many, many times along its length but not much side to side, despite possibly covering more of the screen’s width than its height. Picking a suitable pixel density for the mipmap (to avoid that flickering we set out to fix at the start) now pulls in different directions for the two dimensions of the texture. So the next hack is anisotropic filtering. Don’t just shrink textures equally in both dimensions for the mipmaps; also make versions with, say, 1/8th the height and half the width (and vice versa). Then you’ve got loads of extra mipmaps that are a much better fit for the road. Sample from the ones nearest to the actual orientation of the triangle and interpolate again. Finally, the job is properly done and everyone can go home.
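To see why no single square mip level can win here, you can work out the “ideal” level per axis; the two answers disagree badly for a road seen edge-on. A toy Python sketch with made-up numbers (the per-axis log2-of-footprint idea is standard, though real hardware has its own ways of resolving the mismatch):

```python
# Why anisotropic filtering exists: a road seen edge-on compresses the texture
# far more along one screen axis than the other, so the "right" mip level
# differs per axis. Numbers below are illustrative only.
import math

def mip_levels(texels_per_pixel_u, texels_per_pixel_v):
    """Ideal mip level per axis: log2 of how many texels one screen pixel covers."""
    lod_u = max(0.0, math.log2(max(texels_per_pixel_u, 1e-6)))
    lod_v = max(0.0, math.log2(max(texels_per_pixel_v, 1e-6)))
    return lod_u, lod_v

# A road receding into the distance: 16 texels per pixel along the road,
# only 2 texels per pixel across it.
lod_u, lod_v = mip_levels(16.0, 2.0)
print(lod_u, lod_v)   # 4.0 vs 1.0: no single square mip level fits both axes
# Isotropic trilinear must pick one level (usually the blurrier of the two);
# anisotropic filtering resolves it per axis instead, via non-square mip
# variants or several samples spread along the squashed direction.
```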

As this was all done in hardware, there was actually a lot more to it, and if you’ve ever seen a flower test (tint each mipmap level a different colour so it’s really clear where they transition, then render a textured tube; anything that’s not a clean ring is a hack/optimisation for “difficult” angles that will look worse than it should), then you probably vaguely remember the era before the hardware/driver hacks were quite right. Nowadays (roughly the 2006 era onwards), it’s basically solved.
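If you want to play with that debug trick yourself, it’s really just “pick a flat colour by mip level instead of sampling the texture” (toy Python, purely illustrative, hypothetical names):

```python
# Tint each mip level a flat colour so the transition bands between levels are
# obvious on screen. Hypothetical colour table for illustration.

MIP_DEBUG_COLOURS = ["white", "red", "green", "blue", "yellow", "magenta", "cyan"]

def debug_mip_colour(level):
    """Pick the tint for the (possibly fractional) mip level a pixel would sample."""
    index = min(int(round(level)), len(MIP_DEBUG_COLOURS) - 1)
    return MIP_DEBUG_COLOURS[index]

# Render a tube with these tints instead of the real texture and you should see
# clean coloured rings; anything else is where the hardware is cutting corners.
for lod in (0.0, 0.4, 1.6, 3.2):
    print(lod, debug_mip_colour(lod))
```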

And that’s just a really compressed history of the hacks involved in going from a texture on a triangle to painting the pixels that triangle covers on the final screen. Everything in real-time rendering has an incredibly detailed story of hacks, of further hacks that cover up the worst artefact left by the last hack, and of everything desperately avoiding as much work as possible (because genuinely integrating over time and space to calculate the actual ground truth is just so expensive).

* Hey, if you like thinking about how these things get fixed as much as how they arise, you should totally look up how ECS (Entity Component System) architecture is designed to fix all these hierarchical component (OO) nightmares by totally separating component systems.
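For the curious, here’s a bare-bones Python sketch of the ECS idea (hypothetical names, not from any real engine): entities are just ids, components are plain data tables keyed by entity, and systems run over whichever entities happen to have the right combination of components, with no inheritance hierarchy in sight.

```python
# Tiny ECS sketch: entities are ids, components are plain data tables, and
# systems iterate over entities that have the components they care about.

positions = {}    # entity id -> (x, y)
velocities = {}   # entity id -> (dx, dy)
hats = {}         # entity id -> model name (yes, a train can be a hat)

def spawn(entity_id, pos=None, vel=None, hat=None):
    if pos is not None: positions[entity_id] = pos
    if vel is not None: velocities[entity_id] = vel
    if hat is not None: hats[entity_id] = hat

def movement_system(dt):
    """Runs on every entity that has both a position and a velocity."""
    for eid in positions.keys() & velocities.keys():
        x, y = positions[eid]
        dx, dy = velocities[eid]
        positions[eid] = (x + dx * dt, y + dy * dt)

spawn(1, pos=(0.0, 0.0), vel=(10.0, 0.0))
spawn(2, pos=(5.0, 5.0), hat="monorail_car")   # no velocity: movement ignores it
movement_system(0.5)
print(positions)   # {1: (5.0, 0.0), 2: (5.0, 5.0)}
```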