Assets and technology (image intensive)


So there has been a lot of discussion of the cost of game development recently. Some of it has conflated the asset arms race (artists building ever more detailed, bespoke work for each game so that every inch of the world looks like someone spent months crafting it) with technological progress (both the advancement of rendering techniques and the increase in computational power of typical consumer-tier processors used for real-time rendering). The concern is that an argument of this shape is forming: “games look better and cost more -> some are telling us games are now too expensive to make -> we must demand games look worse for them to be sustainable”. But what does looking better actually mean?

Is this the sort of visuals we would associate with cheap or expensive games (assuming it was a screenshot from a game, rather than the offline rendered bullshot that photo modes can create)?

I would suggest that this seven-year-old screenshot is actually detailed in all the ways we expect today (maybe not class-leading, but not so far behind that you’d consider it visually outdated), despite being generated from assets created for an Xbox 360, well before the era of current demands. Because it’s offline rendered, the fidelity comes from good-enough assets combined with great rendering technology. The final result: what looks to be a very expensive shot. Photo modes and tweaked real-time photography collections like Dead End Thrills are a great way of seeing the potential of assets from games throughout the ages.

In general we should pay more attention to which costs only need to be paid once. When real-time rendering technology advances, that’s generally quickly shared (SIGGRAPH, white papers, etc) and often partially financed by the silicon companies looking for something to justify their new, more powerful chips. It is by no means free or easy to implement but it’s relatively inexpensive (especially with some of the offers around for engines that do much of the heavy lifting).

As people who play games, we also have to pay for it by buying new hardware. Luckily this is generally a fixed expense we only need to pay every few years to stay current. There is a volume market at $150-300 which hardware tries to fit into (at least for most of a generation), which is relatively cheap compared to the price of the new games that run on it ($60, now turning into $80-120 with DLC and Gold/Complete/Ultimate editions). It’s worth noting that devices offering significantly less performance than their competitors are not priced radically cheaper, so there are diminishing returns from slightly cutting hardware costs. In terms of computational power, each generation offers radically more (thanks in part to Moore’s law) and expands the range of previously offline-only techniques that can be faithfully approximated under real-time constraints. Basically, there is a window of affordable performance that’s always moving, so lowering hardware performance is not a massive saving. What it doesn’t automatically do is pay for higher-fidelity assets.

There have been some technological attempts to solve the asset arms race. Procedural generation has always been pointed to as a potential answer and it is already widely used (just not as the only technique, which is where you see discussion focus on things like No Man’s Sky). If you want trees for your game then you may well use SpeedTree (the original famous proc-gen middleware): feed in some parameters and generate huge forests of assets, then look through them and pick out the ones you want. It isn’t free of labour (artists still select and tweak the algorithm-generated assets) but it’s often cheaper. When Bethesda built Oblivion, they started out by procedurally generating the world and then editing it, rather than starting artists in a white box.
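The workflow above (parameters in, a forest of candidate assets out, artists curating the results) can be sketched in a few lines. This is a toy illustration, not SpeedTree’s actual API — the `Tree` fields, the parameter ranges, and the selection criteria are all invented for the example; the real point is that a seed plus a parameter set deterministically produces an asset, so curation is just keeping a list of seeds.

```python
import random
from dataclasses import dataclass

@dataclass
class Tree:
    # Hypothetical asset description; real middleware emits meshes/textures.
    seed: int
    height: float
    branch_count: int
    trunk_radius: float

def generate_tree(seed, min_height=4.0, max_height=12.0):
    """Deterministically derive one tree asset from a seed and parameter ranges."""
    rng = random.Random(seed)  # same seed -> same tree, so picks are reproducible
    height = rng.uniform(min_height, max_height)
    return Tree(
        seed=seed,
        height=height,
        branch_count=rng.randint(3, 8) + int(height),  # taller trees branch more
        trunk_radius=height * rng.uniform(0.03, 0.06),
    )

# Generate a "forest" of candidates; an artist keeps only the seeds they like.
candidates = [generate_tree(seed) for seed in range(1000)]
keepers = [t for t in candidates if 6.0 < t.height < 9.0 and t.branch_count >= 10]
```

The labour the article mentions lives in that last line: someone still has to decide which generated assets are good enough, but they are choosing rather than modelling from scratch.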

More recently, physically-based rendering has arrived as one of those technological advancements. Rather than building assets for every different object and manually painting them, surfaces are defined by their material type and respond to the lighting conditions in real time (via their shader computations), which can reduce labour costs while increasing fidelity. This increases reuse potential (realistic metal looks very similar in many games, and as long as you paint it on with a quick stroke then PBR can make it look right for wherever it is in the scene) at the cost of some reskilling.
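The core idea — a material is a handful of numbers, and the lighting does the rest — can be shown with a deliberately simplified shading function. This is not a full Cook-Torrance BRDF or any engine’s actual shader, just a sketch of the metallic-roughness parameterisation most PBR pipelines share; the “iron” albedo values and the Blinn-Phong-style specular lobe are simplifications for illustration.

```python
import math

def shade(albedo, metallic, roughness, n_dot_l, n_dot_h):
    """Toy single-light shade of one surface point, metallic-roughness style.
    Returns an RGB triple. The same material parameters respond to whatever
    light geometry (n_dot_l, n_dot_h) the scene provides."""
    n_dot_l = max(n_dot_l, 0.0)
    # Metals have no diffuse response; dielectrics scatter their albedo.
    diffuse = tuple(c * (1.0 - metallic) * n_dot_l / math.pi for c in albedo)
    # Specular lobe that tightens as roughness drops (simplified).
    shininess = 2.0 / max(roughness * roughness, 1e-4) - 2.0
    spec = (max(n_dot_h, 0.0) ** shininess) * n_dot_l
    # Metals tint their reflection with the albedo; dielectrics reflect ~4% white.
    f0 = tuple(0.04 * (1.0 - metallic) + c * metallic for c in albedo)
    return tuple(d + f * spec for d, f in zip(diffuse, f0))

# The same "iron" material under two different lighting conditions:
iron = dict(albedo=(0.56, 0.57, 0.58), metallic=1.0, roughness=0.3)
lit = shade(**iron, n_dot_l=0.9, n_dot_h=0.95)
grazing = shade(**iron, n_dot_l=0.1, n_dot_h=0.2)
```

This is what makes the reuse argument work: the artist authors `iron` once, and every scene’s lights produce the correct appearance from it, instead of someone hand-painting highlights per asset.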

This is the sort of progress that can help to push back against increasing costs (which are ultimately controlled by project scoping combined with the intelligent use of many of these techniques to make worked hours go much further) while fidelity also increases. The initial choice between sustainable development costs and expensive-looking games is far from as clear-cut as it can sometimes sound. When we look at the many photo modes that have come to games, many of which produce extremely clean and detailed versions of what the game actually looks like in action, we should remember that this is already the potential visual detail of the current game assets. We’re just a bit of hardware performance and a few real-time techniques away from realising it.

Share your games writing/criticism!

This is an interesting argument and one that I don’t think I’ve heard before. I think it’s a strong case for the examples shown, but those examples are conspicuously heavy on cars and light on characters and small interactive props.

Building a small set of high-res models isn’t difficult if they don’t require extensive animation or scripting. I imagine building a full scale simulation of a city is substantially more difficult and that’s what’s now expected of AAA games.

I think this shows that games could achieve visually spectacular scenes fairly easily, but it could still be tough to scale that up to a full game: that visual fidelity sets expectations for character animation and interactivity that a small-budget game would struggle to meet.


There’s definitely some truth there. Adding modern lighting and atmospheric effects to older assets can make really good looking scenes without spiraling costs. I guess my question is: is that enough? Can games with minimal new art assets compete with games with huge teams creating new art?

I agree with the point that games can continue to improve graphically without spiraling costs. If team sizes stayed constant, games would still keep looking better. The problem is that games are trying to push faster than that. Asking a game company to re-use art assets isn’t asking for graphics to regress to how they looked when those assets were first used, but limiting the amount of money spent on graphics is asking for graphics that look worse than they possibly could.

Those Dead End Thrills screenshots are really lovely, but that’s an artist looking for pretty shots. Games need to contend with players viewing every asset in them with a critical eye and comparing them to other games on the market. A few bad looking animations among hundreds can be seriously detrimental (Mass Effect: Andromeda has become the go-to example for this, but I think it’s universally true that the visual quality of games is judged by their weakest elements).

To be clear, I’m fine with slow but sustainable improvements in graphics. Then again, I’m also the type of person who actively wants games to sacrifice graphics for sustainability and mechanical complexity (and so they run on computers I own). I don’t know if a game that looks better than a previous game but not as good as other current release titles would do well with the type of player who buys the latest AAA games.


I’m fully on board with the idea that games will continue to look better regardless of what the biggest companies in the industry do and that clever improvements in tools and engines will allow smaller teams in the future to surpass what the largest AAA teams can produce today. I’d like to believe that improvements in tech will allow indie developers of the future to surpass mismanaged AAA projects of the future, but that strikes me as hoping for a technical fix to a social problem. As long as exploitative labour practices are more productive than the alternative, we have a problem. That problem isn’t solved by valuing mechanics over graphics, but it also isn’t solved by better ways of rendering assets.

My “sacrifice graphics” wording was imprecise. I don’t believe that the industry should turn off advanced rendering settings in modern engines or devalue artists. What I do believe is that lower representational fidelity can let games represent more complex systems for the same development cost. It’s not a coincidence that Dwarf Fortress looks the way it does or that prettier games with larger teams haven’t replicated its systemic complexity. Not needing detailed models of everything allows more things to be modeled. Dwarf Fortress is an extreme example, but I think the same idea holds in more conventional games. The more time is required for details, the less time is available for broader features and interactions between them. Trying to meet a specific artistic vision with a slashed art budget isn’t going to make the result better or the process more sustainable; changing the vision so that it doesn’t depend on having so many assets that they can only be created by exploiting people can make the process more sustainable and the result better.