So there has been a lot of discussion of the cost of game development recently. Sometimes it conflates the asset arms race (artists building ever more detailed, bespoke work for each game, so that every inch of the world looks like someone spent months crafting it) with technological progress (both the advancement of rendering techniques and the increase in computational power of typical consumer-tier processors used for real-time rendering). The concern is that an argument of this shape is forming: “games look better and cost more -> some are telling us games are now too expensive to make -> we must demand games look worse for them to be sustainable”. But what does looking better mean?
Is this the sort of visuals we would associate with cheap or expensive games (assuming it was a screenshot from a game, rather than the offline rendered bullshot that photo modes can create)?
I would suggest that this seven-year-old screenshot is actually detailed in all the ways we expect today (maybe not class-leading, but not so far behind that you’d consider it visually outdated), despite being generated from assets created for an Xbox 360, well before the era of current demands. Because it’s offline rendered, the fidelity came from good-enough assets combined with great rendering technology. The final result: what looks to be a very expensive shot. Photo modes and tweaked real-time photography collections like DET are a great way of seeing the potential of assets from games throughout the ages.
In general we should pay more attention to costs that only need to be paid once. When real-time rendering technology advances, it’s generally shared quickly (SIGGRAPH talks, white papers, etc.) and often partially financed by the silicon companies looking for something to justify their new, more powerful chips. It is by no means free or easy to implement, but it’s relatively inexpensive (especially with some of the offers around for engines that do much of the heavy lifting).
As people who play games, we also have to pay for it by buying new hardware. Luckily this is an expense we only need to pay every few years to stay current. There is a volume market at $150-300 which hardware tries to fit into (at least for most of a generation), and that is relatively cheap compared to the price of the new games that run on it ($60, now turning into $80-120 with DLC and Gold/Complete/Ultimate editions). It’s worth noting that devices offering significantly less performance than their competitors are not priced radically cheaper, so there are diminishing returns from slightly cutting hardware costs. In terms of computational power, each generation offers radically more (thanks in part to Moore’s law) and expands the range of previously offline-only techniques that can be faithfully approximated under real-time constraints. Basically, there is a window of affordable performance that is always moving, so lowering hardware performance is not a massive saving. What it doesn’t automatically do is pay for higher-fidelity assets.
There have been some technological attempts to solve the asset arms race. Procedural generation has always been pointed to as a potential answer, and it is already widely used (just not as the only technique, which is where discussion tends to focus for games like No Man’s Sky). If you want trees for your game then you may well use SpeedTree (the original famous proc-gen middleware): feed in some parameters and generate huge forests of assets, then look through them and pick out the ones you want. It isn’t free of labour (artists still select and tweak the algorithm-generated assets) but it’s often cheaper. When Bethesda built Oblivion, they started by procedurally generating the world and then editing it by hand, rather than starting artists in a white box.
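To make that workflow concrete, here is a minimal Python sketch of the parameter-driven idea. The knobs and ranges here are hypothetical, standing in for the many sliders a SpeedTree-style tool actually exposes: a seed plus a handful of parameters deterministically yields an asset, so you can generate a hundred candidates cheaply and let an artist curate the results.

```python
import random

def generate_tree(seed, trunk_height_range=(4.0, 8.0), branch_count_range=(3, 7)):
    """Generate one tree's parameters deterministically from a seed.

    The parameter names and ranges are made up for illustration; real
    middleware exposes far more knobs. The point is that the same seed
    always regenerates the same asset.
    """
    rng = random.Random(seed)
    return {
        "trunk_height": rng.uniform(*trunk_height_range),
        "branch_count": rng.randint(*branch_count_range),
        "lean_degrees": rng.uniform(-5.0, 5.0),
    }

# Generate a "forest" of candidate assets; an artist would then browse
# these and keep only the ones that fit the scene.
forest = [generate_tree(seed) for seed in range(100)]
keepers = [t for t in forest if t["branch_count"] >= 5]
```

The labour shifts from modelling every tree to tuning the generator and curating its output, which is exactly the cheaper-but-not-free trade-off described above.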
More recently, physically-based rendering has arrived as one of those technological advancements. Rather than building assets for every different object and manually painting them, surfaces are defined by their material type and take their lighting conditions from the scene at runtime (via shader computations), which can reduce labour costs while increasing fidelity. This increases reuse potential (realistic metal looks very similar in many games, and as long as you paint it on with a quick stroke, PBR can make it look right for wherever it sits in the scene) at the cost of some reskilling.
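As a rough illustration of the idea (a toy, single-channel sketch in Python, not any engine’s actual BRDF), the authored material is just a few numbers (albedo, metallic, roughness) and the final look is computed from the lighting at runtime, so the same painted-on material reads correctly in very different scenes:

```python
def schlick_fresnel(f0, cos_theta):
    """Schlick's approximation to the Fresnel reflectance."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def shade(albedo, metallic, roughness, n_dot_l, n_dot_h):
    """Very simplified single-channel metallic-workflow shading.

    albedo/metallic/roughness are the authored material parameters;
    n_dot_l and n_dot_h come from the scene's lighting at render time.
    This is a toy sketch: one grayscale channel, no energy-conserving
    microfacet terms, nothing production-grade.
    """
    # Dielectrics reflect roughly 4% at normal incidence; metals tint
    # their reflection by the albedo.
    f0 = 0.04 * (1.0 - metallic) + albedo * metallic
    diffuse = albedo * (1.0 - metallic) * max(n_dot_l, 0.0)
    # Rougher surfaces get a broader, dimmer highlight.
    spec_power = max(2.0 / max(roughness ** 2, 1e-4) - 2.0, 1.0)
    specular = schlick_fresnel(f0, n_dot_h) * max(n_dot_h, 0.0) ** spec_power
    return diffuse + specular * max(n_dot_l, 0.0)

# The same "painted metal" parameters lit under two different conditions:
bright = shade(0.9, 1.0, 0.3, n_dot_l=1.0, n_dot_h=1.0)
grazing = shade(0.9, 1.0, 0.3, n_dot_l=0.2, n_dot_h=0.5)
```

The material definition never changes between the two calls; only the lighting inputs do, which is why a PBR material authored once can be reused across many scenes and even many games.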
This is the sort of progress that can push back against rising costs (which are ultimately controlled by project scoping combined with the intelligent use of many of these techniques to make worked hours go much further) while fidelity also increases. The initial choice between sustainable development costs and expensive-looking games is far from as clear-cut as it can sometimes sound. When we look at the many photo modes that have come to games, which often produce extremely clean and detailed versions of what the game actually looks like in action, we should remember that this is already the potential visual detail of the current game assets. We’re just a bit of hardware performance and real-time technique away from realising it.



