For those wondering how the buildings are generated: as a GIS person, it looks to me like there are at least four different approaches. For areas like GoldenJoel’s apartment, the building models seem to be generated by photogrammetry from low- to medium-altitude aerial imagery. You can tell this is probably the approach they used when the buildings look a little blobby but have accurate textures. This technique requires a lot of overlapping images of an area, though, so it might not work where all they have is satellite or high-altitude aerial imagery. For places like North Korea, where the imagery quality is probably lower, they might be using an AI-based technique to recognize buildings and then generate them procedurally. This looks like what they’re using for the trees too.
Photogrammetry would not produce good results in cities with tall buildings, however: either the resulting geometry would be too high-poly to render decently, or the results would just look too strange. So my guess is that for the bigger cities they are using local building-footprint data to generate simple buildings, and then procedurally generating the textures based on building typology and location. A lot of cities and jurisdictions maintain databases of building footprints; they are probably using shapefile data that also carries attributes like building height. This is really simple to do: you just extrude the footprint up to the height stated in the data. But it’s also why you get things like the office-block Buckingham Palace and Washington Monument.
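To illustrate what "extrude the footprint" means, here is a minimal sketch in plain Python. This is just a toy illustration of the general technique, not anything the sim actually does; the function name and the mesh layout (vertex/face lists) are my own invention:

```python
def extrude_footprint(footprint, height):
    """Turn a 2D building footprint (a list of (x, y) corner points,
    ordered counter-clockwise) into a simple 3D prism mesh:
    one rectangular wall per footprint edge plus a flat roof."""
    n = len(footprint)
    # Bottom ring at ground level (z = 0), then top ring at roof height.
    vertices = ([(x, y, 0.0) for x, y in footprint]
                + [(x, y, height) for x, y in footprint])
    faces = []
    # Walls: one quad per footprint edge, indexing into `vertices`.
    for i in range(n):
        j = (i + 1) % n
        faces.append([i, j, n + j, n + i])
    # Flat roof: the whole top ring as a single polygon.
    faces.append(list(range(n, 2 * n)))
    return vertices, faces

# A 10 m x 20 m rectangular footprint extruded to a 35 m height
# (the kind of value a shapefile's height attribute would supply).
verts, faces = extrude_footprint([(0, 0), (10, 0), (10, 20), (0, 20)], 35.0)
```

Every building becomes a flat-roofed box, which is exactly why a palace or a monument comes out looking like an office block: the footprint and height are right, but all the actual shape is lost.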
Then there are also some of the more important landmarks, which are obviously hand-modeled!