The thing is, Nintendo don’t keep the costs to consumers down. They famously provide less of a deal on hardware (their costs are lower, but the retail price doesn’t show a similar gap) and worse-than-average markups on official accessories, because they don’t plan to recoup costs via licensing fees for 3rd party games on their platforms.
Going for the $30 SoC instead of the $40 SoC in a portable doesn’t make a major difference to costs, especially when throwing parallax-barrier 3D displays, over-engineered controllers (that still need foam inserts to work properly), and other gimmicks at them clearly shows Nintendo have no issue increasing costs for show. But it does seriously limit the visuals of their systems compared to equivalent hardware (either competing gaming devices or, in the modern era, mainstream phones that also happen to have a game-capable SoC inside).
I’d be a lot happier if Nintendo offered devices (including accessories) radically cheaper than the competition (which is extremely hard given the BoM for most devices of this sort), or enough underlying hardware performance to provide similarly rich experiences. Most of the time (exceptions like the GC are relatively rare), I feel like I get the worst of both worlds: the price gap from what Sony etc offer isn’t big enough to justify the significant performance gap (and often extremely questionable sourcing decisions for their silicon designs/partners and which designs they order a custom chip around; who picks that 3DS GPU and thinks “oh, this is fine for driving a screen that needs two renders per perceived frame the player sees”?).
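To put a rough number on that 3DS point: the top screen shows 400×240 per eye at 60 Hz (public specs, my back-of-envelope numbers, not anything measured), so turning the 3D slider on simply doubles the pixels the GPU has to push per perceived frame:

```python
# Back-of-envelope pixel throughput for the 3DS top screen.
# Assumed public specs: 400x240 visible per eye, 60 Hz refresh.
EYE_W, EYE_H, HZ = 400, 240, 60

mono_pixels_per_s = EYE_W * EYE_H * HZ       # 3D off: one render per frame
stereo_pixels_per_s = mono_pixels_per_s * 2  # 3D on: two renders per perceived frame

print(mono_pixels_per_s)    # 5,760,000 pixels/s
print(stereo_pixels_per_s)  # 11,520,000 pixels/s
```

Double the fill-rate (and vertex work, roughly) demand on an already modest GPU, before overdraw is even considered.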
Edit: to be clear, this stuff is tough and involves predicting the future a bit. See the XB1 paying for a load of transistors as SRAM and going with four DDR3 memory controllers on its SoC because Microsoft knew they wanted 8GB of unified RAM (so you pick DDR3 early on because it’ll be cheap, have to go up to four channels to even possibly feed the system, and throw in SRAM as an L4 cache on the chip, taking away from the GPU’s die area or ballooning the chip costs). Sony go the other direction: they use that silicon for a 50% larger GPU and pick GDDR5 memory controllers, which still end up hooked up to 8GB of (vastly faster) RAM as prices work out during development. Both end up being about the same sized chip, but not at all the same performance per mm². It’s bad luck that the XB1 ends up with a significantly less powerful chip than the PS4 due to early design decisions. But also, purely looking at them as chips you’re buying to play games on, the PS4 made the right decisions and is a much better final chip.
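For the curious, the bandwidth gap those memory choices created falls out of public figures (both consoles use a 256-bit main-RAM bus; XB1 pairs it with DDR3-2133, PS4 with 5.5 GT/s GDDR5 — these are my numbers from the published specs, a sketch rather than anything authoritative):

```python
# Rough peak main-RAM bandwidth: bus width (bytes) x transfer rate.
# Public figures: 256-bit bus on both; DDR3-2133 on XB1, 5.5 GT/s GDDR5 on PS4.
BUS_BYTES = 256 // 8  # 32 bytes moved per transfer

xb1_gb_s = 2133e6 * BUS_BYTES / 1e9  # DDR3-2133
ps4_gb_s = 5500e6 * BUS_BYTES / 1e9  # 5.5 GT/s GDDR5

print(round(xb1_gb_s, 1))  # ~68.3 GB/s
print(round(ps4_gb_s, 1))  # 176.0 GB/s
```

Roughly a 2.5× gap in main-RAM bandwidth, which is why the XB1 needed that on-die SRAM scratchpad at all — and why it cost GPU area to get it.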