Meta’s Board Gaming AI Learned Not To Lie

Late last year, Facebook parent company Meta announced Cicero, a new machine learning tool designed to play the board game Diplomacy at a high level alongside human players. In its announcement, the company made lofty claims about the impact that the AI, which uses a language model to simulate strategic reasoning, could have on the future of AI development and human-AI relations.


[AI] are not terrifyingly powerful products of circumstance, but tools built by human hands to human ends—dangerous in the deeply mundane ways that all human tools can be.

Good insight here that is sadly lacking in a lot of discussions about ML/AI/AGI. It’s usually the breathless AI evangelists (fabulists) waxing poetic about what AI can do vs. the artificially ignorant cynics who read a long Twitter thread once and now think they know everything about how dangerous AI is.


This was really well written!

However, Cicero does not lie.

I think this is one of the most interesting things you can add to a game where diplomacy is a necessary tactic. Having one player who is guaranteed never to lie to the other players is a really interesting game mechanic.