Non-human style Go AI


#1

So this is a kinda follow-up topic to that Dota2 AI.

To summarise, Go is a tricky game. It’s particularly tricky for computers that try to iterate over all the possible moves: with roughly 250 legal moves per turn versus about 35 in chess, you can’t “brute force” it the way chess engines do, even with modern computing power.

Last year, Google managed to beat top human players with their AI. It started out by studying a huge database of games by strong human players, finding patterns and rules it could apply that led to victory. Basically it did the computer equivalent of being immersed in the play of the masters, to seed the AI with how Go is played well. From there, it refined itself by using the speed of computers to simulate endless games with slight variations and find more rules to increase the odds of winning. This made an AI that played Go roughly like good Go players, and it played so well that it beat experts.

This year, Google took the same rough framework but didn’t start out with that huge database of expert human players’ games. The AI got the rules of the game and started playing itself. It didn’t start the rule-learning refinement process at the level of an expert but at the level of a total novice. It played variants of itself and worked out rules for winning. The important difference is that this means it plays games in ways that aren’t bound by what the current expert players do. The task is harder (which is where the technical breakthroughs come in: working out how to do this efficiently) but the reward is an AI that doesn’t play Go like a very good human.
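
If that sounds abstract, here’s the rough shape of the idea as a toy Python sketch. This is nowhere near the real AlphaGo Zero (which uses a deep neural network plus Monte Carlo tree search); it’s just a made-up agent that gets the rules of a tiny Nim-style game and the win/loss results of games it plays against itself, and nothing else, yet still ends up rediscovering the winning strategy:

```python
import random
from collections import defaultdict

# Toy self-play learner (illustration only, nothing like AlphaGo Zero).
# Game: 10 stones, players alternately take 1 or 2, whoever takes the
# last stone wins. The agent's only inputs are these rules and the
# outcomes of games it plays against itself.

N_STONES = 10
EPSILON = 0.1        # how often to try a random move instead of the "best" one
LEARNING_RATE = 0.1
value = defaultdict(lambda: 0.5)  # estimated win chance for the player to move

def legal_moves(stones):
    return [m for m in (1, 2) if m <= stones]

def choose_move(stones):
    # Mostly pick the move that leaves the opponent in the worst spot,
    # occasionally explore a random legal move.
    moves = legal_moves(stones)
    if random.random() < EPSILON:
        return random.choice(moves)
    return min(moves, key=lambda m: value[stones - m])

def self_play_game():
    # The agent plays both sides; returns the positions visited and the winner.
    stones, player, history = N_STONES, 0, []
    while stones > 0:
        stones -= choose_move(stones)
        history.append((player, stones))
        player = 1 - player
    return history, history[-1][0]   # whoever took the last stone won

def update(history, winner):
    # Nudge each position's value toward what actually happened in this game.
    for mover, stones_after in history:
        to_move = 1 - mover
        target = 1.0 if to_move == winner else 0.0
        value[stones_after] += LEARNING_RATE * (target - value[stones_after])

for _ in range(20000):
    update(*self_play_game())

# Positions with a multiple of 3 stones left should end up with clearly low
# values: the agent has worked out, from self-play alone, that they are lost.
print({s: round(value[s], 2) for s in range(1, N_STONES + 1)})
```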

For example, joseki are specialised sequences of well-known moves that take place near the edges of the board. (Their scripted nature makes them a little like chess openings.) AlphaGo Zero discovered the standard joseki taught to human players. But it also discovered, and eventually preferred, several others that were entirely of its own invention. The machine, says David Silver, who led the AlphaGo project, seemed to play with a distinctly non-human style. The result is a program that is not just superhuman, but crushingly so.

Some really good commentary/notes on this.


#2

Why do we have this obsession with creating Skynet?

Nobody watched the documentary The Terminator?


#3

Not particularly surprised that as AI becomes more advanced it starts to become less imitative. There’s a kind of subconscious assumption that AI advancements can be measured on a scale of “how close is it to human”, but in the end a “digital intelligence” at the same level as our own isn’t going to resemble us on a fundamental level. It needs to be approached under the assumption that we’re creating a new species, not an imitation of ourselves. If that makes sense.

Like, we’ve had cases like this a few times in recent years, on a larger scale, where machines have learned to do things like detect illnesses, but we still don’t know exactly how they’re doing it. To that end I reckon we should start hearing about research into getting machines to articulate these “thoughts” soon. We can’t just read the program files so easily anymore, so we need to teach them not only how to speak our languages, but to actually understand and explain things.

And yo, that’s when shit’s gonna get good. That’s when we get giant Axiom Verge style talking faces that we ask questions to and get complex answers.

Am psyched for when the machines replace us yall, can’t wait for our robot children to take over from our sorry asses.


#4

Good times for a misanthrope


#5

I am curious about the AI future, but would also like to live.


#6

There are two things at play here that sound similar but are very different at a technical level: there are plenty of examples out there of machine learning systems where computers learn how to execute a task and we have no access to their internal representations of the data at play. These are fascinating in their own right (think of speech recognition software that, somehow, has its own grammar and vocabulary representations that are different from ours).
However, these are still learning from human-generated data: when a system learns how to detect illness, we are telling it whether the patient in each file it gets is ill or not. Even if its internal representation of illness is completely alien to us, it is still based on our own knowledge in a very direct way.
This weird new Go AI bypasses this issue altogether. The only human input other than the design of the AI itself is the rules of the game. This approach is still super limited (it only works in situations where there is an easy quantifiable way of measuring how good a certain way of playing is), but getting unsupervised machines to be better than humans without any external data is very very scary to me.
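
To make that “quantifiable way of measuring” point concrete, here’s the difference as a couple of totally made-up Python stubs (just to show where the learning signal comes from in each case):

```python
# Supervised case: the target comes from a human label attached to the data.
def supervised_target(labelled_ill_by_doctor: bool) -> float:
    return 1.0 if labelled_ill_by_doctor else 0.0

# Self-play case: the target comes from the rules alone, by scoring the game.
def self_play_target(black_score: float, white_score: float) -> float:
    return 1.0 if black_score > white_score else -1.0
```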


#7

it’s definitely pretty cool how using a different, non-human-imitating approach ended up way more effective than the usual methods of imitation or brute force. i imagine approaches like this can be useful for learning new methods that aren’t limited by our experience or previous assumptions. i dunno, i think stuff like that is awesome.


#8

I think this is totally fascinating and, potentially, indicative of the future of AI more broadly. Making robots that can do what humans can really well is one thing; making things that totally revolutionise how we do things is another.

Now, is that a good thing? Not necessarily, but interesting all the same.

I’m scared of the robo-feudal future, y’all…