Alpha Go to play Ke Jie in May (Go forum)


Alpha Go to play Ke Jie in May
  • David J Bush ★ at 2017-04-13

    https://arstechnica.com/information-technology/2017/04/deepm...

    It’s a three game match.

    Hassabis: "Instead of diminishing the game, as some feared, artificial intelligence has actually made human players stronger and more creative. It’s humbling to see how pros and amateurs alike, who have pored over every detail of AlphaGo’s innovative game play, have actually learned new knowledge and strategies about perhaps the most studied and contemplated game in history."

  • purgency at 2017-04-13

    I read in some leaks, before the official announcement, that the version of AlphaGo playing will be one that has learned the game by itself from scratch, without learning from human games. There is no mention of that in the official news, so it may not be true; but it would be interesting. At the very least I don’t expect the version that has learned from human play to lose even one of the games; after the online games that AlphaGo won 60:0 against pro players, that seems unlikely, although then again those were games with something like 15 seconds per move, which should be a disadvantage for the human.

  • Sighris at 2017-04-26

    I’m excited! 

    http://www.usgo.org/news/2017/04/world-1-ke-jie-9p-to-take-on-alphago-in-china/ 

    From an AI perspective it makes sense to have a decent starting point, so it makes sense to start with pro games. But OTOH, it also makes sense to remove the possibility that we humans have a blind spot (for example, maybe our joseki are not seeing the “big picture”/whole board and are too focused on the corners). As far as the next match goes, though, it matters little to me whether AlphaGo has “learned” from pro/human games or whether "the version of AlphaGo playing will be one that has learned the game by itself from scratch without learning from human games."

  • Carroll at 2017-05-02

    Do the good players here agree with Brady Daniels’ analysis of the two AlphaGo games with similar joseki?

    https://www.youtube.com/watch?v=6rB2cYOeppQ

    In the second game are the ko threats as big as he says?


  • lazyplayer at 2017-05-02

    Carroll, I’ve no idea exactly how big they are, but it’s at least plausible to me that they’re big...

    Very interesting video anyway, I’ve learned something from it, even if I’ll probably never play Go... :)

  • gamesorry at 2017-05-03

    Do you mean the one around 35’15''? I think he’s correct :)

  • Crelo at 2017-05-04

    Well, Brady Daniels tells nice stories about the games, quite entertaining. It helps you learn a certain way of thinking during the games, but to really understand what is happening one should read ahead like AlphaGo.

    I like AlphaGo games, but I don’t think humans can imitate it. I mean, we can play the same moves, but the games will still be messy; humans don’t know as well as AlphaGo when they are ahead, and certainly don’t know the value of moves as precisely, not even professionals. I really think AlphaGo can now give at least a two-stone handicap to anybody.

    He is correct about the ko :-)

  • lazyplayer at 2017-05-04

    Crelo, but is AlphaGo’s play correct, or does it only “work” in self-play and against human players?

    Probably nobody knows, but the question is interesting anyway. I think it probably is indeed the right way to play and it should be copied.

  • lazyplayer at 2017-05-04

    To put the same idea another way: counting territory accurately in Go is clearly necessary for near-perfect play, but the same can’t be said for counting “centipawns” in chess.

  • Crelo at 2017-05-05

    Lazyplayer, there is no way to tell whether the AI plays correct (perfect) moves or is just slightly better than us. There is also the danger of trusting the AI too much; maybe there really are better moves. We must get used to this new world. :-)

  • Florian Jamain at 2017-05-05

    Do you think the AI is capable of beating the best players in a long game?

    Something like 1 month per player.

    I will believe it only when I see it. For now, I just believe the AI is capable of exploiting the fact that humans make mistakes because of too little time.

  • lazyplayer at 2017-05-05

    Florian, but in reality it’s very hard for a human to remember the analysis already done. We have very bad short-term memory. If you solve this problem, then indeed humans should still be able to win when given enough time, I guess. Well, maybe you would need more time than their lifetimes... :)

  • Florian Jamain at 2017-05-05

    Then, if you prefer, you can do a match like this:

    150 of the best humans play the same game against the AI, over the course of a year. I guess that would be enough.

    The goal is just to know whether the AI really plays better than humans in general, whether there is no hope anymore. Because in the end that is the real question: is the AI playing better than humans in the general sense?
    I don’t believe it; they will need to show me.

  • David J Bush ★ at 2017-05-06

    Well Florian, I refer you back to the quote at the beginning of this thread. Pros are learning “new knowledge and strategy” from studying Alpha Go’s games. In the video, Hassabis briefly shows a couple of techniques that had previously been under-appreciated. If that’s not playing better than humans in the general sense, then what is?

  • Crelo at 2017-05-06

    Of course the AI is playing better, because it wins :-) AlphaGo is not an exception anymore; its software structure can be reproduced.

    https://qz.com/936654/googles-alpha-go-now-has-a-serious-game-playing-rival-with-tencents-jueyi-or-fineart/

    Anyway, the AI is still a human creation, so we should feel proud; actually, we will learn more this way.

  • Florian Jamain at 2017-05-06

    David, rest assured I am not saying that there is nothing to learn from AI, or that AI NEVER plays better than humans.
    That is something completely different.

    My question is different. The fact is that AIs in general do not play “really” better when given a lot of time; in a 2 or 3 hour game (per player) they already play very, very well.
    A human makes mistakes in a 2 hour game, and those mistakes could be corrected in a 1 month game. The AI will also play better in a 1 month game, but you can be sure it won’t improve as much as the human will.

    Crelo is agreeing with me, even if he does not realize it.
    He said that the AI can give no more than two stones to the best players in a 2 hour game; this almost implies that in a one-month game humans could beat the AI.

  • Crelo at 2017-05-06

    I am afraid the computer would advance more over one month than the humans would.

  • Sighris at 2017-05-08

    When you give an AI significantly more time (assuming the computer resources are fully available), it can use all of that time; when you give humans more time, they need to sleep and eat during some of it... And if the AI’s programming were adjusted for the larger amount of time (again assuming the computer resources are fully available), think about how deeply it could probe into variations (I’m thinking about “brute force” checking of the move choices made by the core process). I’m with Crelo at 2017-05-06: I think the computer AI would gain more than humans from the extra game time.
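    (To illustrate the “it can use all of that time” point: a search-based engine is essentially an anytime loop, so extra wall-clock time turns directly into more simulations. Here is a minimal Python sketch of that idea; refine_once is a made-up placeholder for one more batch of search, not anything from AlphaGo’s actual code.)

    import time

    def choose_move_with_budget(refine_once, budget_seconds):
        # Keep refining the current best move until the clock runs out.
        # `refine_once` is a hypothetical callable that runs one more
        # batch of simulations and returns its current best move.
        deadline = time.monotonic() + budget_seconds
        best_move = None
        while time.monotonic() < deadline:
            best_move = refine_once()
        return best_move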

  • Carroll at 39 hours ago

    https://phys.org/news/2017-05-ready-rematch-machine-ancient-game.html

    It is tomorrow, but I could not find out whether it will be streamed. Does anyone know?

  • Arek Kulczycki at 8 hours ago

    The question that is most interesting to me, and which also comes up a couple of times here, is whether AlphaGo plays better in general / whether AlphaGo plays correctly. What this basically means is: does AlphaGo play better openings, or does it just recover in the endgame?

    The experiment that should be done (roughly sketched below) is:

    1) play N moves, human vs AlphaGo
    2) finish that game, AlphaGo vs AlphaGo
    3) assume that the winner played the better opening
    4) maybe try another N
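    A rough Python sketch of steps 1-3, where the board, human and engine objects and their method names are hypothetical placeholders for whatever Go library and engine interface one would actually use (nothing here is AlphaGo’s real API):

    def opening_experiment(board, human, engine, n_opening_moves):
        # Step 1: human and engine alternate for the first N moves.
        # `human.choose_move`, `engine.genmove`, `board.play`, etc. are
        # assumed interfaces, named only for illustration.
        for move_number in range(n_opening_moves):
            if move_number % 2 == 0:
                move = human.choose_move(board)   # human takes Black
            else:
                move = engine.genmove(board)      # engine takes White
            board.play(move)

        # Step 2: the engine finishes the game against itself.
        while not board.game_over():
            board.play(engine.genmove(board))

        # Step 3: credit the winner with the better opening.
        # Step 4 is simply repeating this with other values of N.
        return board.winner()

    Averaging the result over many games and several values of N would help separate “plays better openings” from “just recovers in the endgame”, which is the distinction in question.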

  • Tasmanian Devil at 2 hours ago

    It seems that nobody has spotted any particular weaknesses in AlphaGo’s opening play – or they would have exploited them. On occasion it plays surprising moves, but that does not mean they are bad, only that humans have not fully understood their benefits before. A top pro does not give away many points in the endgame anyway, so AlphaGo needs to play good openings in order to win consistently.

    On Sensei’s Library they have started to collect joseki (local patterns) popularized by AlphaGo (I only looked at this briefly).
