Opening Keynote: "The Story of Alpha Go", presented by Aja Huang 7d and Fan Hui 2p
Games
Recording of the 2016 USGC Opening Keynote presented by Google DeepMind's Aja Huang and European Champion Fan Hui 2p!
AGA Website: www.usgo.org/
Follow us on Facebook: /2016usgocongress
Send your Tweets to: /theaga
Comments: 20
We have a proverb to encourage our students -- “看棋涨三级”, literally meaning "watching games improves you by three levels"
52:55 "So the true thing is: We can play everywhere we want, the Go is totally free now for everyone"
What you want to know: will it be released commercially? Under discussion.
great work!
Demis Hassabis, David Silver, and Aja Huang, a 20-member team. The policy network, reading capability, and the value network. Learn to play joseki, fuseki, the direction of play, good shape, stones working together globally, and now playing freely.
great talk!
Hope to see more alphago games in the future.
1:03:05 "What do you mean by PEOPLE?" Aja is cute :D
very interesting
I really wish Fan Hui would notice that the portable mic is barely on and speak near the podium mic
"We spend several months trying to figure out what went wrong in game 4, then the neural network fixed the problem itself." Skynet is coming.
Did they post the result of game 3 on the Alphago machine?
How about giving AlphaGo a game with a standard fuseki and letting it play against itself? Afterwards, start from the end of the game, and for every move you go back, let the program play itself on each of the 10 best moves it finds, playing each of those moves out to the end. Do that until you are back at move 50 or 30, depending on the fuseki. Then take the winrates and put weights on the possibilities, so you get the expected result after the first 50 moves. That should give a reasonably good estimate of whether the fuseki favours Black or White, right? Done across many fusekis, it might also show whether 6.5 komi is not enough, just right, or too much, while making AlphaGo even stronger.
@dannygjk
7 years ago
Some of what you said is what AlphaGo does.
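The backtracking evaluation proposed above can be sketched in a few lines. This is a purely hypothetical toy, not AlphaGo's actual method: `candidate_moves`, `play_move`, and `playout` are stand-in hooks for an engine's policy network and self-play rollouts, and the sketch branches over the k best moves from one position rather than walking back through a whole game.

```python
def estimate_fuseki_winrate(position, candidate_moves, play_move, playout,
                            k=10, rollouts=20):
    """Estimate Black's winrate from an opening position.

    Expand the k best candidate moves, play each branch out via
    self-play `playout` calls, and average the branch winrates into
    a single estimate. All three callables are hypothetical engine
    hooks; a real implementation would drive a Go engine.
    """
    moves = candidate_moves(position)[:k]
    if not moves:
        # Terminal position: score it directly.
        return playout(position)
    branch_winrates = []
    for move in moves:
        child = play_move(position, move)
        wins = sum(playout(child) for _ in range(rollouts))
        branch_winrates.append(wins / rollouts)
    # Uniform weights here; the proposal suggests weighting branches.
    return sum(branch_winrates) / len(branch_winrates)

# Deterministic toy demo: every playout reports a Black win.
val = estimate_fuseki_winrate(
    position=0,
    candidate_moves=lambda pos: list(range(5)),
    play_move=lambda pos, move: pos + 1,
    playout=lambda pos: 1.0,
    k=3, rollouts=4)
```

With real engine hooks, repeating this over many standard fusekis would yield the per-fuseki winrate table the comment describes.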
Fan Hui is 2p not 4p right?
In terms of sheer playing performance, Monte Carlo and DCNN leave GNU Go in the dust. But even AlphaGo has the fundamental flaw that it doesn't "know" what it's doing, because neither DCNN nor Monte Carlo embodies the basic concept of commonsense intelligence, namely being able to construct and reason about a meaningful computational model of the dynamics of the world. Here is one example of how a program design which does embody such commonsense can find a better move than AlphaGo did at move 79 in game 4 of its match with Lee Sedol, because it does "know" what it is doing.
text: papers.ssrn.com/sol3/papers.cfm?abstract_id=2818149
video: kzread.info/dash/bejne/gKKnmZJpZJq4qNI.html
The program is designed, but it's up to others to turn it into software. Could it beat AlphaGo across the board, and with much less hardware? The design includes new algorithms for computing territory and influence, group strength, and strategic and tactical reasoning. It can talk about what it's thinking, so it would be useful to people learning Go.
@dannygjk
7 years ago
The fact that another piece of software finds something the first piece of software didn't find is not proof of superiority. Also, is that other software also a neural-net program (or does it use fuzzy logic)?
Hmmmmm, as soon as I saw those two cupboards of hardware, my sense of being impressed faded... they need all that just to have a chance of beating a tiny human brain. It's all about the computing, the positional judgement, the value network, etc., but the sheer material disparity made it seem unimpressive to me.
@dtracers
7 years ago
Themba Mabona That hardware is still not as powerful, in raw compute, as the human brain. I know that's hard to believe, but our brains are very, very powerful. We are only now getting computers that can process as fast as we do, and those take up an entire room.
@pianoforte611
7 years ago
The new AlphaGo uses 90% less computing power and is much stronger. That's how technology works: you first learn how to do something inefficiently, then you improve the process until it's optimized.