-
(2018-08-29) We implemented an AlphaGo-based Gomoku AI program for 8-by-8 Free Style Gomoku. You can also access our presentation slides from the 2018 Likelihood Lab Summer Research Conference.
-
(2018-09-22) We combined our original AlphaGomoku program with Curriculum Learning, a Double Networks Mechanism, and Winning Value Decay to extend our AI to 15-by-15 Free Style Gomoku. Before we adopted these methods, training a 15-by-15 AlphaGomoku was intractable due to the asymmetry and complexity of the game compared with the simplified 8-by-8 version.
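To illustrate the Winning Value Decay idea, here is a minimal sketch (our own hypothetical formulation, not the project's exact code): the terminal position keeps the full winning value, while earlier positions receive geometrically decayed training targets, so moves far from the eventual win are credited less.

```python
def decayed_values(winning_value, num_moves, decay=0.95):
    """Hypothetical winning-value decay for one game.

    The final position (index num_moves - 1) keeps the full winning
    value; each earlier position's target is multiplied by `decay`
    once per move of distance from the end of the game.
    """
    return [winning_value * decay ** (num_moves - 1 - i) for i in range(num_moves)]
```

With `decay=0.5` and a three-move game won with value 1.0, the targets would be `[0.25, 0.5, 1.0]`: the opening move receives the weakest learning signal.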
-
(2018-09-25) Our research paper is available at: paper, or on arXiv.
-
The training continues... We hope that AlphaGomoku can evolve into a Gomoku grandmaster someday.
The AI adopts a deterministic policy with 400 simulations per move. The first four pictures show games where the AI plays the black stones; the following eight show games where the AI plays the white stones.
Tencent's Gomoku AI plays the black stones. AlphaGomoku adopts a deterministic policy with 400 simulations per move.
The left GIF shows a game self-played by AlphaGomoku; the right GIF shows a game between a human and the AI, where the human plays the black stones. Both AIs run 400 simulations per move.
The AI plays the white stones against a human, adopting a deterministic policy with 400 simulations per move.
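A deterministic policy here typically means that, once the Monte-Carlo tree search has finished its 400 simulations, the AI plays the most-visited root move rather than sampling from the visit distribution. A minimal sketch (the function name and use of NumPy are our assumptions, not the project's exact code):

```python
import numpy as np

def select_move_deterministic(visit_counts):
    """Greedy move selection after MCTS.

    `visit_counts` holds the visit count of each legal move at the
    root after the search. Taking the argmax gives deterministic
    (evaluation-time) play, in contrast to sampling proportionally
    to visit counts, which is common during training for exploration.
    """
    return int(np.argmax(visit_counts))
```

For example, with root visit counts `[10, 400, 30]` the AI would always play move index 1.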
- Zheng Xie
- XingYu Fu
- JinYuan Yu
- Likelihood Lab
- Vthree.AI
- Sun Yat-sen University
We would like to thank Andrew Chen from Vthree.AI and MingWen Liu from ShiningMidas Private Fund for their generous help throughout the research. We are also grateful to ZhiPeng Liang and Hao Chen from Sun Yat-sen University for their support of the training process of our Gomoku AI. Without their support, it would have been hard for us to finish such a complicated task.
- Python 3.6
- tensorflow
- keras
- pygame
- threading
- numpy
- matplotlib
- easygui (optional)
- Execute run.py
- Select mode 2 (AI vs Human) to play against the AI.
- You can also play against different versions of AlphaGomoku by switching the network.