KataGo

Overview

KataGo's public distributed training run is ongoing! See https://katagotraining.org/ for more details, to download the latest and strongest neural nets, or to learn how to contribute if you want to help KataGo improve further! Also check out the computer Go discord channel!

As of 2024, KataGo remains one of the strongest open source Go bots available online. KataGo was trained using an AlphaZero-like process with many enhancements and improvements, and is capable of reaching top levels rapidly and entirely from scratch with no outside data, improving only via self-play. Some of these improvements take advantage of game-specific features and training targets, but many of the techniques are general and could be applied to other games. As a result, early training is immensely faster than in other self-play-trained bots - with only a few strong GPUs for a few days, any researcher/enthusiast should be able to train a neural net from nothing to high amateur dan strength on the full 19x19 board. If tuned well, a training run using only a single top-end consumer GPU could possibly train a bot from scratch to superhuman strength within a few months.

Experimentally, KataGo also tried some limited ways of using external data at the end of its June 2020 run, and has continued to do so into its most recent public distributed run, "kata1" at https://katagotraining.org/. External data is not necessary for reaching top levels of play, but it still appears to provide some mild benefits against some opponents, and noticeable benefits as an analysis tool for a variety of situations that don't occur in self-play but do occur in human games and games that users wish to analyze.

KataGo's engine aims to be a useful tool for Go players and developers, and supports the following features:

  • Estimates territory and score, rather than only "winrate", helping analyze kyu and amateur dan games, not just the moves that would swing the game outcome at pro/superhuman levels of play.
  • Cares about maximizing score, enabling strong play in handicap games when far behind, and reducing slack play in the endgame when winning.
  • Supports alternative values of komi (including integer values) and good high-handicap game play.
  • Supports board sizes ranging from 7x7 to 19x19, and as of May 2020 may be the strongest open-source bot on both 9x9 and 13x13 as well.
  • Supports a wide variety of rules, including rules that match Japanese rules in almost all common cases, and ancient stone-counting-like rules.
  • For tool/back-end developers - supports a JSON-based analysis engine that can batch evaluations across multiple games efficiently and can be easier to use than GTP.

Training History and Research and Docs

Here are some links to some docs/papers/posts about KataGo's research and training!

  • Paper about the major new ideas and techniques used in KataGo: Accelerating Self-Play Learning in Go (arXiv). Many of the specific parameters are outdated, but the general methods continue to be used.

  • Many major further improvements have been found since then, which have been incorporated into KataGo's more recent runs and are documented here: KataGoMethods.md.

  • KataGo has a fully working implementation of Monte-Carlo Graph Search, extending MCTS to operate on graphs instead of just trees! An explanation can be found here: Monte-Carlo Graph Search from First Principles. This explanation is written to be general (not specific to KataGo) and to fill a big gap in the explanatory material in the academic literature; hopefully it can be useful to others!

  • Many thanks to Jane Street for supporting the training of KataGo's major earlier published runs, as well as numerous smaller testing runs and experiments. Blog posts about the initial release and some interesting subsequent experiments:

For more details about KataGo's older training runs, including comparisons to other bots, see Older Training History and Research!

Also, if you're looking for general information about KataGo or how it works, or about some past Go bots besides KataGo, consider the computer Go discord channel.

Where To Download Stuff

Precompiled executables for Windows and Linux can be found at the releases page.

And the latest neural nets are available at https://katagotraining.org/.

Setting Up and Running KataGo

KataGo implements just a GTP engine; GTP ("Go Text Protocol") is the simple text protocol that most Go software uses to talk to engines. KataGo does NOT have a graphical interface of its own. So generally, you will want to use KataGo along with a GUI or analysis program. A few of them bundle KataGo in their download so that you can get everything from one place rather than downloading separately and managing the file paths and commands.
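
For a flavor of what GTP looks like, here is a minimal exchange, copied in spirit from the engine logs quoted later on this page (lines beginning with = are the engine's replies):

play b c3
=

genmove w
= Q16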

GUIs

This is by no means a complete list - there are lots of things out there. But, writing as of 2020, a few of the easier and/or popular ones might be:

  • KaTrain - KaTrain might be the easiest to set up for non-technical users, offering an all-in-one package (no need to download KataGo separately!), modified-strength bots for weaker players, and good analysis features.
  • Lizzie - Lizzie is very popular for running long interactive analyses and visualizing them as they happen. Lizzie also offers an all-in-one package. However, keep in mind that KataGo's OpenCL version may take quite a while to tune and load on the very first startup as described here, and Lizzie does a poor job of displaying this progress as it happens. And in case of an actual error or failure, Lizzie's interface is not the best at explaining these errors and will appear to hang forever. The version of KataGo packaged with Lizzie is quite strong but might not always be the newest or strongest, so once you have it working, you may want to download KataGo and a newer network from the releases page and replace Lizzie's versions with them.
  • Ogatak is a KataGo-specific GUI with an emphasis on displaying the basics in a snappy, responsive fashion. It does not come with KataGo included.
  • q5Go and Sabaki are general SGF editors and GUIs that support KataGo, including KataGo's score estimation, and many high-quality features.

Generally, for GUIs that don't offer an all-in-one package, you will need to download KataGo (or any other Go engine of your choice!) and tell the GUI the proper command line to run to invoke your engine, with the proper file paths involved. See How To Use below for details on KataGo's command line interface.

Windows and Linux

KataGo currently officially supports both Windows and Linux, with precompiled executables provided with each release. On Windows, the executables should generally work out of the box; on Linux, if you encounter issues with system library versions, building from source is usually a straightforward alternative. Not all OS versions and compilers have been tested, so if you encounter problems, feel free to open an issue. KataGo can also of course be compiled from source, on Windows via MSVC or on Linux via usual compilers like g++, documented further down.

MacOS

The community also provides KataGo packages for Homebrew on MacOS - releases there may lag behind official releases slightly.

Use brew install katago. The latest config files and networks are installed in KataGo's share directory; find them via brew list --verbose katago. A basic way to run KataGo is:

katago gtp -config $(brew list --verbose katago | grep 'gtp.*\.cfg') -model $(brew list --verbose katago | grep .gz | head -1)

You should choose the network according to the release notes here and customize the provided example config, as with every other way of installing KataGo.

OpenCL vs CUDA vs TensorRT vs Eigen

KataGo has four backends, OpenCL (GPU), CUDA (GPU), TensorRT (GPU), and Eigen (CPU).

The quick summary is:

  • To easily get something working, try OpenCL if you have any good or decent GPU.
  • For often much better performance on NVIDIA GPUs, try TensorRT, but you may need to install TensorRT from Nvidia.
  • Use Eigen with AVX2 if you don't have a GPU or if your GPU is too old/weak to work with OpenCL, and you just want a plain CPU KataGo.
  • Use Eigen without AVX2 if your CPU is old or on a low-end device that doesn't support AVX2.
  • The CUDA backend can work for NVIDIA GPUs with CUDA+CUDNN installed but is likely worse than TensorRT.

More in detail:

  • OpenCL is a general GPU backend that should be able to run with any GPUs or accelerators supporting OpenCL, including NVIDIA GPUs, AMD GPUs, as well as CPU-based OpenCL implementations and devices like Intel Integrated Graphics. This is the most general GPU version of KataGo and doesn't require a complicated install like CUDA does, so it is the most likely to work out of the box as long as you have a fairly modern GPU. However, it also needs some time to tune itself when run for the very first time. For many systems this will take 5-30 seconds, but on a few older/slower systems it may take many minutes or longer. Also, the quality of OpenCL implementations is sometimes inconsistent, particularly for Intel Integrated Graphics and for AMD GPUs more than several years old, so it might not work on very old machines or on specific buggy newer AMD GPUs; see also Issues with specific GPUs or GPU drivers.
  • CUDA is a GPU backend specific to NVIDIA GPUs (it will not work with AMD or Intel or any other GPUs) and requires installing CUDA and CUDNN and a modern NVIDIA GPU. On most GPUs, the OpenCL implementation will actually beat NVIDIA's own CUDA/CUDNN at performance. The exception is for top-end NVIDIA GPUs that support FP16 and tensor cores, in which case sometimes one is better and sometimes the other is better.
  • TensorRT is similar to CUDA, but uses NVIDIA's TensorRT framework to run the neural network with more optimized kernels. For modern NVIDIA GPUs, it should work whenever CUDA does and will usually be faster than CUDA or any other backend.
  • Eigen is a CPU backend that should work widely without needing a GPU or fancy drivers. Use this if you don't have a good GPU or really any GPU at all. It will be quite significantly slower than OpenCL or CUDA, but on a good CPU can still often get 10 to 20 playouts per second if using the smaller (15 or 20) block neural nets. Eigen can also be compiled with AVX2 and FMA support, which can provide a big performance boost for Intel and AMD CPUs from the last few years. However, it will not run at all on older CPUs (and possibly even some recent but low-power modern CPUs) that don't support these fancy vector instructions.
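
If you're unsure whether your CPU supports AVX2, one quick check on Linux (a generic shell one-liner, not a KataGo command) is:

grep -c avx2 /proc/cpuinfo   # prints a nonzero count if AVX2 is supported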

For any backend, it's recommended that you also tune the number of threads used if you care about optimal performance, as it can make a factor of 2-3 difference in speed. See "Tuning for Performance" below. However, if you mostly just want to get it working, the default untuned settings should still be reasonable.

How To Use

KataGo is just an engine and does not have its own graphical interface. So generally you will want to use KataGo along with a GUI or analysis program. If you encounter any problems while setting this up, check out Common Questions and Issues.

First: Run a command like this to make sure KataGo is working, with the neural net file you downloaded. On OpenCL, it will also tune for your GPU.

./katago.exe benchmark                                                   # if you have default_gtp.cfg and default_model.bin.gz
./katago.exe benchmark -model <NEURALNET>.bin.gz                         # if you have default_gtp.cfg
./katago.exe benchmark -model <NEURALNET>.bin.gz -config gtp_custom.cfg  # use this .bin.gz neural net and this .cfg file

It will tell you a good number of threads. Edit your .cfg file and set "numSearchThreads" to that many to get best performance.
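
For example, if the benchmark reports that 8 threads performed best (8 is a made-up number here; use whatever your own benchmark run reports), the line in your .cfg would look like:

numSearchThreads = 8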

Or: Run this command to have KataGo generate a custom gtp config for you based on answering some questions:

./katago.exe genconfig -model <NEURALNET>.bin.gz -output gtp_custom.cfg

Next: A command like this will run KataGo's engine. This is the command to give to your GUI or analysis program so that it can run KataGo.

./katago.exe gtp                                                   # if you have default_gtp.cfg and default_model.bin.gz
./katago.exe gtp -model <NEURALNET>.bin.gz                         # if you have default_gtp.cfg
./katago.exe gtp -model <NEURALNET>.bin.gz -config gtp_custom.cfg  # use this .bin.gz neural net and this .cfg file

You may need to specify different paths when entering KataGo's command for a GUI program, e.g.:

path/to/katago.exe gtp -model path/to/<NEURALNET>.bin.gz
path/to/katago.exe gtp -model path/to/<NEURALNET>.bin.gz -config path/to/gtp_custom.cfg

Other Commands:

Run a JSON-based analysis engine that can do efficient batched evaluations for a backend Go service:

  • ./katago analysis -model <NEURALNET>.gz -config <ANALYSIS_CONFIG>.cfg

Run a high-performance match engine that will play a pool of bots against each other, sharing the same GPU batches and CPUs:

  • ./katago match -config <MATCH_CONFIG>.cfg -log-file match.log -sgf-output-dir <DIR TO WRITE THE SGFS>

Force OpenCL tuner to re-tune:

  • ./katago tuner -config <GTP_CONFIG>.cfg

Print version:

  • ./katago version

Tuning for Performance

The most important parameter to optimize for KataGo's performance is the number of threads to use - this can easily make a factor of 2 or 3 difference.

Secondarily, you can also read over the parameters in your GTP config (default_gtp.cfg or gtp_example.cfg or configs/gtp_example.cfg, etc). A lot of other settings are described in there that you can set to adjust KataGo's resource usage, or choose which GPUs to use. You can also adjust things like KataGo's resign threshold, pondering behavior or utility function. Most parameters are documented directly inline in the example config file. Many can also be interactively set when generating a config via the genconfig command described above.
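
As a rough illustration, a few commonly adjusted lines in a GTP config look like the following (the names match the example config, but the values shown are placeholders rather than recommendations; see the config's inline comments for the exact semantics):

numSearchThreads = 16    # number of search threads; tune via the benchmark command
ponderingEnabled = false # whether to keep searching during the opponent's turn
allowResignation = true  # whether KataGo may resign at all
resignThreshold = -0.90  # how hopeless the position must be before resigning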

Common Questions and Issues

This section summarizes a number of common questions and issues when running KataGo.

Issues with specific GPUs or GPU drivers

If you are observing any crashes in KataGo while attempting to run the benchmark or the program itself, and you have one of the below GPUs, then this is likely the reason.

  • AMD Radeon RX 5700 - AMD's drivers for OpenCL for this GPU have been buggy ever since this GPU was released, and as of May 2020 AMD has still never released a fix. If you are using this GPU, you will just not be able to run KataGo (Leela Zero and other Go engines will probably fail too) and will probably also obtain incorrect calculations or crash if doing anything else scientific or mathematical that uses OpenCL. See for example these reddit threads: [1] or [2] or this L19 thread.
  • OpenCL Mesa - These drivers for OpenCL are buggy. Particularly if on startup before crashing you see KataGo printing something like Found OpenCL Platform 0: ... (Mesa) (OpenCL 1.1 Mesa ...) ... then you are using the Mesa drivers. You will need to change your drivers, see for example this KataGo issue which links to this thread.
  • Intel Integrated Graphics - For weaker/older machines or laptops or devices that don't have a dedicated GPU, KataGo might end up using the weak "Intel Integrated Graphics" built in alongside the CPU. Often this will work fine (although KataGo will be slow and only get a tiny number of playouts compared to using a real GPU), but various versions of Intel Integrated Graphics can also be buggy and not work at all. If a driver update doesn't fix it for you, then the only solution is to upgrade to a better GPU. See for example this issue or this issue, or this other GitHub issue.

Common Problems

  • KataGo seems to hang or is "loading" forever on startup in Lizzie/Sabaki/q5go/GoReviewPartner/etc.

    • Likely you have some misconfiguration, have specified file paths incorrectly, have a bad GPU, etc. Many of these GUIs do a poor job of reporting errors and may completely swallow the error message from KataGo that would have told you what was wrong. Try running KataGo's benchmark or gtp directly on the command line, as described above.
    • Sometimes there is no error at all; it is merely that the first time KataGo runs on a given network size, it needs to do some expensive tuning, which may take a few minutes. Again, this is clearer if you run the benchmark command directly on the command line. After tuning, subsequent runs will be faster.
  • KataGo works on the command line, but I'm having trouble specifying the right file paths for the GUI.

    • As described above, you can name your config default_gtp.cfg and name whichever network file you've downloaded to default_model.bin.gz (for newer .bin.gz models) or default_model.txt.gz (for older .txt.gz models). Stick those into the same directory as KataGo's executable, and then you don't need to specify -config or -model paths at all.
  • KataGo gives an error like Could not create file when trying to run the initial tuning.

    • KataGo probably does not have access permissions to write files in the directory where you placed it.
    • On Windows for example, the Program Files directory and its subdirectories are often restricted to only allow writes with admin-level permissions. Try placing KataGo somewhere else.
  • I'm new to the command line and still having trouble knowing what to tell Lizzie/q5go/Sabaki/whatever to make it run KataGo.

  • I'm getting a different error or still want further help.

    • Check out the discord chat where Leela Zero, KataGo, and other bots hang out and ask in the "#help" channel.
    • If you think you've found a bug in KataGo itself, feel free also to open an issue. Please provide as much detail as possible about the exact commands you ran, the full error message and output (if you're in a GUI, please make sure to check that GUI's raw GTP console or log), the things you've tried, your config file and network, your GPU and operating system, etc.

Other Questions

  • How do I make KataGo use Japanese rules or other rules?

    • KataGo supports some GTP extensions that let developers of GUIs set the rules, but unfortunately as of June 2020, only a few GUIs make use of this. So as a workaround, there are a few options:
      • Edit KataGo's config (default_gtp.cfg or gtp_example.cfg or gtp.cfg, or whatever you've named it) to use rules=japanese or rules=chinese or whatever you need, or set the individual rules koRule, scoringRule, taxRule, etc. to what they should be. See here for where this is in the config, and see this webpage for the full description of KataGo's ruleset.
      • Use the genconfig command (./katago genconfig -model <NEURALNET>.gz -output <PATH_TO_SAVE_GTP_CONFIG>.cfg) to generate a config, and it will interactively help you, including asking you for what default rules you want.
      • If your GUI allows access directly to the GTP console (for example, press E in Lizzie), then you can run kata-set-rules japanese or similar for other rules directly in the GTP console, to change the rules dynamically in the middle of a game or an analysis session.
  • Which model/network should I use?

    • Generally, use the strongest or most recent b18-sized net (b18c384nbt) from the main training site. This will be the best neural net even for weaker machines, since despite being a bit slower than old smaller nets, it is much stronger and more accurate per evaluation.
    • If you care a lot about theoretical purity - no outside data, bot learns strictly on its own - use the 20 or 40 block nets from this release, which are pure in this way and still much stronger than Leela Zero, but also much weaker than more recent nets.
    • If you want some nets that are much faster to run, and each with their own interesting style of play due to their unique stages of learning, try any of the "b10c128" or "b15c192" Extended Training Nets here which are 10 block and 15 block networks from earlier in the run that are much weaker but still pro-level-and-beyond.

Features for Developers

GTP Extensions:

In addition to a basic set of GTP commands, KataGo supports a few additional commands, for use with analysis tools and other programs.

KataGo's GTP extensions are documented here.

  • Notably: KataGo exposes a GTP command kata-analyze that, in addition to policy and winrate, also reports an estimate of the expected score and a heatmap of the predicted territory ownership of every location on the board. (A rough sketch of its usage follows after this list.) Expected score should be particularly useful for reviewing handicap games or games of weaker players. Whereas the winrate for black will often remain pinned at nearly 100% in a handicap game even as black makes major mistakes (until finally the game becomes very close), expected score should make it clearer which earlier moves are losing points that allow white to catch up, and exactly how much or little those mistakes lose. If you're interested in adding support for this to any analysis tool, feel free to reach out, I'd be happy to answer questions and help.

  • KataGo also exposes a few GTP extensions that allow setting what rules are in effect (Chinese, AGA, Japanese, etc). See again here for details.
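
As a rough sketch of what kata-analyze looks like in a raw GTP console (the full field set and exact semantics are specified in the GTP extensions doc; the moves and numbers below are invented purely for illustration, and many reported fields are elided):

kata-analyze interval 100
= info move Q16 visits 487 winrate 0.4953 scoreMean 0.3 pv Q16 D4 Q4 info move D4 visits 412 winrate 0.4949 scoreMean 0.2 pv D4 Q16

The engine keeps printing updated lines like this as the search deepens, until it receives another command.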

Analysis Engine:

KataGo also implements a separate analysis engine that can evaluate much faster due to batching if you want to analyze whole games at once, and that may be much less of a hassle than GTP if you are working in an environment where JSON parsing is easy. See here for details.
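
For a rough flavor of the protocol (the field names here follow the analysis engine documentation, but treat this as a sketch and defer to the docs for the full schema), each query is a single line of JSON on stdin, for example:

{"id":"example","moves":[["B","Q16"],["W","D4"]],"rules":"tromp-taylor","komi":7.5,"boardXSize":19,"boardYSize":19,"analyzeTurns":[2]}

and each analyzed turn comes back as a single line of JSON on stdout.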

KataGo also includes example code demonstrating how you can invoke the analysis engine from Python, see here!
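
As a minimal sketch of driving the analysis engine from Python via a subprocess (the binary, config, and model paths here are placeholders, and the repo's bundled example code should be treated as the authoritative reference):

import json
import subprocess

# Placeholder paths: substitute your own katago binary, analysis config, and model.
proc = subprocess.Popen(
    ["./katago", "analysis", "-config", "analysis.cfg", "-model", "model.bin.gz"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

# One query per line; "id" lets you match responses back to queries.
query = {
    "id": "q1",
    "moves": [["B", "Q16"], ["W", "D4"]],
    "rules": "tromp-taylor",
    "komi": 7.5,
    "boardXSize": 19,
    "boardYSize": 19,
    "analyzeTurns": [2],
}
proc.stdin.write(json.dumps(query) + "\n")
proc.stdin.flush()

# One JSON response arrives per requested turn; "moveInfos" holds per-move
# statistics, with the engine's preferred move typically first.
response = json.loads(proc.stdout.readline())
best = response["moveInfos"][0]
print(best["move"], best["visits"], best["winrate"])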

Compiling KataGo

KataGo is written in C++. It should compile on Linux or OSX via g++ that supports at least C++14, or on Windows via MSVC 15 (2017) and later. Instructions may be found at Compiling KataGo.

Source Code Overview:

See the cpp readme or the python readme for some high-level overviews of the source code in this repo, if you want to get a sense of what is where and how it fits together.

Selfplay Training:

If you'd also like to run the full self-play loop and train your own neural nets using the code here, see Selfplay Training.

Contributors

Many thanks to the various people who have contributed to this project! See CONTRIBUTORS for a list of contributors.

License

Except for several external libraries that have been included together in this repo under cpp/external/ as well as the single file cpp/core/sha2.cpp, which all have their own individual licenses, all code and other content in this repo is released for free use or modification under the license in the following file: LICENSE.

License aside, if you end up using any of the code in this repo to do any of your own cool new self-play or neural net training experiments, I (lightvector) would love to hear about it.


KataGo's Issues

2019 "China Securities Cup" World AI Weiqi Open

Will KataGo participate in the competition?


http://sports.sina.com.cn/go/2019-05-31/doc-ihvhiews6020777.shtml?cre=tianyi&mod=pcpager_focus&loc=22&r=9&rfunc=100&tj=none&tr=9

ALLOWANCE:
Free accommodation is provided for up to 2 people per team. Travel allowances are as shown below:

  • Asia: 10,000 Yuan for each team.
  • Rest of the world: 20,000 Yuan for each team.

PRIZES:
Total prize pool: 800,000 Yuan before tax.

Rank         | 1       | 2       | 3      | 4      | 5-8
Prize (Yuan) | 450,000 | 150,000 | 60,000 | 40,000 | 20,000

Human-AI Pair Weiqi competition: the top eight AI teams in the preliminary competition will each be paired with a human player. A knock-out system is adopted. The winning pair will be rewarded with 20,000 RMB in total (the AI team and the human player are awarded 10,000 RMB each).

Can I use your TensorFlow model?

Hi.

Thank you for your nice research!

I cloned this repository and downloaded b15c192-s279618816-d16499002.zip.
I tried to use play.py etc. in the python directory but failed.

Here is an error I got:

Traceback (most recent call last):
  File "play.py", line 833, in <module>
    saver.restore(session, modelpath)
  File "/Users/yuji/.virtualenvs/python-6ZMcOZYe/lib/python3.6/site- packages/tensorflow/python/training/saver.py", line 1268, in restore
+ compat.as_text(save_path))
ValueError: The passed save_path is not a valid checkpoint: ../models/b15c192-s279618816-d164990022/

I took a look at the expanded directory of b15c192-s279618816-d16499002.zip and appended 'saved_model/variables/variables' to modelpath in the 'saver.restore(session, modelpath)' line, but it did not succeed.
The cause may be the lack of a .meta checkpoint file in the saved_model directory.

I have no idea how to generate the .meta file.
Can I use your TensorFlow model?

Thank you.

Implementing for other board games?

I am trying to implement the AlphaGo algorithm for my own board game. Is this repo capable of being adapted to a different game if I already have the game environment and logic?

Questions on 1.1 release

I had a couple of questions related to the 1.1 release. I am running the Windows build created by AlreadyDone over on the Leela git page.

I notice in Sabaki that the GPU load is high and TDP is at 80-90% when the engine is "idle", i.e. no gtp commands have been given. Leela Zero, by contrast, will not load the GPU until a command is given. Does that mean I have some setting configured incorrectly, or perhaps something related to how Sabaki is trying to interface with it?

My second question is related to variations displayed in Sabaki. I notice that they appear to be capped at 10 moves. Is there a way to expand this?

If these are issues purely with Sabaki I wouldn't expect anyone here to know the answers, but I thought I would ask in case it's a configuration thing with KataGo.

Great work on this project; it seems to be much more flexible than other "zero" methods, such as Leela.

Suggestion: remove models from the source code to reduce the repository size

namely

g103-b6c96-s103408384-d26419149-info.txt
g103-b6c96-s103408384-d26419149.txt.gz
grun2-b6c96-s128700160-d49811312-info.txt
grun2-b6c96-s128700160-d49811312.txt.gz
grun50-b6c96-s156348160-d118286860.txt.gz
run4-s67105280-d24430742-b6c96-info.txt
run4-s67105280-d24430742-b6c96.txt.gz

How to set faster?

Is numSearchThreads limited by the CPU or the GPU?
Should nnMaxBatchSize be equal to numSearchThreads?
Anything else?

Why is the time cost of the OpenCL version less than the CUDA version?

D:\KataGo-1.1\katago12bc = 1.2beta cuda
D:\KataGo12b\katago =1.2beta opencl

# Black: KataGo
# BlackCommand: D:\KataGo-1.1\katago12bc gtp -model d:\model696.txt -config D:\KataGo12b\gtp_example.cfg
# BlackLabel: KataGo:1.2-beta
# BlackVersion: 1.2-beta
# Date: July 19, 2019 1:33:24 PM CST
# Host: PC
# Komi: 7.5
# Referee: -
# Size: 19
# White: KataGo
# WhiteCommand: D:\KataGo12b\katago gtp -model d:\model696.txt -config D:\KataGo12b\gtp_example.cfg
# WhiteLabel: KataGo:1.2-beta
# WhiteVersion: 1.2-beta
# Xml: 0
#
#GAME	RES_B	RES_W	RES_R	ALT	DUP	LEN	TIME_B	TIME_W	CPU_B	CPU_W	ERR	ERR_MSG
0	W+R	W+R	W+R	0	-	302	654	522	0	0	0	
1	B+R	B+R	B+R	0	-	247	523.1	438	0	0	0	
2	B+R	B+R	B+R	0	-	175	485.8	390	0	0	0	
3	B+R	B+R	B+R	0	-	281	689.5	557.9	0	0	0	
4	B+R	B+R	B+R	0	-	225	488.8	398.5	0	0	0	
5	B+R	B+R	B+R	0	-	253	586.4	514.7	0	0	0	
6	B+R	B+R	B+R	0	-	215	472.5	395.5	0	0	0	
7	W+R	W+R	W+R	0	-	208	431.5	374.8	0	0	0	
8	W+R	W+R	W+R	0	-	218	524.6	426.9	0	0	0	
9	W+R	W+R	W+R	0	-	156	305.7	258	0	0	0

Tried compiling on Ubuntu 18, here are some steps

  1. Install the latest cmake

I had to install pip, remove the old cmake, and install a newer version with pip.

  2. Install these packages:
    nvidia-cuda-toolkit
    libcudnn7-dev
    libzip-dev

There may be a few I didn't need to install again, since I had previously installed lz.

SGF files of KataGo training set and evaluations

Hi :) First of all, thanks for an intriguing experiment with an alternative way of training a Go bot! It looks very promising.
My issue is simple: do you have plans to publish the SGFs of the freshest self-play and/or evaluation games?
It would be great to take a look at KataGo's full-strength evaluation games to study its opening ideas, and compare them with the ones from Leela Zero and MiniGo.
I can use my own storage for saving these games, to save the hard disk space for you.

Question of compatibility

Perhaps it is already explained somewhere, so I apologise if I missed it. Is it possible to use KataGo with something like Lizzie, by transporting the weights file over to the installation folder?
I find your project interesting, but I don't have a C++ compiler.

Suspicious noResultValue and score in fillValueTDTargets

The fillValueTDTargets function appears to be accumulating time-decayed averages of the winValue, lossValue, noResultValue, and score parameters, to use as training targets. However, the arithmetic for noResultValue and score uses = rather than +=, which means that the last value wins and no accumulation takes place. Typo or intentional?

noResultValue = weightNow * targets.noResult;
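
(If accumulation was indeed intended, the presumed fix, offered here only as a hypothesis, would be to accumulate the same way as the other targets:)

noResultValue += weightNow * targets.noResult;  // hypothetical fix: accumulate rather than overwrite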

Do you happen to plan to add a bit stronger weight?

difference.zip
1. Do you happen to plan to add a somewhat stronger weight (network)? I continued to lose games by 1.5 points or 0.5 points to MiniGo with the 20-block net this time. If a new weight is added, do I have to set everything up anew? I'm asking this question because I'm curious about it.

2. I'm afraid the way of counting the total points acquired is quite different between the website and HanQ Baduk. For example, when I lose by 0.5 points in HanQ Baduk, I win by 0.5 points on the website. I'm posting this because the problem should be investigated and corrected soon.

(Screenshot 1: the opponent's win. Screenshot 2: https://online-go.com/game/18355480, my win as Black.)

CUBLAS_STATUS_NOT_INITIALIZED

I installed KataGo and it works fine most of the time, but sometimes I get the above error. Just retrying fixes it.

loadsgf command shows a warning; how do I solve it?

loadsgf test.sgf
WARNING: Loaded sgf has rules koSIMPLEscoreAREAsui0komi7.5 but GTP is set to rules koPOSITIONALscoreAREAsui1komi7.5

(;GM[1]FF[4]CA[UTF-8]
RU[Chinese]SZ[19]KM[7.5]TM[900]
PW[LZ_05db_ELFv2_p800]PB[h_15b_512k_v3200]WR[3195]BR[2678?]DT[2019-07-06]PC[(CGOS) 19x19 Computer Go Server]RE[B+Resign]GN[576490]
;B[dp]BL[899];W[pd]WL[897];B[pp]BL[897];W[dd]WL[893];B[cf]BL[897];W[qq]WL[889];B[qp]BL[893];W[pq]WL[885]
;B[nq]BL[892];W[nr]WL[882];B[fc]BL[889];W[op]WL[879];B[oq]BL[887];W[or]WL[875];B[mq]BL[887];W[oo]WL[873]
;B[rp]BL[887];W[lq]WL[871];B[mr]BL[886];W[rq]WL[870];B[qm]BL[885];W[po]WL[867];B[qo]BL[883];W[lp]WL[865]
;B[lr]BL[882];W[pm]WL[859];B[ql]BL[880];W[jq]WL[857];B[no]BL[878];W[pl]WL[854];B[pk]BL[876];W[qk]WL[850]
;B[nn]BL[874];W[nm]WL[846];B[rk]BL[872];W[qj]WL[844];B[rj]BL[872];W[qi]WL[841];B[ri]BL[870];W[mm]WL[839]
;B[ns]BL[867];W[ps]WL[834];B[lo]BL[866];W[qh]WL[829];B[ln]BL[864];W[sp]WL[824];B[qn]BL[862];W[so]WL[819]
;B[rh]BL[859];W[qg]WL[816];B[rg]BL[859];W[rf]WL[811];B[nk]BL[857];W[mk]WL[806];B[ok]BL[854];W[nl]WL[802]
;B[mj]BL[852];W[lk]WL[798];B[qf]BL[851];W[re]WL[792];B[pf]BL[850];W[oh]WL[787];B[ni]BL[849];W[nh]WL[781]
;B[mh]BL[846];W[nf]WL[775];B[kj]BL[842];W[kk]WL[769];B[jk]BL[840];W[jl]WL[763];B[lj]BL[838];W[il]WL[759]
;B[jn]BL[836];W[sl]WL[753];B[rl]BL[834];W[sm]WL[746];B[om]BL[832];W[ol]WL[743];B[pn]BL[832];W[on]WL[738]
;B[np]BL[830];W[om]WL[732];B[kl]BL[829];W[km]WL[724];B[lm]BL[827];W[ll]WL[716];B[mn]BL[827];W[ml]WL[710]
;B[kl]BL[826];W[oc]WL[703];B[rs]BL[824])


win64 OpenCL version cannot load 20x256 model?

When I use the 6x96 model, it works:

D:\KataGo-1.1>main_opencl gtp -model 6x96.txt -config D:\KataGo-1.1\configs\gtp_example.cfg
KataGo v1.1
Loaded model 6x96.txt
GTP ready, beginning main protocol loop
play b c3
=


genmove w
= Q16

play b c16
=

genmove w
= Q4

Its log:

2019-06-26 07:42:26+0800: GTP Engine starting...
2019-06-26 07:42:26+0800: nnRandSeed0 = 7966693526900077070
2019-06-26 07:42:26+0800: After dedups: nnModelFile0 = 6x96.txt useFP16 false useNHWC false
2019-06-26 07:42:26+0800: Found OpenCL Device 0: GeForce GT 730 (NVIDIA Corporation)
2019-06-26 07:42:27+0800: Loaded neural net with nnXLen 19 nnYLen 19
2019-06-26 07:42:27+0800: OpenCL backend: Model version 5
2019-06-26 07:42:27+0800: KataGo v1.1
2019-06-26 07:42:27+0800: Loaded model 6x96.txt
2019-06-26 07:42:27+0800: GTP ready, beginning main protocol loop
2019-06-26 07:42:49+0800: Controller: play b c3
2019-06-26 07:42:49+0800: = 
2019-06-26 07:43:02+0800: Controller: genmove w
2019-06-26 07:43:24+0800: MoveNum: 1 HASH: E615BF555BD638A9B3D894DA619930F3
   A B C D E F G H J K L M N O P Q R S T
19 . . . . . . . . . . . . . . . . . . .
18 . . . . . . . . . . . . . . . . . . .
17 . . . . . . . . . . . . . . . . . . .
16 . . . . . . . . . . . . . . . @ . . .
15 . . . . . . . . . . . . . . . . . . .
14 . . . . . . . . . . . . . . . . . . .
13 . . . . . . . . . . . . . . . . . . .
12 . . . . . . . . . . . . . . . . . . .
11 . . . . . . . . . . . . . . . . . . .
10 . . . . . . . . . . . . . . . . . . .
 9 . . . . . . . . . . . . . . . . . . .
 8 . . . . . . . . . . . . . . . . . . .
 7 . . . . . . . . . . . . . . . . . . .
 6 . . . . . . . . . . . . . . . . . . .
 5 . . . . . . . . . . . . . . . . . . .
 4 . . . . . . . . . . . . . . . . . . .
 3 . . X . . . . . . . . . . . . . . . .
 2 . . . . . . . . . . . . . . . . . . .
 1 . . . . . . . . . . . . . . . . . . .

koPOSITIONALscoreAREAsui1komi7.5
Time taken: 22.121
Root visits: 100
NN rows: 79
NN batches: 79
NN avg batch size: 1
PV: Q16 Q4 D16 R16
Tree:
: T   2.91c W   3.45c S  -0.54c ( +1.2) N     100  --  Q16 Q4 D16 R16
---White(^)---
Q16 : T  -3.39c W  -3.89c S   0.49c ( -1.2) LCB    4.33c P 19.72% WF 11.30% PSV      23 N      23  --  Q16 Q4 D16 R16
D16 : T  -2.79c W  -3.35c S   0.56c ( -1.1) LCB    5.21c P 19.39% WF 11.17% PSV      21 N      21  --  D16 Q16 Q4 Q3
Q4  : T  -3.49c W  -3.93c S   0.45c ( -1.3) LCB    4.93c P 17.93% WF 11.32% PSV      20 N      20  --  Q4 D16 Q16 Q3
R16 : T  -3.47c W  -3.89c S   0.43c ( -1.4) LCB   15.01c P  6.31% WF 11.28% PSV       8 N       8  --  R16 D16 Q4 P17
C16 : T  -3.69c W  -4.12c S   0.42c ( -1.4) LCB   15.84c P  5.57% WF 11.32% PSV       7 N       7  --  C16 Q16 Q4 E17
R4  : T  -1.82c W  -2.47c S   0.65c ( -1.0) LCB   18.69c P  6.92% WF 11.02% PSV       6 N       6  --  R4 D16 Q16
D17 : T  -0.90c W  -1.78c S   0.88c ( -0.6) LCB   29.08c P  6.02% WF 10.89% PSV       5 N       5  --  D17 Q16 Q4
Q17 : T  -2.03c W  -2.69c S   0.66c ( -1.0) LCB   28.16c P  5.57% WF 11.06% PSV       5 N       5  --  Q17 D16 Q4
Q3  : T   0.85c W  -0.34c S   1.19c ( -0.1) LCB   54.16c P  5.13% WF 10.65% PSV       4 N       4  --  Q3 D16 Q16

2019-06-26 07:43:24+0800: = Q16
2019-06-26 07:44:07+0800: Controller: play b c16
2019-06-26 07:44:07+0800: = 
2019-06-26 07:44:11+0800: Controller: genmove w

But it failed to run with the 20x256 model, as described in the 106th reply at
leela-zero/leela-zero#2431

Fastest config for v1.1

Note: this is the fastest config only, not necessarily the best.
CPU: Intel i5-8500
GPU: NVIDIA RTX 2060
gtp_example.cfg:

numSearchThreads = 36
nnMaxBatchSize = 36
cudaUseFP16 = true
cudaUseNHWC = true

about 2500 visits / second

Additional networks?

Is there a place we can download g1 through g65 networks? I am interested in running some experiments with them.

Startup message

How about writing a startup message to STDERR after initialization so that GUIs know when to begin sending GTP commands? Though I couldn't reproduce the problem myself, GUIs may need to wait for initialization of the engine in certain situations (kaorahi/lizgoban#4).

In addition, Leela Zero shows the network size during initialization. That is also convenient for GUIs.

Wishlist: Support traditional Chinese (scoring) rule?

In the abstract of an article, the author sketched the traditional Chinese (scoring) rule, and the difference between the traditional one and the modern one:

Under the traditional Chinese rules, a player's score was the maximum number of stones he could in theory play on the board. Since every group needs two liberties to live, this rule created a two-point group tax: the player with the more separate groups lost two points for every excess group. (In practice, one point per excess group was subtracted from the player's score and added to his opponent's score so that the total remained 361. {This does not compute; it could convert a clear winner into a loser. Only by adding two points per group could the total be made 361. --wjh}) Modern Chinese rules avoid this by counting both stones and surrounded points.

I wonder whether this could be supported. It could be used to analyze traditional Chinese games, and to understand how good strategies differ between the traditional rule and the modern one.

Document `./write`

Once again, nice work :)

The write command is undocumented. I think it is useful for starting "Master"-like bots.
I assume its usage would be something like:

./write -pool-size 50000 -train-shards 10 -val-game-prob 0.05 -gamesdir <DIRECTORY WITH SGF FILES> -output <TARGET>.h5

Once we get the target h5, how do we include it in the self-play loop?

I will be happy to open a PR with this information.

OpenCL problem with Windows on Intel integrated CPU

I first waited for the tuning to finish; then, whenever I launch it, it spits out:

KataGo v1.2
Using OpenCL Device 0: Intel(R) HD Graphics 5500 (Intel(R) Corporation) OpenCL 2.0
Loaded tuning parameters from: ./KataGoData/opencltuning/tune_gpuIntelRHDGraphics5500_x19_y19_c96_mv5.txt
Uncaught exception: CL_BUILD_PROGRAM_FAILURE
BUILD LOG FOR xgemmDirectProgram ON DEVICE 0
fcl build 1 succeeded.

and then it quits

What is the meaning of the labels 1, 2, 3 on the board in gtp.log?

koPOSITIONALscoreAREAsui1komi7.5
Time taken: 24.305
Root visits: 1000
NN rows: 254
NN batches: 254
NN avg batch size: 1
PV: B13 N4 O3 M4 Q6 Q7 Q5 S5 S4 P7 M3 S3 T4 R3 R4 Q3 P3 L3 L2
Tree:
: T -25.82c W -24.00c S  -1.82c ( -2.3) N    1000  --  B13 N4 O3 M4 Q6 Q7 Q5
---Black(^)---
B13 : T  25.07c W  23.32c S   1.74c ( +2.2) LCB   27.33c P 64.57% WF 10.36% PSV     938 N     938  --  B13 N4 O3 M4 Q6 Q7 Q5 S5
B15 : T  49.00c W  44.68c S   4.31c ( +5.9) LCB   93.82c P  8.59% WF  6.38% PSV      11 N      11  --  B15 B16 B14 C13 B13 C12 C11
Q6  : T  40.84c W  37.66c S   3.18c ( +4.2) LCB   61.23c P  5.18% WF  7.61% PSV      10 N      10  --  Q6 Q7 Q5 S5 S4 P7 B13 S3
E15 : T  28.29c W  26.25c S   2.04c ( +2.7) LCB   52.24c P  1.55% WF  9.56% PSV       9 N       9  --  E15 F15 B13 B5 B6 C7
S17 : T  39.40c W  36.13c S   3.26c ( +4.4) LCB   81.55c P  4.30% WF  7.93% PSV       8 N       8  --  S17 S16 B13 F12
S5  : T  40.88c W  37.40c S   3.48c ( +4.8) LCB   70.97c P  4.07% WF  7.72% PSV       8 N       8  --  S5 Q5 P4 B12 O7 N5
N8  : T  35.00c W  31.90c S   3.10c ( +4.4) LCB  110.95c P  1.30% WF  8.78% PSV       4 N       4  --  N8 M4 P8
O7  : T  45.12c W  41.01c S   4.11c ( +5.8) LCB  167.06c P  2.37% WF  7.71% PSV       3 N       3  --  O7 N4 O3
M9  : T  36.93c W  34.18c S   2.75c ( +3.5) LCB  349.94c P  0.74% WF  8.64% PSV       3 N       3  --  M9 O9 S5
S16 : T  37.45c W  34.61c S   2.84c ( +3.7) LCB  280.00c P  1.05% WF  8.71% PSV       2 N       2  --  S16 S15

2019-06-21 13:07:50+0800: = B13
2019-06-21 13:08:19+0800: Controller: play W N4
2019-06-21 13:08:19+0800: = 
2019-06-21 13:08:19+0800: Controller: genmove b
2019-06-21 13:08:53+0800: MoveNum: 96 HASH: 90CD293F594E8971F188A2787AA416DF
   A B C D E F G H J K L M N O P Q R S T
19 . . . . . . . . . . . X . . . . . . .
18 . . . . . . . . X O X . X X . . . . .
17 . . X X X X X . X O X X O O . X X . .
16 . . O O O O X O X O O O X . O O O . .
15 . . . . . . O X . O X X X . . . . . .
14 . . O . X O1O X X X O X O O . . . . .
13 . X2. . X O X O . . O X X O . . . . .
12 . . . X . . . O X X O . O O . . . . .
11 . . . . X O . . . . . O . . . . . . .
10 . . . X O O . . . . . . . . . . . . .
 9 . . . X . . . . . . . . . . . . . . .
 8 . . . . . . . . . . . . . . . . . . .
 7 . . . . . . . . . . . . . . . . . . .
 6 . . O X . . . . . . . . . . O . O O .
 5 . . X . . X . . . . . . . O . . X . .
 4 . . . X . X O . . . . @ O3X . X . . .
 3 . O X O O O . . . . . . X . . . . . .
 2 . . O . . . . . . . . . . . . . . . .
 1 . . . . . . . . . . . . . . . . . . .

How to improve KataGo at high handicap?

There have been discussions about why KataGo is not so strong at high handicap (5 stones and above). Typically, KG will immediately invade the 4 corners, then somewhat poorly reduce black's big moyo, leading to an easy win for Black without much complication. Human players would like to see KG keep the game more 'open', with more uncertainty, so that humans have more complex fights, unstable groups, etc. to deal with.

  • You have explained, @lightvector, that self-play games include only a small fraction of handicap games, and only at low handicap (max 2?). Self-playing more handicap games, at higher handicaps, might possibly help.

  • Re §6.2.2 of your paper, 'Game Variety and Exploration': you explain that a fraction of games (5%) are deliberately branched after choosing a 'best among few random moves', and after komi has been adjusted to balance the game. Handicap games, at least in their early phase, are characterized by strong imbalance. At high imbalance, utility is mostly driven by the score component, I assume. Having more self-play games under strong (but not too strong) imbalance would teach the policy to prefer moves mostly based on which moves reduce the score gap, even when they fail to improve the position value. More of these adversarial positions in the training might help the policy to depart from its 'zero bot' approach of trading corners, maybe...

  • In the same line, handicap games in the training might need to be played with more imbalance (no komi full compensation). But I lack the details on how they are actually played, so this remark might be moot.

  • Human comments somewhat revolve around how the bot should keep the game complex and unstable. At the search level (match games, not self-play a priori), I was wondering how to tweak the utility function so that search would favor 'uncertainty'. The uncertainty that would help is not just variance. At the beginning of a high handicap game, maybe there is not much variance, at least in the value: it's a loss!
    Would it be possible to 'anneal' (through exponentiation e.g.) the ownership head output, so as to discount sure territory vs fuzzy territory? That might bend, through the search process, the initial inclination of the policy head toward corner invasion, for instance. And more generally, favor unsettled situations vs settled ones. That annealing parameter might be made dependent on the value, so that this effect fades when white catches up.

Just fuzzy ideas offered with no data of course, but intended to launch the debate... :-)

compiling on Windows

Will update Windows build here.

Complete package: https://drive.google.com/file/d/1bdIlVDJ3x6FZtX5fmuG6wNbb57GFU8S0/view (6/27/2019, use new cudnn DLL)


Original content:

I tried to compile the GTP engine on Windows but failed:

>------ Build started: Project: CMakeLists, Configuration: RelWithDebInfo ------
  [1/53] "e:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.14.26428\bin\HostX64\x64\cl.exe"  /nologo /TP -DUSE_CUDA_BACKEND -IC:\KataGo\cpp\external -IC:\KataGo\cpp\external\tclap-1.2.1\include -IE:\zlib\include -IE:\CUDA\include /DWIN32 /D_WINDOWS /W3 /GR /EHsc /MD /Zi /O2 /Ob1 /DNDEBUG   -std:c++14 /showIncludes /FoCMakeFiles\main.dir\core\elo.cpp.obj /FdCMakeFiles\main.dir\ /FS -c C:\KataGo\cpp\core\elo.cpp
  FAILED: CMakeFiles/main.dir/core/elo.cpp.obj 
  "e:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.14.26428\bin\HostX64\x64\cl.exe"  /nologo /TP -DUSE_CUDA_BACKEND -IC:\KataGo\cpp\external -IC:\KataGo\cpp\external\tclap-1.2.1\include -IE:\zlib\include -IE:\CUDA\include /DWIN32 /D_WINDOWS /W3 /GR /EHsc /MD /Zi /O2 /Ob1 /DNDEBUG   -std:c++14 /showIncludes /FoCMakeFiles\main.dir\core\elo.cpp.obj /FdCMakeFiles\main.dir\ /FS -c C:\KataGo\cpp\core\elo.cpp
c:\katago\cpp\core\global.h(32): error C3646: '__attribute__': unknown override specifier
...

When I ran code analysis, I was told:
C:/KataGo/cpp/core/global.cpp(14): fatal error C1083: Cannot open include file: 'dirent.h': No such file or directory
and when I looked into core/global.cpp, I found
#include <dirent.h> //TODO this is not portable to windows, use C++17 filesystem library when C++17 is available

Is this the only obstruction to porting to Windows? (Seems it can work under Windows though: https://github.com/tronkko/dirent)

I experienced some problems with the Git portion in CMakeLists.txt, so I removed it (I'm not sure what it is for), but some source files require program/gitinfo.h; would it work if I rename gitinfotemplate.h to gitinfo.h?

Remaining game length as auxiliary prediction target

Did you consider or even already experiment with having the network predict the remaining number of moves in a game? Game length is admittedly kind of a dubious concept in the game of go (depending on the ruleset), but it's otherwise universal enough for it to be nice to find out whether predicting it might prove (significantly) beneficial. As a bonus, the info would be valuable for use in the time management (especially if the predictions are further divided by outcome (win/loss/draw)).

maximum memory usage

Leela has a way to limit the memory usage of the engine, i.e.:
lz-setoption name maximum memory use (mib) value 1024

I was wondering if KataGo also has some way to define a memory usage limit. It's especially useful when you use multiple bots on the same server or use the bot for hours without interruption.

( I went through the settings in the example config folder, but perhaps overlooked something. )

Missing GTP command "undo"

The GTP command "undo" seems missing. It is widely used in GUIs (e.g. Lizzie).

$ git show --quiet --pretty=format:'%h%d'
fb23971 (HEAD -> opencl, origin/opencl)
$ git grep undo -- gtp.cpp
$ ./main gtp -model ../../g104-b6c96-s97778688-d23397744/model.txt.gz -config configs/gtp_example.cfg
play b d4
=

undo
? unknown command

Paper: questions and ideas

First of all, thank you for the experiments and the release of a paper with dazzling variety of methods and details!

This fact is ambiguous in DeepMind’s published papers due to ambiguity about whether a [-1,1] or [0,1] scale was used, and was only clarified much later by an individual researcher in a forum post here: http://talkchess.com/forum3/viewtopic.php?f=2&t=69175&start=70#p781765

If you are referring to Matthew Lai, he's a research engineer at DeepMind involved in the development of AlphaZero ...

(image of a formula from the paper)
Is a square root missing here?

Regarding one network for different sizes of boards: I am curious how your net performs against the 13x13 net converted from a 19x19 LZ 40b net (available at leela-zero/leela-zero#2240). Though it hasn't been trained on 13x13 games, it's currently the strongest 13x13 net available, and said to be of superhuman strength. Your 15b net achieves potentially superhuman strength on 19x19 as estimated in the paper, and though it hasn't been trained on as many 13x13 games, 13x13 should be easier to master due to the smaller board size, so I think it would be a good matchup.

I am curious what you find about fair komi on even-sized boards. Are they usually closer to 4 or 6? (I guess the latter.)

The same method should allow the net to play on rectangular boards or disjoint unions of boards. If the two boards are identical the second player has a perfect mirroring strategy, which can only be broken by superko rules, I think. (I can imagine a double ko situation.)

Speeding up runs even faster?

leela-zero/leela-zero#2282

This patch lets you select moves with very few playouts, as long as they have a good winrate with SOME visits (10% of visits and the maximum lower confidence bound).

On higher visits it seems like a wash, because Leela Zero prefers to visit the higher-winrate move anyway. But on lower visits it lets you find a good move quickly! That means you can move up to a bigger network size faster - before, you'd need to learn on say 1600 visits, but now maybe 300 visits can do the job while still incrementally getting stronger.

CPU-only version

I would like to use KataGo since it has VERY attractive features: score/ownership estimation, variable komi, and handicap games. But I do not have a GPU, and I wish for a CPU-only version.

Even if a CPU-only version is too slow for analysis in general, it must be useful for ownership estimation at least. I implemented ownership/endstate visualization based on a modified Leela Zero by ihavnoid, and found that it is so much fun. The change of ownership is interesting in particular. It reveals side effects of moves that I had not recognized well. I hope to watch KataGo's estimation in the same way.

lcb in version 1.1

The output that I get with version 1.1 is:

(screenshot of the search output)

Shouldn't there be an LCB value in this format?
Or is that something that works differently from how Leela Zero does it?

Thank you in advance.

(PS: what is the utility value?)

Plans for the future?

Sorry if opening an issue is not appropriate, but I didn't see an alternative (i.e. a discord channel?).

First of all, huge congratulations, KataGo is an excellent program, very strong on 13x13 and 19x19, providing a lot of value-added Go features: handicap play, no weak play in endgames, variable komi, variable board sizes, extremely fast and efficient improvement...
That's just AMAZING, congratulations :-))). LZ130 strength in 30 GPU-weeks is incredible: with current LZ computing power, that would take ~1 week where LZ took ~6 months!

Then a question: what are the next steps?
More precisely:

  • Is it possible to continue training it, and will you give us the opportunity to contribute (e.g. through an LZ-like approach, with an "autogtp" tool, including a Windows version ;-)?
  • What about larger nets (20, 30, or possibly 40 blocks)?
  • Other ideas (beyond ones you already hinted at: rectangular board sizes, Japanese rules, ...)?

I'd be more than happy to switch my hardware from LZ and dedicate it to training KataGo full time (on my Windows setup)...

final_score outputs only W+

I've been experiencing the following problem with both Sabaki and Lizzie: the final_score command always outputs W+ (e.g. W+7.5) even if Black has a lead / is winning according to the winrate. The points can differ, but the lead is always for White.

OS: Ubuntu 18.04
