
PhoenixGo

PhoenixGo is a Go AI program which implements the AlphaGo Zero paper "Mastering the game of Go without human knowledge". It is also known as "BensonDarr" and "金毛测试" in FoxGo, "cronus" in CGOS, and is the champion of the World AI Go Tournament 2018 held in Fuzhou, China.

If you use PhoenixGo in your project, please consider mentioning it in your README.

If you use PhoenixGo in your research, please consider citing the library as follows:

@misc{PhoenixGo2018,
  author = {Qinsong Zeng and Jianchang Zhang and Zhanpeng Zeng and Yongsheng Li and Ming Chen and Sifan Liu},
  title = {PhoenixGo},
  year = {2018},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/Tencent/PhoenixGo}}
}

Building and Running

On Linux

Requirements

  • GCC with C++11 support
  • Bazel (0.19.2 is known-good)
  • (Optional) CUDA and cuDNN for GPU support
  • (Optional) TensorRT (for accelerating computation on GPU, 3.0.4 is known-good)

The following environments have also been tested by independent contributors: here. Other versions may work, but they have not been tested (especially bazel).

Download and Install Bazel

Before starting, you need to download and install bazel; see here.

For PhoenixGo, bazel 0.19.2 is known-good; read Requirements for details.

If you have trouble installing or starting bazel, you may want to try the all-in-one command line for easier building instead; see the FAQ question.

Building PhoenixGo with Bazel

Clone the repository and configure the building:

$ git clone https://github.com/Tencent/PhoenixGo.git
$ cd PhoenixGo
$ ./configure

./configure starts the bazel configuration: it asks where CUDA and TensorRT are installed; specify the paths if needed.

Then build with bazel:

$ bazel build //mcts:mcts_main

Dependencies such as TensorFlow will be downloaded automatically. The build may take a long time.

Recommendation: the bazel build uses a lot of RAM. If your build environment is short on RAM, you may need to restart your computer and exit other running programs to free as much RAM as possible.
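
If memory is tight, one possible mitigation (a sketch; exact flag support depends on your bazel version) is to limit the number of parallel compile jobs:

$ bazel build --jobs=2 //mcts:mcts_main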

Running PhoenixGo

Download and extract the trained network:

$ wget https://github.com/Tencent/PhoenixGo/releases/download/trained-network-20b-v1/trained-network-20b-v1.tar.gz
$ tar xvzf trained-network-20b-v1.tar.gz

The PhoenixGo engine supports GTP (Go Text Protocol), which means it can be used with a GUI with GTP capability, such as Sabaki. It can also run on command-line GTP server tools like gtp2ogs.

Note that PhoenixGo does not support all GTP commands; see the FAQ question.

There are two ways to run the PhoenixGo engine:

1) start.sh: easy to use

Run the engine: scripts/start.sh

start.sh will automatically detect the number of GPUs, run mcts_main with the proper config file, and write log files to the log directory.

You can also use a customized config file (.conf) by running scripts/start.sh {config_path}. If you want to do that, see #configure-guide.
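
For example, to run with one of the bundled config files:

$ scripts/start.sh etc/mcts_1gpu.conf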

2) mcts_main: full control

If you want full control over all the options of mcts_main (for example to change the log destination, or if start.sh is not compatible with your specific use), you can run bazel-bin/mcts/mcts_main directly instead.

For typical usage, these command line options should be added:

  • --gtp to enable GTP mode
  • --config_path=replace/with/path/to/your/config/file to specify the path to your config file
  • you also need to edit your config file (.conf) and manually add the full path to ckpt, see the FAQ question. You can also change other options in the config file, see #configure-guide.
  • for other command line options, see #command-line-options for details, or run ./mcts_main --help. A copy of the --help output is provided for your convenience here

For example:

$ bazel-bin/mcts/mcts_main --gtp --config_path=etc/mcts_1gpu.conf --logtostderr --v=0
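
Once mcts_main is running in GTP mode, it reads GTP commands on stdin and writes responses to stdout. A minimal session might look like this (the moves shown are only illustrative):

genmove b
= Q16

play w d4
=

quit
=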

(Optional): Distributed mode

PhoenixGo supports running with distributed workers if there are GPUs on different machines.

Build the distributed worker:

$ bazel build //dist:dist_zero_model_server

Run dist_zero_model_server on each distributed worker, one for each GPU:

$ CUDA_VISIBLE_DEVICES={gpu} bazel-bin/dist/dist_zero_model_server --server_address="0.0.0.0:{port}" --logtostderr

Fill in the ip:port of each worker in the config file (etc/mcts_dist.conf is an example config for 32 workers), then run the distributed master:

$ scripts/start.sh etc/mcts_dist.conf

On macOS

Note: TensorFlow has not provided GPU support on macOS since 1.2.0, so you can only run on CPU.

Use Pre-built Binary

Download and extract CPU-only version (macOS)

Follow the document included in the archive: using_phoenixgo_on_mac.pdf

Building from Source

Same as Linux.

On Windows

Recommendation: see the FAQ question to avoid syntax errors in the config file and command line options on Windows.

Use Pre-built Binary

GPU version:

The GPU version is much faster, but works only with compatible NVIDIA GPUs. It supports the following environment:

  • CUDA 9.0 only
  • cuDNN 7.1.x (any x) or lower, for CUDA 9.0
  • no AVX, AVX2, or AVX512 instructions are supported in this release (so it is currently much slower than the Linux version)
  • there is no TensorRT support on Windows

Download and extract GPU version (Windows)

Then follow the document included in the archive: how to install phoenixgo.pdf

Note: to support features such as CUDA 10.0 or AVX512, you can make your own Windows build; see #79.

CPU-only version:

If your GPU is not compatible, or if you don't want to use a GPU, you can download the CPU-only version (Windows).

Then follow the document included in the archive: how to install phoenixgo.pdf

Configure Guide

Here are some important options in the config file:

  • num_eval_threads: should equal the number of GPUs
  • num_search_threads: should be slightly larger than num_eval_threads * eval_batch_size
  • timeout_ms_per_step: how much time is used for each move
  • max_simulations_per_step: how many simulations (also called playouts) are run for each move
  • gpu_list: which GPUs to use, separated by commas
  • model_config -> train_dir: directory where the trained network is stored
  • model_config -> checkpoint_path: which checkpoint to use; read from train_dir/checkpoint if not set
  • model_config -> enable_tensorrt: whether to use TensorRT
  • model_config -> tensorrt_model_path: which TensorRT model to use, if enable_tensorrt is set
  • max_search_tree_size: the maximum number of tree nodes; change it depending on memory size
  • max_children_per_node: the maximum number of children per node; change it depending on memory size
  • enable_background_search: ponder during the opponent's time
  • early_stop: genmove may return before timeout_ms_per_step if the result would not change any more
  • unstable_overtime: think for timeout_ms_per_step * time_factor longer if the result is still unstable
  • behind_overtime: think for timeout_ms_per_step * time_factor longer if the winrate is less than act_threshold
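
As a point of reference, a minimal single-GPU config built from these options might look like the sketch below; the values are illustrative, so treat the files under etc/ and mcts/mcts_config.proto as the authoritative reference:

num_eval_threads: 1
num_search_threads: 8
eval_batch_size: 4
timeout_ms_per_step: 30000
max_simulations_per_step: 0
max_children_per_node: 64
max_search_tree_size: 400000000
gpu_list: "0"
enable_background_search: 1
model_config {
    train_dir: "ckpt"
}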

Options for distributed mode:

  • enable_dist: enable distributed mode
  • dist_svr_addrs: ip:port of the distributed workers; multiple lines, one ip:port per line
  • dist_config -> timeout_ms: RPC timeout
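
A sketch of how these options might appear in a config file follows; the addresses and timeout are illustrative, and etc/mcts_dist.conf remains the authoritative example:

enable_dist: 1
dist_svr_addrs: "192.0.2.10:5000"
dist_svr_addrs: "192.0.2.11:5000"
dist_config {
    timeout_ms: 3000
}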

Options for async distributed mode:

Async mode is used when there is a huge number of distributed workers (more than 200), which would require too many eval threads and search threads in sync mode. etc/mcts_async_dist.conf is an example config for 256 workers.

  • enable_async: enable async mode
  • enable_dist: enable distributed mode
  • dist_svr_addrs: multiple lines, a comma-separated list of ip:port on each line
  • num_eval_threads: should equal the number of dist_svr_addrs lines
  • eval_task_queue_size: tune it depending on the number of distributed workers
  • num_search_threads: tune it depending on the number of distributed workers
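
For illustration only, an async setup with two worker groups might be sketched like this (see etc/mcts_async_dist.conf for a real 256-worker config):

enable_async: 1
enable_dist: 1
dist_svr_addrs: "192.0.2.10:5000,192.0.2.11:5000"
dist_svr_addrs: "192.0.2.12:5000,192.0.2.13:5000"
num_eval_threads: 2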

Read mcts/mcts_config.proto for more config options.

Command Line Options

mcts_main accepts options from the command line:

  • --config_path: path of the config file
  • --gtp: run as a GTP engine; if disabled, generate the next move only
  • --init_moves: initial moves on the Go board; for example usage, see the FAQ question
  • --gpu_list: override gpu_list in the config file
  • --listen_port: works with --gtp, runs the GTP engine on a TCP port
  • --allow_ip: works with --listen_port, list of client IPs allowed to connect
  • --fork_per_request: works with --listen_port, whether to fork for each request
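
For example, to serve GTP over TCP instead of stdin/stdout (the port and allowed IP below are illustrative):

$ bazel-bin/mcts/mcts_main --gtp --config_path=etc/mcts_1gpu.conf --listen_port=5001 --allow_ip=127.0.0.1 --logtostderr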

Glog options are also supported:

  • --logtostderr: log messages to stderr
  • --log_dir: log to files in this directory
  • --minloglevel: log level, 0 - INFO, 1 - WARNING, 2 - ERROR
  • --v: verbose logging; --v=1 turns on some debug logging, --v=0 turns it off
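
For example, to write log files under the log directory with some debug output enabled:

$ bazel-bin/mcts/mcts_main --gtp --config_path=etc/mcts_1gpu.conf --log_dir=log --v=1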

Run mcts_main --help for more command line options. A copy of the --help output is provided for your convenience here.

Analysis

For analysis purposes, an easy way to display the PV (the main variation) is --logtostderr --v=1, which displays the winrate of the main move path and the continuation of moves analyzed; see the FAQ question for details.
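
For example, with one of the bundled config files:

$ bazel-bin/mcts/mcts_main --gtp --config_path=etc/mcts_1gpu.conf --logtostderr --v=1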

It is also possible to analyse .sgf files using analysis tools such as:

  • GoReviewPartner: an automated tool to analyse and/or review one or many .sgf files (saved as .rsgf files). It supports PhoenixGo and other bots. See the FAQ question for details.

FAQ

The FAQ contains a lot of useful and important information, as well as the most common problems and errors and how to fix them.

Please take time to read the FAQ.


phoenixgo's Issues

It's too hard to get TensorRT enabled on Ubuntu.

I've installed PhoenixGo on several PCs with different OSes and different GPU cards, but every time it is a blind struggle and I had to try different settings to make TensorRT runnable. Sometimes the build is fine, but it crashes right after initialization. For now, I can run PhoenixGo with TensorRT enabled on Ubuntu 1701 and 1804, with only TensorRT 3.0.4 installed from the tar file, using GeForce 10xx GPUs. These past two days I have been trying to make it work on an Ubuntu 1804 PC with a Titan V GPU. I failed, and I think the newest code has some problem: when I built code from one month ago it passed the build but crashed at runtime, while the newest code always says some .o files were not created and fails at the link stage, no matter which GCC I use or which drivers I reinstall.
Although I have managed to make TensorRT work, I still wish the author could give general instructions about the software configuration. And could anyone tell me which settings make TensorRT run?

New image upgrade

Hi, would you be interested in having a new logo and banner?

I'm having a few ideas that could work for you; can I send you some drafts?

Compile error when building on Mac, can you take a look at where the problem is?

(virtual) ➜ PhoenixGo git:(master) bazel build //mcts:mcts_main
WARNING: /private/var/tmp/_bazel_xutao/ee4217f9ef26aaefbb212a4e6228709a/external/protobuf_archive/WORKSPACE:1: Workspace name in /private/var/tmp/_bazel_xutao/ee4217f9ef26aaefbb212a4e6228709a/external/protobuf_archive/WORKSPACE (@com_google_protobuf) does not match the name given in the repository's definition (@protobuf_archive); this will cause a build error in future versions
WARNING: /private/var/tmp/_bazel_xutao/ee4217f9ef26aaefbb212a4e6228709a/external/org_tensorflow/tensorflow/core/BUILD:1955:1: in includes attribute of cc_library rule @org_tensorflow//tensorflow/core:framework_headers_lib: '../../../../external/nsync/public' resolves to 'external/nsync/public' not below the relative path of its package 'external/org_tensorflow/tensorflow/core'. This will be an error in the future. Since this rule was created by the macro 'cc_header_only_library', the error might have been caused by the macro implementation in /private/var/tmp/_bazel_xutao/ee4217f9ef26aaefbb212a4e6228709a/external/org_tensorflow/tensorflow/tensorflow.bzl:1179:30
WARNING: /private/var/tmp/_bazel_xutao/ee4217f9ef26aaefbb212a4e6228709a/external/grpc/WORKSPACE:1: Workspace name in /private/var/tmp/_bazel_xutao/ee4217f9ef26aaefbb212a4e6228709a/external/grpc/WORKSPACE (@com_github_grpc_grpc) does not match the name given in the repository's definition (@grpc); this will cause a build error in future versions
INFO: Analysed target //mcts:mcts_main (1 packages loaded).
INFO: Found 1 target...
ERROR: /private/var/tmp/_bazel_xutao/ee4217f9ef26aaefbb212a4e6228709a/external/jpeg/BUILD:269:1: C++ compilation of rule '@jpeg//:simd_armv7a' failed (Exit 1)
error: unknown target CPU 'armv7-a'
Target //mcts:mcts_main failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 0.848s, Critical Path: 0.14s
INFO: 0 processes.
FAILED: Build did NOT complete successfully

Compile error

When compiling PhoenixGo on CentOS 7, the following error appears. What is the cause?
[root@VM_29_117_centos PhoenixGo]# bazel build //mcts:mcts_main
DEBUG: /root/.cache/bazel/_bazel_root/e348275a4d1fb6b530556d22a2214eb7/external/bazel_tools/tools/build_defs/repo/http.bzl:46:9: patch file //third_party/tensorflow:tensorflow.patch, path /root/PhoenixGo/third_party/tensorflow/tensorflow.patch
ERROR: error loading package '': Encountered error while reading extension file 'tensorflow/workspace.bzl': no such package '@org_tensorflow//tensorflow': java.io.IOException: thread interrupted
ERROR: error loading package '': Encountered error while reading extension file 'tensorflow/workspace.bzl': no such package '@org_tensorflow//tensorflow': java.io.IOException: thread interrupted
INFO: Elapsed time: 109.216s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded)

Strength + disable resign below 5%

What can I change to disable early resignation in handicap games and get maximum strength?
My current settings are for 2 GPUs:
num_eval_threads: 2
num_search_threads: 12
max_children_per_node: 128
max_search_tree_size: 800000000
timeout_ms_per_step: 60000
max_simulations_per_step: 0
eval_batch_size: 4
eval_wait_batch_timeout_us: 100
model_config {
train_dir: "ckpt"
}
gpu_list: "0,1"
c_puct: 2.5
virtual_loss: 1.0
enable_resign: 1
v_resign: -0.9
enable_dirichlet_noise: 0
dirichlet_noise_alpha: 0.03
dirichlet_noise_ratio: 0.25
monitor_log_every_ms: 0
get_best_move_mode: 0
enable_background_search: 0
enable_policy_temperature: 0
policy_temperature: 0.67
inherit_default_act: 1
early_stop {
enable: 1
check_every_ms: 100
sims_factor: 1.0
sims_threshold: 2000
}
unstable_overtime {
enable: 1
time_factor: 0.3
}
behind_overtime {
enable: 1
act_threshold: 0.0
time_factor: 0.3
}
time_control {
enable: 1
c_denom: 20
c_maxply: 40
reserved_time: 1.0
}

mac support?

The build stops with an error on macOS.

bazel build //mcts:mcts_main
DEBUG: /private/var/tmp/_bazel_tk_x/9c9fc22902407ae0d7f75c5052e8b3a8/external/bazel_tools/tools/build_defs/repo/http.bzl:51:5: patch file //third_party/tensorflow:tensorflow.patch, path /Users/tk_x/workspace/myproject/trunk/go/PhoenixGo/third_party/tensorflow/tensorflow.patch
WARNING: /private/var/tmp/_bazel_tk_x/9c9fc22902407ae0d7f75c5052e8b3a8/external/protobuf_archive/WORKSPACE:1: Workspace name in /private/var/tmp/_bazel_tk_x/9c9fc22902407ae0d7f75c5052e8b3a8/external/protobuf_archive/WORKSPACE (@com_google_protobuf) does not match the name given in the repository's definition (@protobuf_archive); this will cause a build error in future versions
DEBUG: /private/var/tmp/_bazel_tk_x/9c9fc22902407ae0d7f75c5052e8b3a8/external/bazel_tools/tools/build_defs/repo/http.bzl:51:5: patch file //third_party/glog:glog.patch, path /Users/tk_x/workspace/myproject/trunk/go/PhoenixGo/third_party/glog/glog.patch
WARNING: /private/var/tmp/_bazel_tk_x/9c9fc22902407ae0d7f75c5052e8b3a8/external/grpc/WORKSPACE:1: Workspace name in /private/var/tmp/_bazel_tk_x/9c9fc22902407ae0d7f75c5052e8b3a8/external/grpc/WORKSPACE (@com_github_grpc_grpc) does not match the name given in the repository's definition (@grpc); this will cause a build error in future versions
WARNING: /private/var/tmp/_bazel_tk_x/9c9fc22902407ae0d7f75c5052e8b3a8/external/org_tensorflow/tensorflow/core/BUILD:1955:1: in includes attribute of cc_library rule @org_tensorflow//tensorflow/core:framework_headers_lib: '../../../../external/nsync/public' resolves to 'external/nsync/public' not below the relative path of its package 'external/org_tensorflow/tensorflow/core'. This will be an error in the future. Since this rule was created by the macro 'cc_header_only_library', the error might have been caused by the macro implementation in /private/var/tmp/_bazel_tk_x/9c9fc22902407ae0d7f75c5052e8b3a8/external/org_tensorflow/tensorflow/tensorflow.bzl:1179:30
INFO: Analysed target //mcts:mcts_main (0 packages loaded).
INFO: Found 1 target...
ERROR: /private/var/tmp/_bazel_tk_x/9c9fc22902407ae0d7f75c5052e8b3a8/external/jpeg/BUILD:269:1: C++ compilation of rule '@jpeg//:simd_armv7a' failed (Exit 1)
error: unknown target CPU 'armv7-a'
Target //mcts:mcts_main failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 0.653s, Critical Path: 0.17s
INFO: 0 processes.

does not match the name given in the repository's definition

DEBUG: /home/ken/.cache/bazel/_bazel_root/d55109668bdfdb750ffd86a7a38971f1/external/bazel_tools/tools/build_defs/repo/http.bzl:63:5: patch file //third_party/tensorflow:tensorflow.patch, path /home/ken/PhoenixGo/third_party/tensorflow/tensorflow.patch
WARNING: /home/ken/.cache/bazel/_bazel_root/d55109668bdfdb750ffd86a7a38971f1/external/protobuf_archive/WORKSPACE:1: Workspace name in /home/ken/.cache/bazel/_bazel_root/d55109668bdfdb750ffd86a7a38971f1/external/protobuf_archive/WORKSPACE (@com_google_protobuf) does not match the name given in the repository's definition (@protobuf_archive); this will cause a build error in future versions

What happened with this 3096?

My Mac is a 2.6 GHz Intel Core i7 with 8 GB DDR3 and Intel HD Graphics 4000 1536 MB.
When I run start.sh, it shows the following problem:

start.sh: line 4: 3096 Illegal instruction: 4 ./bin/mcts_main --config_path=etc/mcts_1gpu_notensorrt.conf --gtp --v=1 --log_dir=log

please help to solve it.

About the program loop and high CPU usage

When running on Windows, I found that CPU usage reaches 100% even when no operation is being performed.
After checking the program's loop code, I found some problems with the while loop in the GTPServing function.
std::getline returns the number of bytes read on success and -1 on failure, i.e. it is truthy in both cases (reference: https://baike.baidu.com/item/getline%E5%87%BD%E6%95%B0/3932106?fr=aladdin).
So the problem appears: when there is no input, the program does not block; it keeps polling without yielding CPU time, which causes a lot of pointless CPU usage.

"./mcts/mcts_main.cc"

void GTPServing(std::istream &in, std::ostream &out) {
    std::string cmd, output;
    ...
    // the problem is here
    while (std::getline(in, cmd)) {
        ...
        std::tie(succ, output) = GTPExecute(*engine, cmd);
        ...
        if (cmd.find("quit") != std::string::npos) {
            break;
        }
    }
    LOG(WARNING) << "exiting gtp serving";
}

How to compile a static executable for other Linux systems?

Following the compilation guide produces an executable.
The executable "mcts_main" is not in the PhoenixGo folder.
If I copy it into the PhoenixGo folder and execute it, an error occurs.

./mcts_main: error while loading shared libraries: libtensorflow_framework.so: cannot open shared object file: No such file or directory

But I need to run it on another Linux system without these libraries.
What can I do?
Is there any way to build a static executable for a system that doesn't have these libraries installed?

Compiling on Windows

When compiling PhoenixGo with Visual Studio 2017, how are the .pb files generated? Could you also provide build instructions for Windows?

Error C1083: Cannot open source file: "mcts\mcts_config.pb.cc": No such file or directory C:\Users\ligan\Source\Repos\PhoenixGo\mcts_main.vcxproj C:\Users\ligan\Source\Repos\PhoenixGo\c1xx 1
Error C1083: Cannot open include file: "mcts/mcts_config.pb.h": No such file or directory C:\Users\ligan\Source\Repos\PhoenixGo\mcts_main.vcxproj c:\users\ligan\source\repos\phoenixgo\mcts\mcts_config.h 23
Error C1083: Cannot open include file: "model/model_config.pb.h": No such file or directory C:\Users\ligan\Source\Repos\PhoenixGo\mcts_main.vcxproj c:\users\ligan\source\repos\phoenixgo\model\zero_model_base.h 25
Error C1083: Cannot open source file: "dist\dist_zero_model.pb.cc": No such file or directory C:\Users\ligan\Source\Repos\PhoenixGo\mcts_main.vcxproj C:\Users\ligan\Source\Repos\PhoenixGo\c1xx 1
Error C1083: Cannot open source file: "dist\dist_zero_model.grpc.pb.cc": No such file or directory C:\Users\ligan\Source\Repos\PhoenixGo\mcts_main.vcxproj C:\Users\ligan\Source\Repos\PhoenixGo\c1xx 1
Error C1083: Cannot open source file: "dist\dist_config.pb.cc": No such file or directory C:\Users\ligan\Source\Repos\PhoenixGo\mcts_main.vcxproj C:\Users\ligan\Source\Repos\PhoenixGo\c1xx 1
Error C1083: Cannot open include file: "model/model_config.pb.h": No such file or directory C:\Users\ligan\Source\Repos\PhoenixGo\mcts_main.vcxproj c:\users\ligan\source\repos\phoenixgo\model\zero_model_base.h 25

Can't install PhoenixGo on osx

os10.13.4
iMac mid2011

Here is my terminal output; it looks like a failure.

Macde-iMac:PhoenixGo mac$ sh -x start.sh
++ dirname start.sh

+ cd .
+ export LD_LIBRARY_PATH=:/Users/mac/Downloads/PhoenixGo/solib
+ LD_LIBRARY_PATH=:/Users/mac/Downloads/PhoenixGo/solib
+ ./bin/mcts_main --config_path=etc/mcts_1gpu_notensorrt.conf --gtp --v=1 --log_dir=log
start.sh: line 4: 44510 Illegal instruction: 4 ./bin/mcts_main --config_path=etc/mcts_1gpu_notensorrt.conf --gtp --v=1 --log_dir=log
Macde-iMac:PhoenixGo mac$

How to set the config file to make PhoenixGo stronger

With the default config, PhoenixGo lost 4:0 to LZ 3ef82227.
My command lines (cpu.conf is the same as mcts_cpu.conf):
Black command: D:\tool\valid14\leelaz.exe -g -w D:\tool\valid14\3ef8
White command: D:\tool\PhoenixGo\bin\mcts_main.exe --config_path=D:\tool\PhoenixGo\etc\cpu.conf --gtp --log_dir=D:\tool\PhoenixGo\log --v=1
Black version: 0.15
White version: 1.0

How to solve it

~/PhoenixGo$ bazel-bin/mcts/mcts_main --config_path=etc/mcts_1gpu_notensorrt.conf --gtp
2018-05-11 19:07:26.504334: F external/org_tensorflow/tensorflow/core/common_runtime/device_factory.cc:77] Duplicate registration of device factory for type GPU with the same priority 210

Here is my environment.
/PhoenixGo$ uname -a
Linux abcdefg-MS-7A74 4.13.0-39-generic #44~16.04.1-Ubuntu SMP Thu Apr 5 16:43:10 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

~/PhoenixGo$ ls ckpt/
checkpoint zero.ckpt-20b-v1.data-00000-of-00001 zero.ckpt-20b-v1.FP32.PLAN.step
meta_graph zero.ckpt-20b-v1.FP32.PLAN zero.ckpt-20b-v1.index

~/PhoenixGo$ more etc/mcts_1gpu_notensorrt.conf
num_eval_threads: 1
num_search_threads: 8
max_children_per_node: 64
max_search_tree_size: 400000000
timeout_ms_per_step: 30000
max_simulations_per_step: 0
eval_batch_size: 4
eval_wait_batch_timeout_us: 100
model_config {
train_dir: "ckpt"
}
gpu_list: "0"
c_puct: 2.5
virtual_loss: 1.0
enable_resign: 1
v_resign: -0.9
enable_dirichlet_noise: 0
dirichlet_noise_alpha: 0.03
dirichlet_noise_ratio: 0.25
monitor_log_every_ms: 0
get_best_move_mode: 0
enable_background_search: 1
enable_policy_temperature: 0
policy_temperature: 0.67
inherit_default_act: 1
early_stop {
enable: 1
check_every_ms: 100
sims_factor: 1.0
sims_threshold: 2000
}
unstable_overtime {
enable: 1
time_factor: 0.3
}
behind_overtime {
enable: 1
act_threshold: 0.0
time_factor: 0.3
}

What's the performance difference with TensorRT enabled?

NVIDIA says the TensorRT GIE greatly improves inference performance, but I've never seen a report about the improvement when TensorRT is applied on GTX 10xx GPUs, especially the GTX 1080 Ti. Could anyone tell me how much performance PhoenixGo gains with TensorRT enabled versus disabled on a 1080 Ti?

Errors occurred when building origin master's .sln by VS2015

I have tried to build the .sln with VS2015, but the build produces the following errors:
1>------ Rebuild All started: Project: mcts_main, Configuration: Release x64 ------
1> Performing Protoc Build Tools
1> Performing Protoc Build Tools
1> Performing Protoc Build Tools
1> Performing Protoc Build Tools
1> Performing Protoc Build Tools
1> go_comm.cc
1> go_state.cc
1> str_utils.cc
1> thread_conductor.cc
1> timer.cc
1> wait_group.cc
1> async_dist_zero_model_client.cc
1> dist_config.pb.cc
1> dist_zero_model.grpc.pb.cc
1> dist_zero_model.pb.cc
1> dist_zero_model_client.cc
1> leaky_bucket.cc
1> byo_yomi_timer.cc
1> mcts_config.cc
1> mcts_config.pb.cc
1> mcts_debugger.cc
1> mcts_engine.cc
1> mcts_main.cc
1> mcts_monitor.cc
1> checkpoint_state.pb.cc
1> Generating code...
1> Compiling...
1> checkpoint_utils.cc
1> model_config.pb.cc
1> trt_zero_model.cc
1> zero_model.cc
1>D:\kz\tensorflow\tensorflow/core/platform/default/logging.h(77): warning C4005: 'LOG': macro redefinition
1> D:\kz\glog-0.3.5\glog/logging.h(506): note: see previous definition of 'LOG'
1>D:\kz\tensorflow\tensorflow/core/platform/default/logging.h(86): warning C4005: 'VLOG_IS_ON': macro redefinition
1> D:\kz\glog-0.3.5\glog/vlog_is_on.h(93): note: see previous definition of 'VLOG_IS_ON'
1>D:\kz\tensorflow\tensorflow/core/platform/default/logging.h(91): warning C4005: 'VLOG': macro redefinition
1> D:\kz\glog-0.3.5\glog/logging.h(1094): note: see previous definition of 'VLOG'
1>D:\kz\tensorflow\tensorflow/core/platform/default/logging.h(99): warning C4005: 'CHECK': macro redefinition
1> D:\kz\glog-0.3.5\glog/logging.h(586): note: see previous definition of 'CHECK'
1>D:\kz\tensorflow\tensorflow/core/platform/default/logging.h(246): warning C4005: 'CHECK_OP_LOG': macro redefinition
1> D:\kz\glog-0.3.5\glog/logging.h(749): note: see previous definition of 'CHECK_OP_LOG'
1>D:\kz\tensorflow\tensorflow/core/platform/default/logging.h(248): warning C4005: 'CHECK_OP': macro redefinition
1> D:\kz\glog-0.3.5\glog/logging.h(764): note: see previous definition of 'CHECK_OP'
1>D:\kz\tensorflow\tensorflow/core/platform/default/logging.h(251): warning C4005: 'CHECK_EQ': macro redefinition
1> D:\kz\glog-0.3.5\glog/logging.h(788): note: see previous definition of 'CHECK_EQ'
1>D:\kz\tensorflow\tensorflow/core/platform/default/logging.h(252): warning C4005: 'CHECK_NE': macro redefinition
1> D:\kz\glog-0.3.5\glog/logging.h(789): note: see previous definition of 'CHECK_NE'
1>D:\kz\tensorflow\tensorflow/core/platform/default/logging.h(253): warning C4005: 'CHECK_LE': macro redefinition
1> D:\kz\glog-0.3.5\glog/logging.h(790): note: see previous definition of 'CHECK_LE'
1>D:\kz\tensorflow\tensorflow/core/platform/default/logging.h(254): warning C4005: 'CHECK_LT': macro redefinition
1> D:\kz\glog-0.3.5\glog/logging.h(791): note: see previous definition of 'CHECK_LT'
1>D:\kz\tensorflow\tensorflow/core/platform/default/logging.h(255): warning C4005: 'CHECK_GE': macro redefinition
1> D:\kz\glog-0.3.5\glog/logging.h(792): note: see previous definition of 'CHECK_GE'
1>D:\kz\tensorflow\tensorflow/core/platform/default/logging.h(256): warning C4005: 'CHECK_GT': macro redefinition
1> D:\kz\glog-0.3.5\glog/logging.h(793): note: see previous definition of 'CHECK_GT'
1>D:\kz\tensorflow\tensorflow/core/platform/default/logging.h(259): warning C4005: 'CHECK_NOTNULL': macro redefinition
1> D:\kz\glog-0.3.5\glog/logging.h(799): note: see previous definition of 'CHECK_NOTNULL'
1> Generating code...
1> Creating library D:\PhoenixGo-master\x64\Release\mcts_main.lib and object D:\PhoenixGo-master\x64\Release\mcts_main.exp
1>mcts_debugger.obj : error LNK2001: unresolved external symbol "int fLI::FLAGS_v" (?FLAGS_v@fLI@@3Ha)
1>mcts_engine.obj : error LNK2001: unresolved external symbol "int fLI::FLAGS_v" (?FLAGS_v@fLI@@3Ha)
1>mcts_monitor.obj : error LNK2001: unresolved external symbol "int fLI::FLAGS_v" (?FLAGS_v@fLI@@3Ha)
1>tf_core_lib.lib(numbers.obj) : error LNK2019: unresolved external symbol "public: double __cdecl double_conversion::StringToDoubleConverter::StringToDouble(char const *,int,int *)const " (?StringToDouble@StringToDoubleConverter@double_conversion@@QEBANPEBDHPEAH@Z) referenced in function "bool __cdecl tensorflow::strings::safe_strtod(char const *,double *)" (?safe_strtod@strings@tensorflow@@YA_NPEBDPEAN@Z)
1>tf_core_lib.lib(numbers.obj) : error LNK2019: unresolved external symbol "public: float __cdecl double_conversion::StringToDoubleConverter::StringToFloat(char const *,int,int *)const " (?StringToFloat@StringToDoubleConverter@double_conversion@@QEBAMPEBDHPEAH@Z) referenced in function "unsigned __int64 __cdecl tensorflow::strings::FloatToBuffer(float,char *)" (?FloatToBuffer@strings@tensorflow@@YA_KMPEAD@Z)
1>D:\PhoenixGo-master\x64\Release\mcts_main.exe : fatal error LNK1120: 3 unresolved externals
========== Rebuild All: 0 succeeded, 1 failed, 0 skipped ==========

Firstly, I am confused about how to make the "tensorflow logging / glog redefinition" warnings disappear.
Secondly, after I changed thirdparty.props to adapt to the packages' build/source paths, the LNK2001 errors still exist. I think the reason the errors occur may be a missing library. Can anyone tell me how I can fix this problem?
Thanks~

basic questions about compilation and running

Could I please ask how to compile and run PhoenixGo on a simple laptop? My laptop runs Ubuntu 14.04, the processor is a 4x i7-2640M CPU @ 2.80GHz, and I have no GPU support under Linux.

  1. I managed to compile, but there were so many questions during configuration which I didn't know how to
    answer properly, mainly about TensorFlow. I think I answered all of them with "no", and
    it did compile.

  2. What should my etc/my.config look like? I have no clue what all these options mean.
    At the moment, when I just try

    bazel-bin/mcts/mcts_main --config_path=etc/mcts_dist.conf --gtp --logtostderr --v=1

It will eventually print

... "MCTSEngine: waiting all eval threads init"

and then nothing happens, and I can't get it running under gogui.

Thanks!

CPU version: is thinking more time per move useless?

lz_pg50s.zip
lz_pg.zip
lz_lzp.zip
lz_pgp.zip

all leelaz weights are similar
timeout_ms_per_step: 5000

# Black: Leela Zero
# BlackCommand: D:\tool\valid14\leelaz.exe -g -w D:\tool\valid14\3ef8
# BlackLabel: Leela Zero
# BlackVersion: 0.15
# Date: May 24, 2018 10:25:52 AM CST
# Host: PC
# Komi: 7.5
# Referee: -
# Size: 19
# White: PhoenixGo
# WhiteCommand: D:\tool\PhoenixGo\bin\mcts_main.exe --config_path=D:\tool\PhoenixGo\etc\cpu.conf --gtp --log_dir=D:\tool\PhoenixGo\log --v=1
# WhiteLabel: PhoenixGo
# WhiteVersion: 1.0
# Xml: 0
#
#GAME	RES_B	RES_W	RES_R	ALT	DUP	LEN	TIME_B	TIME_W	CPU_B	CPU_W	ERR	ERR_MSG
0	B+R	B+R	B+R	0	-	121	1685.4	446.9	0	0	0	
1	B+R	B+R	B+R	0	-	185	2261.8	698.5	0	0	0	
2	B+R	B+R	B+R	0	-	109	1474.7	413.7	0	0	0	
3	B+R	B+R	B+R	0	-	117	1541.4	451.9	0	0	0	
4	B+R	B+R	B+R	0	-	153	1984.4	564.6	0	0	0	
5	B+R	B+R	B+R	0	-	129	1824.7	494.9	0	0	0	
6	B+R	B+R	B+R	0	-	67	453	253.7	0	0	0	
7	B+R	B+R	B+R	0	-	49	401.3	176.6	0	0	0	
8	B+R	B+R	B+R	0	6?	65	384.1	243.6	0	0	0	
9	B+R	B+R	B+R	0	-	117	1555.2	456.5	0	0	0	

timeout_ms_per_step: 50000

# Black: Leela Zero
# BlackCommand: D:\tool\valid14\leelaz.exe -g -w D:\tool\valid14\057a
# BlackLabel: Leela Zero
# BlackVersion: 0.15
# Date: May 24, 2018 3:45:34 PM CST
# Host: PC
# Komi: 7.5
# Referee: -
# Size: 19
# White: PhoenixGo
# WhiteCommand: D:\tool\PhoenixGo\bin\mcts_main.exe --config_path=D:\tool\PhoenixGo\etc\cpu.conf --gtp --log_dir=D:\tool\PhoenixGo\log --v=1
# WhiteLabel: PhoenixGo
# WhiteVersion: 1.0
# Xml: 0
#
#GAME	RES_B	RES_W	RES_R	ALT	DUP	LEN	TIME_B	TIME_W	CPU_B	CPU_W	ERR	ERR_MSG
0	B+R	B+R	B+R	0	-	155	1603.5	5021.4	0	0	0	
1	B+R	B+R	B+R	0	-	231	2068.8	7322.8	0	0	0	
2	B+R	B+R	B+R	0	-	265	2081.5	8242.5	0	0	0	
3	B+R	B+R	B+R	0	-	159	1505.2	4789.5	0	0	0	
4	B+R	B+R	B+R	0	-	39	298	1135.5	0	0	0	
5	B+R	B+R	B+R	0	-	199	1758.5	6331.1	0	0	0	
6	B+R	B+R	B+R	0	-	143	1432.3	4607.8	0	0	0	
7	B+R	B+R	B+R	0	4?	41	335.9	1199.2	0	0	0	
8	B+R	B+R	B+R	0	-	137	1263.1	4403.8	0	0	0	
9	B+R	B+R	B+R	0	-	123	1118	3860.7	0	0	0	

but using https://github.com/yenw/LeelaZero_PhoenixGo/:

# Black: Leela Zero
# BlackCommand: D:\tool\valid14\leelaz.exe -g -w D:\tool\valid14\9e88
# BlackLabel: Leela Zero:0.15
# BlackVersion: 0.15
# Date: May 25, 2018 9:47:18 AM CST
# Host: PC
# Komi: 7.5
# Referee: -
# Size: 19
# White: Leela Zero
# WhiteCommand: D:\tool\lzp\lzgp.exe -g -w D:\tool\lzp\PhoenixGo_v1.txt
# WhiteLabel: Leela Zero:0.14
# WhiteVersion: 0.14
# Xml: 0
#
#GAME	RES_B	RES_W	RES_R	ALT	DUP	LEN	TIME_B	TIME_W	CPU_B	CPU_W	ERR	ERR_MSG
0	W+R	W+R	W+R	0	-	158	1864.8	1747.1	0	0	0	
1	B+R	B+R	B+R	0	-	131	1565	1487.7	0	0	0	
2	B+R	B+R	B+R	0	-	179	1934.8	1966.1	0	0	0	
3	B+R	B+R	B+R	0	-	113	1187.8	1101.8	0	0	0	
4	B+R	B+R	B+R	0	-	91	1111.6	1054.2	0	0	0	
5	B+R	B+R	B+R	0	-	93	1233.9	1031.6	0	0	0	
6	B+R	B+R	B+R	0	5?	93	995.8	1000.6	0	0	0	

PhoenixGo GPU version:

# Black: Leela Zero
# BlackCommand: D:\tool\valid14\leelaz.exe -g -w D:\tool\valid14\3ef8
# BlackLabel: Leela Zero
# BlackVersion: 0.15
# Date: May 29, 2018 8:54:39 AM CST
# Host: PC
# Komi: 7.5
# Referee: -
# Size: 19
# White: PhoenixGo
# WhiteCommand: D:\tool\pgp\bin\mcts_main.exe --config_path=D:\tool\pgp\etc\1gpu_notensorrt.conf --gtp --log_dir=D:\tool\pgpo\log --v=1
# WhiteLabel: PhoenixGo
# WhiteVersion: 1.0
# Xml: 0
#
#GAME	RES_B	RES_W	RES_R	ALT	DUP	LEN	TIME_B	TIME_W	CPU_B	CPU_W	ERR	ERR_MSG
0	B+R	B+R	B+R	0	-	87	911.6	280.3	0	0	0	
1	B+R	B+R	B+R	0	0?	89	891.3	287.8	0	0	0	
2	B+R	B+R	B+R	0	0	87	924.1	279.8	0	0	0	
3	B+R	B+R	B+R	0	-	83	844.4	266.8	0	0	0	
4	B+R	B+R	B+R	0	-	193	2383.6	632.7	0	0	0	

Add time control info

By default, the GTP command time_settings is listed but not enabled.
If you want to use it, you will need to add

time_control {
    enable: 1
}

into the configure file.
I think you @wodesuck should add this info into readme : )

PS: timeout_ms_per_step looks like the upper limit of byo-yomi time per move; any time setting larger than this will be clamped to this value.
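
For reference, the standard GTP syntax is time_settings <main_time> <byo_yomi_time> <byo_yomi_stones> (times in seconds), so 10 minutes of main time plus 30 seconds per move could be sent as:

time_settings 600 30 1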

I cannot see anything if I run start.bat of the Windows version

but I can see the following messages if I run the commands listed in that file:
D:\tool\PhoenixGo>set config=etc\mcts_cpu.conf

D:\tool\PhoenixGo>bin\mcts_main.exe --config_path=%config% --gtp --log_dir=log --v=1
2018-05-23 16:10:09.272340: I model\zero_model.cc:72] Read checkpoint state succ
2018-05-23 16:10:09.306540: I model\zero_model.cc:80] Read meta graph succ
2018-05-23 16:10:09.322140: I E:\Tensorflow\PhoenixGo\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:140] Your
CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
2018-05-23 16:10:09.402140: I model\zero_model.cc:100] Create session succ
2018-05-23 16:10:09.491140: I model\zero_model.cc:107] Create graph succ
2018-05-23 16:10:09.998540: I model\zero_model.cc:119] Load checkpoint succ

The engine plan file is not compatible with this version of GIE, please rebuild.

After configuring TensorRT, I got PhoenixGo to run on my Ubuntu 18.04 without TensorRT enabled (using mcts_1gpu_notensorrt.conf), but I couldn't get it to run with TensorRT enabled (using mcts_1gpu.conf). I always got the message "The engine plan file is not compatible with this version of GIE, please rebuild.".
My software configuration is: Ubuntu 18.04, CUDA 9.0, cuDNN 7.0.5, Bazel 0.11, TensorRT 3.04, default Python 2.7, with GCC-6 selected as the compiler. One thing that confused me is that I first installed TensorRT 4.0.1 and got the message saying "The engine plan file is not compatible with this version of GIE". Then I tried hard to remove TensorRT 4.0.1 and reinstalled TensorRT 3.04 with the dpkg command. I had learned that TensorRT 3.04 can only be installed successfully from the tar file, not the deb file. So, could you give me any suggestions on how to get TensorRT working? Many thanks!

java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty

bazel build //mcts:mcts_main
DEBUG: /home/ken/.cache/bazel/_bazel_ken/2b61279c4add7e259b51cb073b48b292/external/bazel_tools/tools/build_defs/repo/http.bzl:63:5: patch file //third_party/tensorflow:tensorflow.patch, path /mnt/ken-volume/Downloads/PhoenixGo/third_party/tensorflow/tensorflow.patch
ERROR: error loading package '': Encountered error while reading extension file 'tensorflow/workspace.bzl': no such package '@org_tensorflow//tensorflow': java.io.IOException: Error downloading [https://github.com/tensorflow/tensorflow/archive/v1.7.0.tar.gz] to /home/ken/.cache/bazel/_bazel_ken/2b61279c4add7e259b51cb073b48b292/external/org_tensorflow/v1.7.0.tar.gz: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
ERROR: error loading package '': Encountered error while reading extension file 'tensorflow/workspace.bzl': no such package '@org_tensorflow//tensorflow': java.io.IOException: Error downloading [https://github.com/tensorflow/tensorflow/archive/v1.7.0.tar.gz] to /home/ken/.cache/bazel/_bazel_ken/2b61279c4add7e259b51cb073b48b292/external/org_tensorflow/v1.7.0.tar.gz: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
INFO: Elapsed time: 16.168s
FAILED: Build did NOT complete successfully (0 packages loaded)

The era of strong AI Go bot proliferation is here

One benefit of pushing forward open bots that are superhuman is it may force the hand of others as well, now that Fb and Tencent have open sourced their bots that are both much stronger than LZ, maybe Deepmind will come back for thirds and get a fourth place prize in openness to cheapshot score one last PR hooray for Google by open sourcing their AGZ weights to spit in fb's face and to show whos the alpha by mastering the Chinese competition once again.. open competition is good for go. This is truly the end of a human era.


Possible hidden meaning?

image

and a day later:

https://www.theguardian.com/technology/2016/jan/28/go-playing-facebook-spoil-googles-ai-deepmind

I don't think "the" development is what people think it is referring to.

https://www.reddit.com/r/cbaduk/comments/81ri8b/so_many_strong_networksais_on_cgos/dv4pzo5/

I kinda predicted this a while ago actually:

image


So....

Win GPU v1 Start doesn't work

After installing the Visual C++ Redistributable for Visual Studio 2015 (the file name is vc_redist.x64.exe) as suggested in the PDF guide, the same XXX.dll pop-up appears. How can I solve this? Thanks.
P.S. The CPU version works perfectly.

Cannot build with bazel 0.13

ERROR: .cache/bazel/bazel/f0defbf128b2eb9b315d11754028a380/external/jpeg/BUILD:126:12: Illegal ambiguous match on configurable attribute "deps" in @jpeg//:jpeg:
@jpeg//:k8
@jpeg//:armeabi-v7a
Multiple matches are not allowed unless one is unambiguously more specialized.
ERROR: Analysis of target '//mcts:mcts_main' failed; build aborted:

.cache/bazel/bazel/f0defbf128b2eb9b315d11754028a380/external/jpeg/BUILD:126:12: Illegal ambiguous match on configurable attribute "deps" in @jpeg//:jpeg:
@jpeg//:k8
@jpeg//:armeabi-v7a
Multiple matches are not allowed unless one is unambiguously more specialized.
INFO: Elapsed time: 78.411s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (61 packages loaded)

On Ubuntu 16.04
Linux version 4.4.0-92-generic (buildd@lcy01-17) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.4) )

Workaround on issues with sabaki

Sabaki is a very good-looking UI. However, I do encounter some issues with this AppImage app.

  1. Current working dir
    I assume the working dir for Sabaki is not the same as PhoenixGo's, which causes errors such as the config not being found, plus another file (not passed through as an argument) not being found. My solution is to use a script that wraps the binary.
#!/bin/bash
cd "$(dirname "$0")"
pwd
bazel-bin/mcts/mcts_main --config_path=etc/mcts_1gpu_notensorrt.conf --gtp --logtostderr --v=1
  2. Sabaki hangs without debug output when the option 'start after attach' is chosen.
    Turn off that option and use 'Engines - Generate Move' to start a move for black.

OK. Now PhoenixGo works for me and I'll try it with online Go games.

Is there any way to load sgf?

OS: Ubuntu 16.04

I have compiled an executable and run it successfully.
But... is there any way to load an sgf?
I entered "list_commands" in GTP mode, but there's no "loadsgf".

version
list_commands
quit
clear_board
boardsize
komi
time_settings
time_left
place_free_handicap
set_free_handicap
play
genmove
final_score
get_debug_info
get_last_move_debug_info

In Readme.md "Command Line Options",
there's an "--init_moves" option,
but I don't know how to use it.
I have tried "--init_moves (;SZ[19];B[pd];W[dp];B[qp];W[dd];B[oq])" and "--init_moves xxx.sgf",
but neither works correctly.

Sabaki setup problem on Ubuntu 16.04

Thank you to the PhoenixGo team for their hard work.

I want to use Sabaki under Ubuntu to test games.
The path is: /home/wsc/PhoenixGo/bazel-bin/mcts/mcts_main
The arguments are: --config_path=/home/wsc/PhoenixGo/etc/mcts_1gpu_notensorrt.conf --gtp --logtostderr --v=1
However, this error message appears: E0519 12:43:54.538918 29392 checkpoint_utils.cc:33] Error reading "ckpt/checkpoint": No such file or directory [2]

Running /home/wsc/PhoenixGo/bazel-bin/mcts/mcts_main --config_path=/home/wsc/PhoenixGo/etc/mcts_1gpu_notensorrt.conf --gtp --logtostderr --v=1 directly from the PhoenixGo directory works fine, but running the program from outside the PhoenixGo directory reports the error above. Why?

Is there a configuration parameter for the trained-network file path?

LeelaZero + PhoenixGo's weights

A seemingly very effective approach to handicap games

See leela-zero/leela-zero#1599 (comment). The latest news is that it won 5- and 6-stone handicap games against a Tygem 8d (弈城8段) (latest two games in https://www.gokgs.com/gameArchives.jsp?user=baymax). The idea is just to use the color plane inputs for komi information as in leela-zero/leela-zero@next...alreadydone:patch-16, and to dynamically adjust komi during the game to keep the winrate within a certain range (5-12% in my implementation, to make the play aggressive).
Would you try to implement it and test the effect? As far as I know PhoenixGo isn't playing handicap games on Fox, but Fine Art is, and it's reported that they are bugged by the low initial winrate problem, so please spread the word when appropriate. PhoenixGo itself is also one of the strongest bots and is definitely in a position to test this. According to kblomdahl/dream-go#25 (comment), my approach should work provided the network isn't too overfitted.

question about the win64 gpu v1

I copied the DLLs missing from the exe, from zip files downloaded from the NVIDIA site, into the bin directory:

2017/09/02  21:46        75,222,016 cusolver64_90.dll
2017/12/20  08:00        54,028,288 cublas64_90.dll
2018/05/05  11:17       331,322,880 cudnn64_7.dll
2017/09/02  21:46       131,197,952 cufft64_90.dll
2017/09/02  21:46        48,057,344 curand64_90.dll

Then I ran the following command. It shows some warnings; is that OK?

D:\tool\pgp>bin\mcts_main.exe --config_path=etc\mcts_1gpu_notensorrt.conf --gtp --log_dir=log --v=1
2018-05-25 16:01:12.800699: I model\zero_model.cc:72] Read checkpoint state succ
2018-05-25 16:01:12.827699: I model\zero_model.cc:80] Read meta graph succ
2018-05-25 16:01:12.828699: I E:\Tensorflow\PhoenixGo\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:140] Your
 CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
2018-05-25 16:01:13.441699: I E:\Tensorflow\PhoenixGo\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1344]
Found device 0 with properties:
name: GeForce GT 730 major: 3 minor: 5 memoryClockRate(GHz): 0.9015
pciBusID: 0000:01:00.0
totalMemory: 2.00GiB freeMemory: 405.59MiB
2018-05-25 16:01:13.450699: I E:\Tensorflow\PhoenixGo\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1423]
Adding visible gpu devices: 0
2018-05-25 16:01:20.394899: I E:\Tensorflow\PhoenixGo\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:911] D
evice interconnect StreamExecutor with strength 1 edge matrix:
2018-05-25 16:01:20.398899: I E:\Tensorflow\PhoenixGo\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:917]
    0
2018-05-25 16:01:20.414499: I E:\Tensorflow\PhoenixGo\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:930] 0
:   N
2018-05-25 16:01:20.432099: I E:\Tensorflow\PhoenixGo\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1041]
Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1024 MB memory) -> physical GPU (device: 0,
 name: GeForce GT 730, pci bus id: 0000:01:00.0, compute capability: 3.5)
2018-05-25 16:01:20.474299: I model\zero_model.cc:100] Create session succ
2018-05-25 16:01:20.519299: I model\zero_model.cc:107] Create graph succ
2018-05-25 16:01:21.845699: I model\zero_model.cc:119] Load checkpoint succ
play b c3
1th move(b): cc, winrate=-nan(ind)%, N=0, Q=-nan(ind), p=0.010924, v=-nan(ind), cost 11482.799805ms, sims=64, height=4,
avg_height=2.935897, global_step=639200
=

genmove w
2th move(w): dp, winrate=57.060974%, N=26, Q=0.141220, p=0.177330, v=0.130249, cost 22661.199219ms, sims=144, height=6,
avg_height=3.583851, global_step=639200
= D16

quit
=

Strength comparison to LeelaZero or Facebook ELF

Did you run any tests comparing strength against previously published Go programs?
If so, could you publish your results too? Thanks.

BTW: is this published network the same as the one used on the World AI Go Tournament 2018?
