
allie's Introduction

FAQ

What is Allie?

Allie is a chess engine heavily inspired by the seminal AlphaZero paper and the Lc0 project.

How is she related to Leela?

Like Leela, Allie is based on the same concepts and algorithms that were introduced by DeepMind in the AlphaZero paper(s), but her code is original and contains an alternative implementation of those ideas. You can think of Allie as a young cousin of Leela that utilizes the same networks produced by the Lc0 project.

Ok, so details. How is she different?

Well, I was inspired during the original CCC to see if you could pair traditional Minimax/AlphaBeta search with an NN. This is still her main purpose and the focus going forward. However, the initial versions used a pure MCTS algorithm similar to Lc0 and AlphaZero. The current versions of Allie use a modified hybrid search combining Minimax and Monte Carlo.
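For background on the Monte Carlo half of such a search: AlphaZero-style engines select which node to expand with the PUCT rule, scoring each child by its mean value Q plus an exploration bonus driven by the network's policy prior. A minimal sketch of that selection rule (the function name, data layout, and constant here are illustrative, not Allie's actual code):

```python
import math

def puct_select(children, c_puct=1.5):
    """Pick the child maximizing Q + U (the AlphaZero PUCT rule).

    children: dicts with "p" (policy prior), "n" (visit count), "w" (total value).
    c_puct: exploration constant; 1.5 is an illustrative value.
    """
    sqrt_parent = math.sqrt(sum(ch["n"] for ch in children) + 1)

    def score(ch):
        q = ch["w"] / ch["n"] if ch["n"] else 0.0           # mean value so far
        u = c_puct * ch["p"] * sqrt_parent / (1 + ch["n"])  # exploration bonus
        return q + u

    return max(children, key=score)
```

High-prior, lightly visited moves get a large bonus, so the network's policy steers the search until enough visits accumulate to trust Q; a Minimax hybrid can then change how values are backed up through the tree.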

Here is a non-exhaustive list of differences:

  • UCI protocol code
  • Input/Output
  • Time management
  • Board representation
  • Move generation
  • Zobrist keys
  • Hash implementation
  • The threading model
  • Search algorithm
  • Tree structure
  • The multi-gpu scaling code
  • Fpu-reduction
  • Mate distance eval
  • Testing framework and tests
  • Debugging code

What bits are used from the Lc0 project?

Here is what Allie uses from the Lc0 codebase:

  • Protocol buffers for an NN weights file
  • Code for discovering/loading the NN weights file
  • Backend code for GPU to get evaluations given an NN weights file

All right, brass tacks: how strong is she?

She is among the strongest chess engines in the world as of November 2020.

Why did you develop her rather than just help out Leela?

A couple of reasons. First, my original inspiration was to see if I could implement an alternative search using Minimax/AlphaBeta rather than MCTS. Second, I wanted to teach myself the AlphaZero concepts and algorithms, and this was the best way to do it. Allie now uses a hybrid Minimax/Monte Carlo search.

Also, I am contributing back some patches to Leela where appropriate.

Ok, so she uses Lc0's networks. Why don't you make your own?

Two reasons. First, I couldn't hope to compete with the machine learning and AI experts who are contributing to the Lc0 project. Second, it would take a lot of compute power to train a new network and I just don't have that.

Supporting further development of Allie

I've set up a Patreon page here: https://www.patreon.com/gonzochess75 and would greatly appreciate any support from the community; it will be used to test Allie on more multi-GPU systems.

Anything else?

Yes, I'd like to wholeheartedly thank the developers of the Lc0 project, who make Allie possible. Only by standing on their shoulders could Allie exist. I'd also like to thank DeepMind for their work on AlphaZero, which started the whole NN-for-chess era. Finally, I'd like to thank Andrew Grant, whose Ethereal engine was a big inspiration for writing Allie, and CCC and TCEC for including the engine in their tournaments.

Quick start guide (Linux)

Prerequisites

You need:

  • A working build environment: compiler (e.g., GCC or Clang), binutils, and the rest of the usual stuff.
  • qmake (part of Qt SDK)
  • CUDA and cuDNN. If you can build and run Lc0 with the cuDNN backend, you're probably good.

Building

The basic steps:

  • qmake --- Generates the makefiles. Usually only needed to run once.
  • make -j --- Build Allie.

If everything went well, you now have bin/allie.

If the build system cannot locate CUDA, or you want to use a specific CUDA version, you can set the NVCC, CUDA_INC_DIR, and CUDA_LIB_DIR variables. For instance:

make -j CUDA_INC_DIR=/opt/cuda-10.1.168/include CUDA_LIB_DIR=/opt/cuda-10.1.168/lib64 NVCC=/opt/cuda-10.1.168/bin/nvcc

Similarly, you can specify the C++ compiler:

make -j CXX=clang++-7

To clean up all the build temporaries:

make clean

There's also make distclean to clean up everything, including the targets and the generated makefiles. You need to re-run qmake after this command. If in doubt whether changing compiler/CUDA/whatever is in effect, use make distclean.

Running Allie

  • Put the network weights file in the same directory as the Allie binary. A symlink (ln -s) can be used.
  • Launch allie. If everything went well, you'll see "allie" in stylized ASCII art along with the version information.
  • You should now be ready to use Allie with your favorite UCI-compatible chess GUI.
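Concretely, a launch might look like this (the paths and the weights filename below are hypothetical; use your own install location and whichever Lc0 network you downloaded):

```shell
cd ~/allie/bin                              # wherever bin/allie was built
ln -s ~/nets/weights_run2_744204.pb.gz .    # put the weights next to the binary
./allie                                     # ASCII-art banner and version should appear
```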

allie's People

Contributors

adtreat, coolchess123, jjoshua2, manyoso, skiminki


allie's Issues

Proposal to add Transposition Difference Value:

EDIT: I've revised this pseudo-code to make slightly more sense. I realize the actual code will be somewhat different (and maybe fewer lines), but the principles should be the same.

I made two functions allowing 50-move-rule transpositions with the NNCache:

function makeTD(ply_played)  # ply_played = board.plyPlayed(original_position)
    cutoff = 20            # tunable; also a somewhat arbitrary number
    distance_factor = 8    # tunable; linearly increases the TD value
    ply_until_100 = ply_played - 100
    if ply_until_100 > cutoff:  # prevents the log argument from being 0 or lower
        return abs(int(distance_factor * log(ply_until_100 - cutoff)))  # this does not have to be a log function, but it probably works better that way
    return 0  # no transposition allowance at or below the cutoff
# Two numbers in this function are tunable. Notably, odd outputs should be rounded to even numbers, because transpositions do not happen with a difference of 1 in ply, as it is then the other side's turn.
# Another option is to divide 100 by 2 beforehand and count moves instead of ply.

# NNCache search code
# invoked when comparing boards/FENs to see whether a new position was already searched
# first compare boards, then compare 50-MR counters using TD
function comparePositions_withTD(original_position, new_position)
    if board.position(new_position) == board.position(original_position):  # compare old and new positions without the 50MR counter
        TD = makeTD(board.plyPlayed(original_position))
        OP = board.plyPlayed(original_position)  # original position's ply count
        NP = board.plyPlayed(new_position)       # new position's ply count
        if OP + TD > NP and NP > OP - TD:  # compare 50MR counters within bounds of TD
            return True   # the search code treats the old position as if it were the new position
        else:
            return False  # the search code treats the positions as distinct
    else:
        continue  # move on to the next cached position
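For concreteness, here is a runnable Python version of the makeTD function above, using the proposal's constants; the even-rounding step follows the note above, and the rounding direction (up to the next even number) is my assumption:

```python
from math import log

def make_td(ply_played, cutoff=20, distance_factor=8):
    """Transposition Difference value, per the proposal above (constants tunable)."""
    ply_until_100 = ply_played - 100
    if ply_until_100 > cutoff:  # keeps the log argument positive
        td = abs(int(distance_factor * log(ply_until_100 - cutoff)))
        return td + (td % 2)    # round odd outputs up to even (assumed direction)
    return 0  # no transposition allowance at or below the cutoff

# For example, at 130 ply the allowed 50MR difference works out to 18 ply (9 moves).
```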

I only expect TD to fail in certain conditions, which I believe the distance of 10 moves from the 50MR, i.e. (x-10) with x>10, accounts for. If functional errors are accounted for and TD still fails, it is because:
- X is small
- Somehow, repetitions are encouraged, and the search expands unnecessarily on multiple fronts
- Descending evaluations due to approaching the 50MR are delayed and hampered by a single visit
- Evaluations don't descend enough and serve as an ineffective or proactive draw-avoidance mechanism
- It is practically useless (which I do not expect at all if implemented correctly)
- It needs to be paired with a policy boosting/subtracting mechanism that increases/decreases the priority of searching transpositions in NNCache (maybe causing a small but noticeable slowdown?)
- It needs to be paired with an additional visit beyond the transposition (for worst-case 50MR faults)
- It conflicts with the moves-left head when that head's prediction approaches the 50MR only because the transposition is closer to the 50MR, therefore necessitating that transpositions only be allowed at equal or lesser distance to the root
- Extremely high-depth searches requiring precise and stable evaluations may spike and increase the expected time needed for moves during games
- Multivariable tunes could be required, possibly including the function I've defined above

As for training:
- Training could not like it (for various unknown outer-space reasons)
- Positive feedback loops in training, due to high Q insensitivity, wildly differing evaluations between places in the 50MR, and preference for positions with/without transpositions, could cause overfitting of some kind
- Everything needs retuning? (Heavy rocks to be lifted?)

A possible remedy for the encouragement of high-depth repetitions (which would unnecessarily expand the search tree along already explored paths) would be to reconsider implementing 2-fold draw scoring, but perhaps only for already searched positions.

Download link for Allie 0.5 chess engine

I saw you released version 0.5 of the Allie chess engine, but I don't see a download link for the executable .exe file, only for the source code.
Congratulations to the Allie chess engine on the TCEC S16 final.
Thanks a lot.

OpenCL

Can you compile it with OpenCL support in the next release?

Any way to compile on a new OS (Fedora 32 or Ubuntu 20.04)?

I have been trying to compile the binary on both updated OSes. As CUDA only supports gcc < 8, there is no way to compile this directly. I now have both gcc 10 and gcc 8.2 installed on my system. I compiled the Leela binary with the flag -Dnvcc_ccbin=gcc82, but I have not been able to find a flag to make Allie use gcc 8 instead of gcc 10.

I have tried export and passing CXX, CC, and GCC to the make command, but no luck.

Any ideas?

Thanks in advance

Preparing for Neural Net changes

Should Leelenstein or Lc0 adopt double-playout measures from KataGo, we should prepare for such a change by allowing exceptions in Allie for different net architectures. (Is that how it works? Could you refer me to some lines of code on that end?)

Double-playout measures essentially inform the first player during training that it is stronger, giving it double the playouts, while the other player is informed that it is weaker, with half the playouts. This is a form of contempt used by KataGo that supposedly makes it significantly stronger at longer time controls.

Allie 0.2 running on Windows 7 64-bit and CPU Intel Core i 3570 Quad instead of GPU?

Hello manyoso,

I obtained the following result during installation of the NVIDIA CUDA 10.1 Toolkit (https://developer.nvidia.com/cuda-downloads?target_os=Windows&target_arch=x86_64): all components installed (GPU Library Advisor, CURAND Development/Runtime, CUDART Runtime, NPP Development/Runtime, CUPTI, CUSOLVER Development/Runtime, CUFFT Development/Runtime, NVRTC Development/Runtime, CUBLAS Development, Demo Suite, NVGRAPH Development/Runtime, NVCC, Visual Profiler, Disassembler, CUDA Profiler Tools, Fortran Examples, NVML Development, Nsight Systems/Compute, Samples, CUSPARSE/CUBLAS Runtime, PhysX System Software, CUDA Documentation, Sanitizer API, MEMCHECK, Occupancy Calculator), but the Graphics Driver was NOT INSTALLED because my adapter (Intel Standard VGA 1280x1025) "could not find compatible hardware".

I added cudart64_100.dll and cudnn64_7.dll, but nothing happens. What can be done in such a case? Or is Allie designed only for GPUs, with no CPU support? cublas64_100.dll is also required to run the executable file.

Regards,

Allie Playing Chess Variants, Musketeer Chess

Hi
Nice to see inspiring and open source work.

Can you please consider a fork of your engine that plays chess variants?

I'm especially interested in variants I created (and naturally other popular ones). Mine is Musketeer Chess. It is discussed in many forums, and there is also Musketeer Stockfish, a fork of Stockfish.

Please find details here:

https://github.com/ianfab/Musketeer-Stockfish

http://talkchess.com/forum3/viewtopic.php?f=7&t=72572

https://github.com/fsmosca/musketeer-chess

www.musketeerchess.net

I'd like to create a chess server playing chess and chess variants. The majority of the servers that exist now are based on Stockfish or Lc0. It's certainly much more interesting to have servers with other engines. Please mail me at musketeerchess (a) gmail .com to discuss this and make a good development plan.

Best regards
Zied

TB path length

Beyond a certain path length, tablebases do not work with Allie.

Blas support in Allie

It would be a good idea to implement DNNL and OpenBLAS support in Allie for those users who do not have a GPU.

NNUE Networks for Allie 0.7

Hello Mr.,
This is not an issue.
Please tell me what kind of NNUE networks Allie 0.7 uses for long time controls and for short time controls.
Thanks.
