suragnair/alpha-zero-general

A clean implementation based on AlphaZero for any game in any framework + tutorial + Othello/Gobang/TicTacToe/Connect4 and more

Languages: Jupyter Notebook, Python, Shell
Topics: reinforcement-learning, deep-learning, neural-network, tensorflow, keras, pytorch, mcts, othello, gomoku, monte-carlo-tree-search, gobang, alphago, alphago-zero, alpha-zero, alphazero, self-play

Stars and forks stats for the suragnair/alpha-zero-general repository: as of 29 April 2024, it has 3,402 stars and 951 forks.

Alpha Zero General (any game, any framework!) A simplified, highly flexible, commented and (hopefully) easy to understand implementation of self-play reinforcement learning, based on the AlphaGo Zero paper (Silver et al.). It is designed to be easy to adapt for any two-player turn-based adversarial game and any deep learning framework of your choice. A sample implementation has been provided for the game of Othello in PyTorch and Keras. An accompanying tutorial can be found here. We also have...
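The core idea is that adapting a new game only requires implementing a small game interface, while the MCTS and self-play training code stay unchanged. Below is a minimal sketch of what such an interface might look like, using single-pile Nim as a toy game and a uniform-random policy standing in for the neural-network-guided MCTS. The method names follow the convention of the repository's Game.py, but the toy game and the exact signatures here are illustrative assumptions; check the repository before adapting a real game.

```python
"""Minimal sketch of a game interface in the style of alpha-zero-general.

Assumptions: single-pile Nim is a toy example, not part of the repository,
and the random self-play loop replaces the real MCTS-guided policy.
"""
import random

import numpy as np


class NimGame:
    """Single-pile Nim: players alternately remove 1-3 stones; whoever
    takes the last stone wins. The board is the number of stones left."""

    def __init__(self, stones=10):
        self.stones = stones

    def getInitBoard(self):
        return self.stones

    def getActionSize(self):
        return 3  # actions 0, 1, 2 -> remove 1, 2, 3 stones

    def getNextState(self, board, player, action):
        return board - (action + 1), -player

    def getValidMoves(self, board, player):
        return np.array([1 if (a + 1) <= board else 0 for a in range(3)])

    def getGameEnded(self, board, player):
        # An empty pile means the previous player took the last stone,
        # so the player to move has lost.
        return -1 if board == 0 else 0

    def getCanonicalForm(self, board, player):
        return board  # Nim looks the same from both players' perspectives


def self_play_episode(game):
    """One self-play episode with a uniform-random policy in place of the
    network-guided MCTS used in the real training loop."""
    board, player = game.getInitBoard(), 1
    while game.getGameEnded(board, player) == 0:
        valid = game.getValidMoves(board, player)
        action = random.choice([a for a, ok in enumerate(valid) if ok])
        board, player = game.getNextState(board, player, action)
    return -player  # the player who just moved took the last stone


if __name__ == "__main__":
    winners = [self_play_episode(NimGame(10)) for _ in range(5)]
    print("winners of 5 random self-play games:", winners)
```

In the actual training pipeline, the random move choice above would be replaced by action probabilities produced by MCTS rollouts guided by the neural network, and the resulting (state, policy, outcome) triples would be used as training examples.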