https://chessforeva.gitlab.io

Chess software for fun

Thursday, November 7, 2024

Chelpy python chess library

A C-compiled extension for Python on Google Colab. Notebook.




For when chess move-generation performance matters: C and magic bitboards.
It is not based on GPU CUDA tensors or TPUs; whether that matters depends on the usage and the task. Avoid pure-Python position processing in hot loops either way. This is simply a way to write that part in C.
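To show the bitboard idea the library builds on, here is a pure-Python sketch (an illustration, not chelpy's actual code): the board is a 64-bit integer with one bit per square, and knight attacks become a handful of shifts and masks instead of loops over squares.

```python
# Board as a 64-bit integer, one bit per square (a1 = bit 0, h8 = bit 63).
FILE_A = 0x0101010101010101          # mask of the a-file
FILE_H = FILE_A << 7                 # mask of the h-file
FILE_AB = FILE_A | (FILE_A << 1)     # a- and b-files
FILE_GH = FILE_H | (FILE_H >> 1)     # g- and h-files
MASK64 = (1 << 64) - 1

def knight_attacks(bb: int) -> int:
    """Attack bitboard for every knight on `bb`; file masks stop wrap-around."""
    return MASK64 & (
        ((bb << 17) & ~FILE_A)  | ((bb << 15) & ~FILE_H) |
        ((bb << 10) & ~FILE_AB) | ((bb << 6)  & ~FILE_GH) |
        ((bb >> 6)  & ~FILE_AB) | ((bb >> 10) & ~FILE_GH) |
        ((bb >> 15) & ~FILE_A)  | ((bb >> 17) & ~FILE_H)
    )

def squares(bb: int):
    """Yield the square indices set in a bitboard."""
    while bb:
        lsb = bb & -bb               # isolate lowest set bit
        yield lsb.bit_length() - 1
        bb ^= lsb
```

For example, a knight on g1 (square 6) attacks e2, f3 and h3 (squares 12, 21, 23); real magic-bitboard generators do the same trick for sliding pieces via precomputed lookup tables.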

In Python:
 import chelpy
 chelpy.ucimove("e2e4")
 chelpy.movegen()
 ...

Sources:

For most common cases, please use the python-chess library from PyPI instead; it is much more developed and supported. It also includes functionality for endgame tablebase lookups (see endgame tablebases on the wiki; they come in various kinds), has a community, and is well documented. The sources are pure Python scripts: reasonably fast and compatible with Windows and macOS.

 import chess
 board = chess.Board()
 board.push_uci("e2e4")       # make a move
 print(board.legal_moves)     # generate legal moves


A few notes on the AI side of things. Everybody knows it.

1. Neural-network chess AI is still, at heart, a MiniMax-style (wiki) search: an evaluation function iterated in depth. With backpropagation (wiki), the network corrects itself depending on training results. The task is to train a model (a .pth file in PyTorch, or a .keras file in TensorFlow) that "predicts" the best chess move for a given position as input. Something like opening databases combined with statistics: "likewise playing", in gigabytes. Yes, yes, it can be strong. Lc0 is strong once trained, with updated weights already downloaded, on appropriate hardware. ChatGPT says Lc0 needs about half a GB; interestingly, many times less than a Tetris being developed today.
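The "evaluation function iterated in depth" part can be made concrete with a minimal negamax sketch. The interface here is a hypothetical abstraction: in a real engine `children` would be the legal-move generator and `evaluate` the NN or a hand-written evaluation.

```python
def negamax(node, depth, children, evaluate):
    """Best score for the side to move, searching `depth` plies deep.

    `evaluate(node)` must score a node from the point of view of the
    side to move at that node; the sign flip does the rest.
    """
    moves = children(node)
    if depth == 0 or not moves:
        return evaluate(node)
    return max(-negamax(child, depth - 1, children, evaluate)
               for child in moves)

# Toy game tree instead of a chess position:
tree = {"root": ["a", "b"], "a": [], "b": []}
score = {"a": -3, "b": 5}   # leaf values, from the opponent's point of view
best = negamax("root", 1,
               lambda n: tree.get(n, []),
               lambda n: score.get(n, 0))
```

Here `best` is 3: move "a" looks bad for the opponent (-3), so it is the best choice for the root player. Swapping the toy `evaluate` for a trained network is exactly the NN-engine idea.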

2. Another approach, speaking very theoretically: with unlimited resources we could build a simple NN that evaluates checkmate only, then perform mega-training. The result would be the right chess engine, one that guides us through the game all the way to checkmate. Unfortunately, chess is not a nine-square TicTacToe problem, well described with samples on the internet. Some evaluation logic has to be built in to select what to compute and what to ignore; we can't just generate gazillions of dumb chess games and simply count the wins.

3. Nobody wants to provide free computational resources to calculate evaluations for all possible chess positions. There has to be a selection anyway. And what do you play in the endgame?! Stable diffusion? Hehe.
There is the Fishnet project for distributed computing on Lichess; evidently people have a lot to give to a chess-analysis project, including expensive hardware. Data is king there.

4. Nalimov endgame tablebases have already been calculated. A lookup is all that is needed when almost nothing is left on the chess board. In fact, simple endgame theory has existed for centuries.
For chess openings, polyglot .bin files mapping a position key to evaluated moves are available on the web. Opening explorers on websites such as the old-school chessgames.com are fine too; I don't think they are wrong. In any case, large collections of chess games are available at https://database.lichess.org/
Syzygy and Gaviota tablebases allow looking up any position with up to 7 pieces, in files terabytes large.
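The polyglot .bin format mentioned above is simple enough to parse by hand: each entry is 16 big-endian bytes, an 8-byte Zobrist key, a 2-byte move, a 2-byte weight, and a 4-byte learn field. A minimal decoding sketch:

```python
import struct

# One polyglot book entry: >Q (key), H (move), H (weight), I (learn).
ENTRY = struct.Struct(">QHHI")

def read_entries(data: bytes):
    """Decode raw .bin bytes into (key, move, weight, learn) tuples."""
    return [ENTRY.unpack_from(data, off)
            for off in range(0, len(data) - ENTRY.size + 1, ENTRY.size)]

def decode_move(mv: int):
    """Unpack the polyglot move field into 0-7 file/rank coordinates."""
    to_file, to_rank = mv & 7, (mv >> 3) & 7
    from_file, from_rank = (mv >> 6) & 7, (mv >> 9) & 7
    promotion = (mv >> 12) & 7   # 0 = none, 1..4 = knight, bishop, rook, queen
    return (from_file, from_rank), (to_file, to_rank), promotion
```

For instance, the move field 796 decodes to e2 (file 4, rank 1) to e4 (file 4, rank 3) with no promotion. Real books are sorted by key, so a binary search over the file finds a position quickly.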

5. There is a stockfish package for Python, a ready-to-use tool when you need a strong evaluation function, for example: https://pypi.org/project/stockfish/

6. The only chess computation that splits easily across parallel GPUs is a self-written evaluation function of a given chess position, obtained somewhere beforehand. GPUs have no recursion; they should do fast, simple math, independently of the data. Let's say: simply compare material, count centralized pieces, etc. Checkmate verification would be too slow.
So it is probably better not to try to split movegen or the like. Let GPUs do their job in the PyTorch / TensorFlow part only, calculating matrices of node weights.
Simply speaking, write everything as tensors (matrices) and it will be processed in parallel on GPUs or TPUs. At least, it will try. A GPU is designed for simple math on numbers (pixels, vertices), not decisions (ifs) on decisions on (recursive) decisions where the arithmetic is the smallest part of the calculation. That is a rough explanation of why the CPU still does much of the work.
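Here is the shape of a GPU-friendly evaluation, sketched in plain Python: each position becomes a vector of piece counts, the evaluation is a branch-free dot product, and a whole batch is the same math repeated (on a GPU, a single matrix multiply). The piece values are common centipawn defaults, not anything official.

```python
# Net piece counts per position: (P, N, B, R, Q), white minus black.
PIECE_VALUES = [100, 320, 330, 500, 900]   # centipawns, common defaults

def material(counts):
    """Evaluation of one position: dot(counts, PIECE_VALUES). No branches."""
    return sum(c * v for c, v in zip(counts, PIECE_VALUES))

def material_batch(batch):
    """Same math over a whole batch -- on a GPU this is one matmul."""
    return [material(counts) for counts in batch]
```

Being up a pawn scores +100; a queen for a rook scores +400. The point is the formulation: uniform arithmetic over arrays, no per-position ifs, which is exactly what tensors parallelize well.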

7. The evaluation function of a new NN-based AI can be correlated with statistics of openings played by chess masters and the results of those games, so there is little need for Stockfish-style evaluation there. Lc0 only chooses left or right and counts wins; it does not need evaluation decimals. By the way, a model's outputs can perhaps be transferred to newer, better models.

8. Ignore chess redundancies related to human play. Ignore the chess clock, the 50-move rule (real masters do not play such dumb chess), and the "oops! omg" touched pieces that must then be moved somewhere. The chess960 case as well. I suggest always promoting a pawn to a queen and ignoring underpromotion to rook, bishop, or knight. The move number is also irrelevant: it carries no information worth keeping up to date. Simply do not think about it. The main task is to find the best move (or sequence of variants) for a given chess position, and an NN can certainly assist here.
Also, I think the endgame can be a different model, because the approach differs. Maybe split the NN into three independent models: opening (ready databases), middlegame (lots of pieces and much to calculate, the battle), endgame (lookups, if fewer than 7 pieces remain). Absolutely. Make it today and it will be super-strong tomorrow. Just ideas, why not. Nvidia cares about tomorrow; the stock goes up today.
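The three-model split can be sketched as a small dispatcher. Everything here is hypothetical (the model names, the `in_book` flag); it only shows that routing a position costs almost nothing.

```python
def pick_model(piece_count: int, in_book: bool) -> str:
    """Route a position to the right component of the split engine.

    `in_book` -- position still found in the opening database;
    `piece_count` -- total pieces on the board, tablebases cover <= 7.
    """
    if in_book:
        return "opening-book"
    if piece_count <= 7:
        return "tablebase"
    return "middlegame-nn"
```

The middlegame network never has to waste capacity on theory that is already tabulated at both ends of the game.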

9. Also, the old binary file formats were designed for PCs running on 8 MB of RAM, or even less. Today it is possible to operate in a very large memory space. I mean, there is no need to encode and decode anything; simply, why? The web is very fast now, too. Keep the data plain and simple, visible in text files. Dump (flush) large memory regions to disk; there is no need to iterate the old filesystem in small chunks to save anything at all. Python does lots of hashing internally anyway; there is no gain in counting bytes. Some games with extensive graphics require more than 100 GB to install nowadays.
Bit-squeezed records may still be reasonable for extremely large endgame tables, but an AI should play a good game until 7 pieces are left anyway, so we can ignore that. If an NN gets the best results in the opening and middlegame, we may call it the strongest ever. After the middlegame, when half the pieces are off the board, things become less CPU-demanding, and one can always compensate with an existing chess engine that plays at a decent level anyway.
Playing until checkmate is boring and merely a technicality for real chess players. They are interested in gaining an advantage that is obvious and clearly visible to anyone taking a look at the position on the board. In most cases the game is over at that point: the chess clock is stopped, the engine is turned off, nothing more of interest. By the way, chess engines do not search for checkmate as the most important goal; sometimes a checkmate just happens as a by-product.
I mean, there is too much legacy everywhere on the web. Intel stock goes down, sorry. Let's build a database.
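"Plain and simple, visible in text files" can look like this: one JSON record per line, written in a single dump and read back without any custom binary decoding. The field names and values here are made up for illustration.

```python
import json
import os
import tempfile

# Hypothetical analysis records -- one human-readable line per position.
positions = [
    {"fen": "startpos", "eval": 0.2},
    {"fen": "after e2e4", "eval": 0.3},
]

path = os.path.join(tempfile.mkdtemp(), "evals.txt")

# One dump of the whole in-memory structure, no per-record encoding.
with open(path, "w") as f:
    f.write("\n".join(json.dumps(p) for p in positions))

# Reading back is just as plain: one json.loads per line.
with open(path) as f:
    loaded = [json.loads(line) for line in f]
```

Anyone can open such a file in a text editor, grep it, or stream it line by line; no format documentation needed.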

10. Get all the ready-made data from Lichess.org; most of it was effectively donated by fishnet users. Chess lovers have spent money for lichess.org's cause, as I see it. NN training will require lots of data; otherwise you will get a shoddy NN. To be honest, ChessBase sells its database products at almost the price of a new PC. Lichess, at least, gives away large files of games for free. So that's it: the real value of TensorFlow for chess.

11. Speaking of Pythons: for Windows users there are DLL libraries that do the same on a local PC, and can even run Stockfish position evaluation right in Python. Go to the git repository.

12. Good luck.

